Skip navigation.

Feed aggregator

Simple Ways to Simplify: Quick Fixes to Enhance OBIEE Visuals

Rittman Mead Consulting - Tue, 2014-07-08 13:18

Over the past couple of months I worked with a few clients to conduct an assessment of dashboard and analysis design. In most cases I noticed that the dashboards were overflowing with content to the point that there was really no discernible “story” or “flow” to the information. Most of the dashboards were an overwhelming hub of large tables, excess dashboard prompts, and complicated charts and graphs. Many times the client complaints were that the dashboards were not being fully adopted by the user base. The users knew that something was in fact wrong with what they were looking at, but just couldn’t put their finger on why the analysis process was so frustrating. While this may seem unavoidable during a time when you are trying to provide your users with what they want and “need,” there are a few simple ways to alleviate this issue. During the development process, simplicity is often overlooked and undervalued, and that is why a few key design principles can drastically improve the user experience.

In many cases that I’ve seen, all dashboards start with the good intention of quickly relaying information to the user. However, as the requirements process grows in length, more and more compromises are made to clean, consistent, functional design. Many of the dashboards in my assessments often hand the user everything they ask for without ever questioning the value. Developers often settle on poor design just to have something accepted by their users and released into production.

While each client is unique and has their own set of issues to resolve, there are a few consistent principles of dashboard design that can be applied for everyone.

Top-Down Analysis—The practice of allowing users to drill from summary to detail gives your user community the ability to make their dashboards as dynamic as they need to be. This method will never give the user too much, or too little, information. With detail being a choice, not the default, the user can log in and choose to drill into further detail, rather than being presented everything all at once. While this is a widely accepted best practice when it comes to dashboard design, this principle is ignored more than you would think. Odds are that your VP does not need to see 50 rows of detail-level information in a table every time they view their dashboard. The primary purpose of presenting this detail level is to answer a very important question that should be posed by analyzing your data at the summary level. The user can examine something simple like a trend analysis, and then decide whether further examination is needed. The benefit of this common principle is that your user is never overwhelmed and irritated with an overload of information. The dashboard should be treated with the same care and consideration you would give a product. Your organization’s dashboard (product) should be a joy to use, not an experience coupled with frustration.

[Image: top down analysis]

Simplicity and Trimming the Fat—This is another very simple, yet often ignored, design principle. From my observation, many developers will create a chart or graph and leave all the default settings; no modifications needed, right? While there is nothing inherently wrong with this, the defaults will leave a lot of extra pixels on the graph, such as unneeded axis titles, shadowing, canvas size, etc. With just a little effort here, you can remove all of these unnecessary data pixels (pictured below) and provide a more professional, clean design. The point that I’m trying to make here is, let’s not get lazy with our visualizations. Instead, we should give a lot of thought to what is useful in the visualization, and what can be discarded without hindering the message. The less cluttered we make our visualizations, the more pleasant the user experience will be. And as we all know, the happier our user community is, the easier our lives as developers will be.

[Image: default to flat]

Options for Option’s Sake—The types of visualizations used for a dashboard are one of the most important criteria for user adoption. Choosing visualizations that do not adequately display the type of analysis needed, or do not tell the correct story about the data, can be a frustrating waste of time for the user. Just because there are a lot of available graphing options in your analytical tool does not mean they need to be used. This mistake is often made in an attempt to visually enhance the dashboard, or to add “variety”. Try to consider things like what type of scale is in my graph (nominal or interval, pictured below), or do I want to provide summary or detail? Be sure to choose your graph based on these factors, rather than picking a graph that you think will add variety and then figuring out how you can make it useful.

[Image: interval graphs]

Visually appealing dashboards are important; however, this is only relevant when the graphs are enhancing the user’s analytical experience. These mistakes are very costly because the overall goal of a dashboard is not to provide variety for variety’s sake, but to quickly and accurately relay a message. By focusing only on visualization variety, we run a terrible risk of rendering a dashboard useless.

There are a lot of great resources out there that provide more detail and can surely take your dashboards to the next level, so I certainly suggest reading up on information design methodology. I think the principles I’ve listed above are a great way to get started and provide some quick fixes on the road to enhancing the user experience within your organization.


Categories: BI & Warehousing

PeopleTools version end of support/life and PeopleTools support for applications

PeopleSoft Technology Blog - Tue, 2014-07-08 12:53

Following on from the CPU Analysis posting, and in response to the frequent enquiries we get regarding the end of support/life for a particular PeopleTools version and which PeopleSoft application versions each PeopleTools version supports, here is some background you can use for your own research and upgrade planning.

Oracle Support Document 1272860.1 (E-UPG How to Determine End of Support or End of Life of a PeopleTools or PeopleSoft Application) can be found at:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=1272860.1


PeopleTools releases are designated as follows - 8.xx.yy
-  xx is the minor release version
-  yy is the patch version

PeopleTools versions are patched until 12 months after the GA of the next minor release version.
CPU (security) patches/fixes are provided for a further 12 months.

For example:
General Availability of PeopleTools 8.52 - Friday Oct 28, 2011
General Availability of PeopleTools 8.53 - Wednesday Feb 06, 2013

PeopleTools 8.52 was patched until 12 months after PeopleTools 8.53 GA - i.e. until Feb 2014
First CPU-only patch release for PeopleTools 8.52 was April 2014
PeopleTools 8.52 support ends Jan 2015 (depending on SV's, alignment with the CPU cycle, and some other circumstances, CPU support may be extended for another cycle, i.e. an additional 3 months)

Please see this document for Maximum PeopleTools releases for particular PeopleSoft versions:
Oracle Support Document 1348959.1 (Lifetime Support Summary for PeopleSoft Releases) can be found at:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=1348959.1

PeopleSoft applications support is defined in Applications Unlimited support - http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf

Here is an example from the Applications Unlimited support document for HRMS (HCM) 9.0

[Image: HCM 9.0 support]

For additional information on Critical Patch Updates, Security Alerts and Third Party Bulletin, see http://www.oracle.com/technetwork/topics/security/alerts-086861.html

--


Oracle Linux and MySQL : Progress

Tim Hall - Tue, 2014-07-08 11:26

A few months ago I wrote about some MySQL on Oracle Linux migrations we were working through. It’s been a long time coming, but last weekend was the go-live for this batch of migrations. So far so good! :)

Most of the elapsed time since my last post on this subject has been spent with the developers and users testing the migrations.

The process has taken a bit longer than some people would have liked. Rather than doing a quick and dirty upgrade, I’ve been pushing to get things done properly. Since I was the person who set up the infrastructure, I’ve been extremely anal about the levels of privilege I’m granting. This has caused some minor rewrites of applications, which were essentially relying on admin privileges to perform some actions. Not any more! :)

I’m no MySQL guru, but I think what we have now is pretty darn decent, especially compared to where we started. I guess time will tell how bold a statement that is. :)

Onwards and upwards…

Cheers

Tim…


Used Delphix to quickly recover ten production tables

Bobby Durrett's DBA Blog - Tue, 2014-07-08 10:24

Yesterday I used Delphix to quickly recover ten production tables that had accidentally been emptied over the weekend. We knew that at a certain time on Saturday the tables were fully populated and that after that some batch processing wrecked them, so we created a new virtual database which was a clone of production as of the date and time just before the problem occurred. We could have accomplished the same task using RMAN to clone production, but Delphix spun up the new copy more quickly than RMAN would have.

The source database is 5.4 terabytes and there were about 50 gigabytes of archive logs that we needed to apply to recover to the needed date and time.  It took about 15 minutes to complete the clone including applying all the redo.  The resulting database occupies only 10 gigabytes of disk storage.

If we had used RMAN we would first have had to add more disk storage because we don’t have a system with enough free space to hold a copy of the needed tablespaces. Then, after waiting for our storage and Unix teams to add the needed storage, we would have had to do the restore and recovery. All these manual steps take time and are prone to human error, but the Delphix system is point and click and done through a graphical user interface (GUI).

Lastly, during the recovery we ran into Oracle bug 7373196, which caused our first attempted recovery to fail with an ORA-00600 [krr_init_lbufs_1] error.  After researching this bug I had to rerun the restore and recovery with the parameter _max_io_size set to 33554432, which is the workaround for the bug.  Had we been using RMAN we probably would have had to run the recovery at least twice to resolve this bug.  Maybe we could have started at the point it failed, but I’m not sure.  With Delphix it was just a matter of setting the _max_io_size parameter and starting from scratch, since I knew the process only took 15 minutes.  Actually it took me two or three attempts to figure out how to set the parameter, but once I figured it out it was so simple I’m not sure why I didn’t do it right the first time.  So, at the end of the day it was just under 3 hours from my first contact about this issue until they had the database up and were able to funnel off the data they needed to resolve the production issue.  Had I been doing an RMAN recover I don’t doubt that I would have worked late into the night yesterday accomplishing the same thing.
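As an aside, on a plain (non-virtual) database the workaround itself is a one-line parameter change. Here is a minimal sketch of it in SQL*Plus - illustrative only, since with Delphix we actually set the parameter through its interface, and hidden parameters should only ever be set under Oracle Support's direction:

alter system set "_max_io_size"=33554432 scope=spfile;  -- workaround for bug 7373196
shutdown immediate
startup
-- then rerun the restore and recovery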

- Bobby

P.S. These databases are on HP-UX 11.31 on IA64, Oracle version 11.1.0.7.0.

Categories: DBA Blogs

Contribution by Angela Golla

Oracle Infogram - Tue, 2014-07-08 10:13
Contribution by Angela Golla, Infogram Deputy Editor

Get Proactive with Oracle Database
Check out Note:1389167.1 for Oracle Database. It contains a wealth of links to important topics such as: Oracle Database Resource Portfolio, Oracle Recommended Patches, RAC and DB Support Tools Bundle, Database Diagnostic Tools, and Upgrade Advisor.

Oracle WebCenter Mobile Development Skillsets

WebCenter Team - Tue, 2014-07-08 09:32

By Mitchell Palski, Oracle WebCenter Sales Consultant

The Importance of Enterprise Mobility
Enterprise mobility is a growing area of interest for all organizations – public sector and commercial – mainly because of the widespread use of mobile devices. A majority of users have mobile access to the web and an ever-growing percentage of those users depend on that capability to successfully perform their day-to-day responsibilities. Rather than combat this trend, the burden is on IT development teams to develop user interfaces that enhance the productivity of their workforce and encourage user participation through mobile devices. I wrote a blog in April 2014 called “The Evolution of Enterprise Content in the Mobile Era” in which I talked about the enterprise benefits of mobile access to content. Aside from the benefits to end users, I also noted that organizations can analyze usage analytics from personal devices to gather information about their mobile workforce. The point is this: enterprise mobility isn’t just important to end users’ satisfaction, it’s also important to an organization’s operational awareness.

Developing a Mobile Interface with Oracle WebCenter Portal
Oracle WebCenter Portal is a Web platform that allows organizations to quickly and easily create intranets, extranets, composite applications, and self-service portals. It provides users a more secure and efficient way of consuming information and interacting with applications, processes, and other users, and provides IT with a comprehensive and flexible solution for quickly building portals, websites, and composite applications. This common user experience architecture is based on ADF and combines run-time and design-time customization of applications in one.

Oracle WebCenter Portal supports enterprise mobility through several development techniques:
  • Responsive Design – develop an interface that adapts the layout of a website automatically based on the dimensions of the device viewing that site.
  • Device Settings and Page Variants – control how a Portal renders on specific devices or groups of devices.
  • Mobile Applications – provide users with native applications for their iOS and Android devices.
The rest of this blog will be dedicated to explaining the differences between these three techniques, as well as the skillsets that your staff will require to use them.
Responsive Design
Responsive design1 is a client-side strategy that depends on CSS media queries to carry out the responsiveness. Oracle WebCenter Portal is based on the Oracle Application Development Framework (ADF), whose user interface components (rich client components) are based on JavaServer Faces (JSF). When developing a responsive Oracle WebCenter Portal user interface, your development team will have to leverage those ADF components to quickly and easily build interactive user interfaces. When building a responsive user interface layout, developers are not limited to using ADF components – they can also leverage the traditional HTML5+CSS3 technique. Here’s how it breaks down:

                 Interactive Components   Page Layout
ADF              Yes                      Yes
HTML5+CSS3       No                       Yes

What it comes down to is this:

  • Oracle WebCenter Portal comes out-of-the-box with a plethora of UI components that can be dragged and dropped onto a page. No ADF knowledge is needed to accomplish this.
  • ADF is used for any UI component that interacts with Oracle WebCenter services. This includes anything from an Event Calendar to an Administration link.
  • ADF, HTML5, or a hybrid of the two, can all be used to design the layout of your Portal.
The only other note I would like to make here is that many Oracle WebCenter Portal customers prefer to change the out-of-the-box look and feel of ADF components. Those components generate HTML on the client side and assign unique CSS classes to that HTML. The styles associated with those classes can be altered by using ADF skin selectors2 in the Portal skin.
Oracle recommends the use of JDeveloper to develop page templates and skins for Oracle WebCenter Portal. In JDeveloper, you can build new templates and skins from scratch or refine and further develop existing ones that come with Oracle WebCenter Portal.

Page Variants
Oracle WebCenter Portal includes the capability to recognize which type of device a given request comes from, and to render the portal properly on that device. Portal administrators can use device settings to specify which page templates and skins to associate with specific devices or classes of devices. In addition, administrators can create and edit page variants – alternative pages designed to display on specific groups of devices.

When it comes to developing the actual page templates and skins, the same skillsets described above apply. However, there are two categories of additional skills that Portal developers and administrators should learn; both are specific to Oracle WebCenter Portal. Managing device groups allows an administrator to assign specific page templates and skins to device types. The value of this feature is realized by standardizing the look and feel of a portal across devices within the same group. For example, it may be beneficial to replace flashy image-filled backgrounds with CSS3 gradients to improve page load times.
The advantage of using page variants is that you aren’t just altering the layout of the page based on a device’s dimensions – you are actually providing an alternate user experience. You are also controlling what content is actually being displayed on that page. You may want to completely re-structure the way that your navigation renders, or which Business Intelligence reports show up on the home page, or provide links that are more useful to mobile workers rather than those in the office. Responsive design can be incorporated into this technique, but the real value in using page variants comes from defining mobile users’ goals and tailoring the interface to optimize their experience.
Mobile Applications for Oracle WebCenter
Oracle ADF Mobile enables developers to build and extend enterprise applications for iOS and Android from a single code base. Based on a hybrid mobile architecture, ADF Mobile supports access to native device services, enables offline applications and protects enterprise investments from future technology shifts.
The Java language is used for developing the business logic in Oracle ADF Mobile applications – a fairly commonplace skillset. This makes mobile app development easy for most organizations because it doesn’t require their Java developers to learn any new programming languages. The Oracle Fusion Middleware stack has a set of APIs for all products, including Oracle WebCenter. These APIs can be used to access Oracle WebCenter security, to display Oracle WebCenter services (i.e. People Connections, announcements, events, etc.), to render content from the Content Repository, and to perform many other Oracle WebCenter-related actions. Local device services such as camera, phone, SMS, and GPS can also be accessed through the Apache Cordova platform. ADF Mobile applications can authenticate against a remote login server and then make the appropriate tokens accessible for further web service calls to data sources.
For developers that are already familiar with developing with Oracle Application Development Framework (ADF), the transition to using ADF Mobile will be even easier. Developers can still expose Java classes and web services as “data controls”. JDeveloper uses a declarative binding layer and drag-and-drop technology to create forms, lists, charts, and other data visualizations from an application’s data controls. Developers that are already accustomed to building interfaces using these declarative technologies will find ADF Mobile easy to use, especially considering that the ADF Mobile components are already designed for mobile devices, allow for additional customization through CSS3, and support touch gestures.
Conclusion
Why is Enterprise Mobility Important?
  • More and more users depend on web capabilities to successfully perform their day-to-day responsibilities 
  • Encouraging user engagement through mobile devices can enhance the productivity of your mobile workforce 
  • Organizations can analyze usage analytics from personal devices to gather information about their mobile workforce
What options does Oracle WebCenter Portal provide for Delivering Mobile Engagement?
  • Responsive design in page templates and skins
  • Apply layouts and skins to the UX for specific devices and device-groups
  • Develop a mobile application using ADF Mobile
What skillsets are needed by the development staff to build this mobile experience?

Mobile Methodology   Adaptive Layouts   Device-specific User Experiences   Works Offline   ADF Skill-Level   HTML5+CSS3 Skill-Level
Responsive Design    Yes                No                                 No              Minimal           Expert
Device Settings      Yes                Yes                                No              Minimal           Proficient
Mobile App           Yes                No                                 Yes             Expert            Proficient
At the end of the day, there is no substitute for hands-on training and reading the Oracle Documentation. For more guidance on this subject, reach out to your local Oracle representative and open a discussion!

____________________________________________________________________________________________________________________

1 Building a Responsive WebCenter Portal Application, April 2014, by JayJay Zheng
2 ADF-WebCenter Responsive-Adaptive Design Beyond, by Martin Deh

Passed the 11g RAC and Grid Expert Exam

Hemant K Chitale - Tue, 2014-07-08 09:08
I passed the 11g RAC and Grid Expert Exam yesterday.
.
For those who are interested :

You must absolutely read the documentation on ASM, Grid Infrastructure and RAC. 

I also recommend 3 books 
1) Pro Oracle Database 11g RAC on Linux -- by Steve Shaw and Martin Bach [Apress Publishing]
2) Oracle 11g R1/R2 Real Application Clusters Essentials -- by Ben Prusinsky and Syed Jaffer Hussain [Packt Publishing] 
OR 
2) Oracle 11g R1/R2 Real Application Clusters Handbook -- by Ben Prusinsky, Guenad Jilveski and Syed Jaffer Hussain [Packt Publishing] 
3) Oracle Database 11g Release 2 High Availability -- by Scott Jesse, Bill Burton and Bryan Vongray [Oracle Press] 

The 11gR2 Grid and RAC Accelerated training at Oracle University is also recommended but expensive.
.
.
.
Categories: DBA Blogs

ADF BC 12c New Feature - Entity-Level Triggers

Andrejus Baranovski - Tue, 2014-07-08 08:13
We have trigger support in ADF 12c! A powerful new feature is available - Entity-Level triggers. Previously it was often confusing whether a certain use case belonged in a validation rule or was more generic business logic. Everything was implemented as part of validation rules in the EO. ADF 12c provides a cleaner approach by supporting Entity-Level triggers along with regular validation rules. Validation logic can be implemented as part of a validation rule; non-validation logic (but still dependent on the data management lifecycle) can be implemented as part of the new Entity-Level triggers.

The ADF 12c EO wizard offers a new section - Entity-Level Triggers. Here you can define different triggers, for example - before update, before commit, after insert, before delete, etc.:


This means you can inject any custom code to be invoked by the framework during certain events in the ADF BC lifecycle.

The wizard allows you to select the trigger type and define a Groovy expression; here you can call any custom method from the EO Impl class. However, there is one trick related to expression execution in untrusted mode - I will describe it below (thanks to Steve Muench for the hint):


The custom method is defined in the EO Impl class, as you can see here:


If you simply define a trigger and try to test it, you will get an error about a non-permitted method call:


The trigger expression by default runs in untrusted mode, which means your custom method must be annotated with @AllowUntrustedScriptAccess. If you don't want to annotate it, you can change the trust mode to trusted for the expression. By default the trust mode is set to untrusted:


Change it to trusted mode:


The trigger should work just fine now. Try to change data and save:


There are two triggers defined - before update and before commit. Both of these triggers are invoked successfully, as you can see from the printed log (right before update and before commit):


Download sample application - ADF12cApp.zip.

Making it Easier to Graph Your Infrastructure’s Performance Data

Pythian Group - Tue, 2014-07-08 07:46

Today I would like to share a story with you about the development of a Puppet module for Grafana and its release on the Puppet Forge. But first, I’d like to provide some context so you can understand where Grafana fits in and why I feel this is important.

Those of you that know me have likely heard me talk about the importance of data-driven decision making, and more specifically some of the tools that can be used to help enable individuals to make smart decisions about their IT infrastructure. A common approach is to deploy a graphing system such as Graphite, which stores performance data about your infrastructure to aid you in performing a number of functions including problem diagnosis, performance trending, capacity planning, and data analytics.

If you are unfamiliar with the software, I’ll briefly describe its architecture. Graphite consists of a daemon, called carbon, which listens for time series data and writes it to a fixed-size database called whisper. It also provides a web application to expose the data and allow the user to create and display graphs on demand using a powerful API.

While Graphite does a good job of storing time series data and providing a rich API for visualizing it, one of the things it does not really focus on is providing a dashboard for the data. Thankfully we have Grafana to fill this role and it happens to do it quite well.

If you have ever worked with the ELK stack (Elasticsearch, Logstash, and Kibana) before, Grafana’s interface should be familiar to you, as it is based on Kibana. It is a frontend for Graphite or InfluxDB, and runs as a client side application in your browser. Its only (optional) external dependency is Elasticsearch, as it can use it to store, load and search for dashboards.

Below are some of Grafana’s most notable features (see its feature highlights for a more comprehensive list):

  • Dashboard search
  • Templated dashboards
  • Save / load from Elasticsearch and / or JSON file
  • Quickly add functions (search, typeahead)
  • Direct link to Graphite function documentation
  • Graph annotation
  • Multiple Graphite or InfluxDB data sources
  • Ability to switch between data sources
  • Show graphs from different data sources on the same dashboard

We like to make use of IT automation software whenever possible to deploy tools for our clients. Most tools already have Puppet modules or Chef cookbooks available for them, including the other components of the graphing system: Graphite itself, and a great Python-based collector named Diamond. Grafana, however, had no Puppet module available so I decided to rectify the situation by creating one and publishing it to the Puppet Forge.

The module would be pretty simple: all that is required is to download and extract Grafana into an installation directory, and ensure appropriate values for the Elasticsearch, Graphite and InfluxDB servers / data sources are inserted into its configuration.

I decided to offload the work of downloading and extracting the software to another module, namely gini/archive. And managing the configuration file, config.js, would be done with a combination of module parameters and ERB template.

The only real complication arose when it came time to test serving Grafana with a web server such as Apache or Nginx. I decided not to have my module manage the web server in any way, so I would leverage Puppet Labs’ own Apache module for this purpose.

My test environment consisted of a CentOS virtual machine provisioned by Vagrant and Puppet, with Graphite and Grafana on the same server. I decided to use Daniel Werdermann’s module to deploy Graphite on my virtual machine as it had worked well for me in the past.

I quickly ran into problems with duplicate resources, however, due to the Graphite module managing Apache for creation of its virtual host etc. I moved to separate virtual machines for Graphite and Grafana, and that made my life easier. If you do decide to run both pieces of software on the same server, and are also using Daniel’s module, you can work around the problem by setting gr_web_server to ‘none’ like this:

class { 'graphite':
  gr_web_server              => 'none',
  gr_web_cors_allow_from_all => true,
}

Since my module does not manage Apache (or Nginx), it is necessary to add something like the following to your node’s manifest to create a virtual host for Grafana:

# Grafana is to be served by Apache
class { 'apache':
  default_vhost   => false,
}

# Create Apache virtual host
apache::vhost { 'grafana.example.com':
  servername      => 'grafana.example.com',
  port            => 80,
  docroot         => '/opt/grafana',
  error_log_file  => 'grafana-error.log',
  access_log_file => 'grafana-access.log',
  directories     => [
    {
      path            => '/opt/grafana',
      options         => [ 'None' ],
      allow           => 'from All',
      allow_override  => [ 'None' ],
      order           => 'Allow,Deny',
    }
  ]
}

And the Grafana declaration itself:

class { 'grafana':
  elasticsearch_host  => 'elasticsearch.example.com',
  graphite_host       => 'graphite.example.com',
}

Now that my module was working, it was time to publish it to the Puppet Forge. I converted my Modulefile to metadata.json, added a .travis.yml file to my repository and enabled integration with Travis CI, built the module and uploaded it to the Forge.

Since its initial release, I have updated the module to deploy Grafana version 1.6.1 by default, including updating the content of the config.js ERB template, and have added support for InfluxDB. I am pretty happy with the module and hope that you find it useful.

I do have plans to add more capabilities to the module, including support of more of Grafana’s configuration file settings, having the module manage the web server’s configuration similar to how Daniel’s module does it, and adding a stronger test suite so I can ensure compatibility with more operating systems and Ruby / Puppet combinations.

I welcome any questions, suggestions, bug reports and / or pull requests you may have. Thanks for your time and interest!

Project page: https://github.com/bfraser/puppet-grafana
Puppet Forge URL: https://forge.puppetlabs.com/bfraser/grafana

Categories: DBA Blogs

C program to dump shared memory segments to disk on Linux.

ContractOracle - Tue, 2014-07-08 01:26
The following program was written to help investigate Oracle database shared memory on Linux.  It dumps the contents of existing shared memory segments to files on disk.  Note that it won't work against Oracle 11g and 12c databases as they use mmap instead of shmat for managing shared memory.  A sample program for reading from 11g and 12c is here (mmap example).

Compile it using "gcc -o shared shared.c".  It is free for anyone to copy or modify as they wish, but I do not guarantee the functionality.

#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>

int main(int argc, char *argv[]) {
    int maxkey, id, shmid = 0;
    struct shm_info shm_info;
    struct shmid_ds shmds;
    void *shared_data;
    FILE *outfile;

    /* SHM_INFO returns the index of the highest used entry in the
       kernel's shared memory segment table. */
    maxkey = shmctl(0, SHM_INFO, (struct shmid_ds *) &shm_info);
    for (id = 0; id <= maxkey; id++) {
        /* SHM_STAT translates a table index into a segment ID. */
        shmid = shmctl(id, SHM_STAT, &shmds);
        if (shmid < 0)
            continue;
        char shmidchar[16];
        snprintf(shmidchar, sizeof(shmidchar), "%d", shmid);
        if (shmds.shm_segsz > 0) {
            printf("Shared memory segment %s found.\n", shmidchar);
            /* Attach read-only; shmat returns (void *) -1 on failure. */
            shared_data = shmat(shmid, NULL, SHM_RDONLY);
            if (shared_data != (void *) -1) {
                outfile = fopen(shmidchar, "wb");
                if (outfile == NULL) {
                    printf("Could not open file %s for writing.", shmidchar);
                }
                else {
                    /* Dump the whole segment to a file named after its ID. */
                    fwrite(shared_data, shmds.shm_segsz, 1, outfile);
                    fclose(outfile);
                    printf("Dumped to file %s\n\n", shmidchar);
                }
                shmdt(shared_data);
            }
        }
    }
    return 0;
}


Categories: DBA Blogs

It was 12 years ago today…

Richard Foote - Tue, 2014-07-08 01:07
It was exactly 12 years ago today that I first presented my Index Internals – Rebuilding The Truth presentation at a local ACT Oracle User Group event. And so my association with Oracle indexes started. It would be an interesting statistic to know how many people have subsequently read the presentation :) It would no doubt result in […]
Categories: DBA Blogs

Instructure’s CTO Joel Dehlin Abruptly Resigns

Michael Feldstein - Mon, 2014-07-07 16:32

One week after the conclusion of Instructure’s Users’ Conference, CTO Joel Dehlin abruptly resigned from the company for a new job. Joel took the CTO job with Instructure in summer 2013, around the same time as Devlin Daley’s departure (Devlin was co-founder). Joel’s resignation comes as a surprise, especially given his prominent placement as the technology lead for the Canvas LMS. As recently as InstructureCon on June 27th, Joel gave the product update presentation.

The change became apparent by viewing the new Instructure leadership page (nice page design, btw), as I noticed that Joel was not included. I contacted Devin Knighton, Director of Public Relations for Instructure, who confirmed that the resignation was unexpected and was Joel’s decision. I am not sure how significant this resignation is for the company. What we do know is that Joel has not been replaced as CTO, but that Jared Stein (VP of Research and Education), Trey Bean (VP of Product), David Burggraaf (VP of Engineering), and Zach Willy (Chief Architect) will cover the CTO responsibilities in the near term. I would have more details, but Devin is on family vacation, and I did not want to push for him to send me an official email.

We’ll keep you posted if we find out more information (assuming it is newsworthy).

Update: Corrected second paragraph on VP of Product and VP of Engineering per Devin Knighton comment below.

The post Instructure’s CTO Joel Dehlin Abruptly Resigns appeared first on e-Literate.

George EP Box

Greg Pavlik - Mon, 2014-07-07 15:22
"Essentially, all models are wrong. Some models are useful."

Designing a Naturally Conversational User Experience for the User Interface

Usable Apps - Mon, 2014-07-07 14:03

By Georgia Price and Karen Scipi

Think about the software applications you like most. Why do you like them? How do they make you feel? What is your experience like when you use them? The most successful user interfaces—those that delight users—focus equally on the intersection of visual, interaction, and language design.

Visual and interaction design get a lot of play in the enterprise software development environment. Yet language design directly impacts a user’s ability to complete tasks. The use and arrangement of general words, specialized terms, and phrases on the UI promote a naturally conversational voice and tone and inform and induce user actions.

Simply put, the words, terms, and phrases that we promote on a UI either facilitate or hinder the user experience and either delight or frustrate the user.

As Oracle Applications User Experience language designers, we took this message on the road last month as featured speakers at the Society for Technical Communication Summit, where we presented two papers: Designing Effective User Interface Content and The Unadorned Truth About Terminology Management: Initiatives, Practices, and Melodrama.

[Image: Society for Technical Communication Summit logo]

If attendance is any indication, our message resonated with many. More than 115 people gathered to hear us talk about how designing language for the UI is just as important when building effective, simplified user experiences as creating the right interactions and choosing the right images, icons, colors, and fonts. Dozens lined up after our talks to ask questions and to learn more, making us realize that many others who build software applications are also grappling with how to design language to enable more simplified user experiences.

Perhaps we can pique your interest! Over the coming weeks, we'll share our thoughts and experiences on language design. Stay tuned to the Usable Apps blog to learn more about what language design is and how we use words, terms, and phrases, as well as voice and tone, to help build simplified user experiences and easy-to-understand UIs.

KeePass 2.27 Released

Tim Hall - Mon, 2014-07-07 13:41

I just noticed that KeePass 2.27 has been released.

I was introduced to KeePass at my current job and now I use it for everything at home too. You can read how I use KeePass here.

Happy upgrading…

Cheers

Tim…


Pro-active AWR Data Mining to Find Change in SQL Execution Plan

Pythian Group - Mon, 2014-07-07 11:11

Many times we have been called in for the poor performance of a database, and it has been narrowed down to a SQL statement. Subsequent analysis has shown that the execution plan had changed and a wrong execution plan was being used.

Normally, the resolution is to fix the execution plan - in 11g, by running

variable x number
begin
:x :=
    dbms_spm.load_plans_from_cursor_cache(
    sql_id=>'&sql_id',
    plan_hash_value=>&plan_hash,
    fixed=>'YES');
end;
/

or, for 10g, a SQL_PROFILE is created as mentioned in Carlos Sierra’s blog.

A pro-active approach can be to mine AWR data for any SQL execution plan changes.

The following query on dba_hist_sqlstat can retrieve the list of SQL IDs whose plans have changed. It orders the SQL IDs so that those for which the maximum gains can be achieved by fixing the plan are listed first.

 
spool sql_with_more_than_1plan.txt
set lines 220 pages 9999 trimspool on
set numformat 999,999,999
column plan_hash_value format 99999999999999
column min_snap format 999999
column max_snap format 999999
column min_avg_ela format 999,999,999,999,999
column avg_ela format 999,999,999,999,999
column ela_gain format 999,999,999,999,999
select sql_id,
       min(min_snap_id) min_snap,
       max(max_snap_id) max_snap,
       max(decode(rw_num,1,plan_hash_value)) plan_hash_value,
       max(decode(rw_num,1,avg_ela)) min_avg_ela,
       avg(avg_ela) avg_ela,
       avg(avg_ela) - max(decode(rw_num,1,avg_ela)) ela_gain,
       -- max(decode(rw_num,1,avg_buffer_gets)) min_avg_buf_gets,
       -- avg(avg_buffer_gets) avg_buf_gets,
       max(decode(rw_num,1,sum_exec))-1 min_exec,
       avg(sum_exec)-1 avg_exec
from (
  select sql_id, plan_hash_value, avg_buffer_gets, avg_ela, sum_exec,
         row_number() over (partition by sql_id order by avg_ela) rw_num , min_snap_id, max_snap_id
  from
  (
    select sql_id, plan_hash_value , sum(BUFFER_GETS_DELTA)/(sum(executions_delta)+1) avg_buffer_gets,
    sum(elapsed_time_delta)/(sum(executions_delta)+1) avg_ela, sum(executions_delta)+1 sum_exec,
    min(snap_id) min_snap_id, max(snap_id) max_snap_id
    from dba_hist_sqlstat a
    where exists  (
       select sql_id from dba_hist_sqlstat b where a.sql_id = b.sql_id
         and  a.plan_hash_value != b.plan_hash_value
         and  b.plan_hash_value > 0)
    and plan_hash_value > 0
    group by sql_id, plan_hash_value
    order by sql_id, avg_ela
  )
  order by sql_id, avg_ela
  )
group by sql_id
having max(decode(rw_num,1,sum_exec)) > 1
order by 7 desc
/
spool off
clear columns
set numformat 9999999999

The sample output for this query will look like this:

SQL_ID        MIN_SNAP MAX_SNAP PLAN_HASH_VALUE          MIN_AVG_ELA              AVG_ELA             ELA_GAIN     MIN_EXEC     AVG_EXEC
------------- -------- -------- --------------- -------------------- -------------------- -------------------- ------------ ------------
ba42qdzhu5jb0    65017    67129      2819751536       11,055,899,019       90,136,403,552       79,080,504,532           12            4
2zm7y3tvqygx5    65024    67132       362220407       14,438,575,143       34,350,482,006       19,911,906,864            1            3
74j7px7k16p6q    65029    67134      1695658241       24,049,644,247       30,035,372,306        5,985,728,059           14            7
dz243qq1wft49    65030    67134      3498253836        1,703,657,774        7,249,309,870        5,545,652,097            1            2

MIN_SNAP and MAX_SNAP are the minimum/maximum snap id where the SQL statement occurs

PLAN_HASH_VALUE is the hash_value of the plan with the best elapsed time

ELA_GAIN is the estimated improvement in elapsed time by using this plan compared to the average execution time.

Using the output of the above query, SQL execution plans can be fixed after proper testing. This method can help DBAs pinpoint and resolve problems with SQL execution plans faster.
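For completeness, a baseline loaded and fixed with the 11g SPM block shown earlier can be verified with a query along these lines (a minimal sketch; adjust the time window as needed):

-- illustrative check: recently created SQL plan baselines and their state
select sql_handle, plan_name, enabled, accepted, fixed, created
from   dba_sql_plan_baselines
where  created > sysdate - 1
order  by created;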

Categories: DBA Blogs

Salt Stack for Remote Parallel Execution of Commands

Pythian Group - Mon, 2014-07-07 11:08

There are many scenarios when a SysAdmin has to do a “box walk” of the entire infrastructure to execute a command across many servers. This is universally accepted as one of the less glamorous parts of our job. The larger the infrastructure, the longer these box walks take, and the greater chance that human error will occur.

Even giving this task to a junior resource, as is often the case, is not sustainable as the infrastructure grows, and does not represent the best value to the business in terms of resource utilization. Additionally, too much of this type of “grind” work can demoralize even the most enthusiastic team member.

Thankfully, the days of having to do these box walks are over. Thanks to configuration management and infrastructure automation tools, the task has been automated and no longer requires the investment of time by a human SysAdmin that it once did. These tools allow you, at a very high level, to offload this repetitive work, with the computer doing the heavy lifting for you.

 

Introducing Salt Stack

Salt Stack is a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria. Salt Stack is also a configuration management system in its own right, but this post will be focusing on Salt from a “Command and Control” point of view.

Salt has 2 main components: the “salt master” (server) and the “salt minions” (clients). Once the minions are accepted by the master, further execution of commands can come directly from the central salt master server.

Once you have installed your packages, the minion needs to be configured to know where its master is. This can be accomplished through a DNS or hosts-file entry, or by setting the variable in the /etc/salt/minion config.


master: XXX.XXX.XXX.XXX

Where “XXX.XXX.XXX.XXX” is the IP address of your master server. Once that is done and the salt-minion service has been started, the minion will generate and ship an SSL key back to the master to ensure all communication is secure.

The master must accept the key from the minion before any control can begin.


# Listing the Keys

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Rejected Keys:

# Adding The Key

[root@ip-10-154-193-216 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Proceed? [n/Y] y
Key for minion ip-10-136-76-163.ec2.internal accepted.

# Nailed It! Now the Master can control the Minion!

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
ip-10-136-76-163.ec2.internal
Unaccepted Keys:
Rejected Keys:

Note: Not Shown – I added a 2nd Minion

Now that your master has minions, the fun begins. From your master you can now query information from your minions, such as disk space:


[root@ip-10-154-193-216 ~]# salt '*' disk.percent

ip-10-136-76-163.ec2.internal:
----------
/:
15%
/dev/shm:
0%
ip-10-147-240-208.ec2.internal:
----------
/:
14%
/dev/shm:
0%

And you can also execute remote commands such as finding out service status, and restarting services.


[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond status"

ip-10-136-76-163.ec2.internal:
crond (pid 1440) is running...
ip-10-147-240-208.ec2.internal:
crond (pid 1198) is running...

[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond restart"
ip-10-136-76-163.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]
ip-10-147-240-208.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]

These are only the most basic use cases for what Salt Stack can do, but even from these examples it is clear that salt can become a powerful tool which can reduce the potential for human error and increase the efficiency of your SysAdmin Team.

By Implementing Configuration Management and Infrastructure Automation tools such as Salt Stack you can free up the time of your team members to work on higher quality work which delivers more business value.

Salt Stack (depending on your setup) can be deployed in minutes. On RHEL/CentOS/Amazon Linux using the EPEL repo, I was able to be up and running with Salt in about 5 minutes on the 3 nodes I used for the examples in this post. Salt can be deployed using another configuration management tool, it can be baked into your provisioning environment, or into base images. If all else fails, you can (ironically) do a box walk to install the package on your existing servers.

Even if you have another configuration management solution deployed, depending on what you are trying to accomplish, using Salt for parallel command execution rather than the config management system can often prove a much simpler and more lightweight solution.

Salt is also a great choice in tools for giving other teams access to execute commands on a subset of boxes without requiring them to have shell access to all of the servers. This allows those teams to get their job done without the SysAdmin team becoming a bottleneck.

Categories: DBA Blogs

Recurring Conversations: AWR Intervals (Part 1)

Doug Burns - Mon, 2014-07-07 07:36
I've seen plenty of blog posts and discussions over the years about the need to increase the default AWR retention period beyond the default value of 8 days. Experienced Oracle folk understand how useful it is to have a longer history of performance metrics to cover an entire workload period so that we can, for example, compare the acceptable performance of the last month end batch processes to the living hell of the current month end. You'll often hear a suggested minimum of 35-42 days and I could make good arguments for even more history for trending and capacity management.

That subject has been covered well enough, in my opinion. (To pick one example, this post and its comments are around 5 years old.)  Diagnostics Pack customers should almost always increase the default AWR retention period for important systems, even allowing for any additional space required in the SYSAUX tablespace.
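For reference, here's a minimal sketch of that settings change; the 42-day retention below is just one of the commonly suggested values, and both parameters are expressed in minutes:

begin
  dbms_workload_repository.modify_snapshot_settings(
    retention => 60480,  -- 42 days of AWR history
    interval  => 60);    -- keep the default 1 hour snapshot interval
end;
/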

However, I've found myself talking about the best default AWR snapshot *interval* several times over recent months and years and realising that I'm slightly out of step with the prevailing wisdom on the subject, so let's talk about intervals.

I'll kick off by saying that I think people should stick to the default 1 hour interval, rather than the 15 or 30 minute intervals that most of my peers seem to want. Let me explain why.

Initially I was influenced by some of the performance guys working in Oracle and I remember being surprised by their insistence that one hour is a good interval, which is why they picked it. Hold on, though - doesn't everyone know that a 1 hour AWR report smooths out detail too much?

Then I got into some discussions about Adaptive Thresholds and it started to make more sense. If you want to compare performance metrics over time and trigger alerts automatically based on apparently unusual performance events or workload profiles, then comparing specific hours today to specific hours a month ago makes more sense than getting down to 15 minute intervals which would be far too sensitive to subtle changes. Adaptive Thresholds would become barking mad if the interval granularity was too fine. But when nobody used Adaptive Thresholds too much even though they seemed like a good idea (sorry JB ;-)) this argument started to make less sense to me.

However, I still think that there are very solid reasons to stick to 1 hour and they make more sense when you understand all of the metrics and analysis tools at your disposal and treat them as a box of tools appropriate to different problems.

Let's go back to why people think that a 1 hour interval is too long. The problem with AWR, Statspack and bstat/estat is that they are system-wide reporting tools that capture the differences (or deltas) between the values of various metrics over a given interval. There are at least a couple of problems with that approach that come to mind.

1) Although a bit of a simplification, almost all of the metrics are system-wide, which makes them a poor data source for analysing an individual user's performance experience or an individual batch job, because systems generally have a mixture of different activities running concurrently. (Benchmarks and load tests are notable exceptions.)

2) Problem 1 becomes worse when you are looking at *all* of the activity that occurred over a given period of time (the AWR Interval), condensed into a single data set or report. The longer the AWR period you report on, the more useless the data becomes. What use is an AWR report covering a one week period? So much has happened during that time and we might only be interested in what was happening at 2:13 am this morning.

In other words, AWR reports combine a wide activity scope (everything on the system) with a wide time scope (hours or days if generated without thought). Intelligent performance folks reduce the impact of the latter problem by narrowing the time scope and reducing the snapshot interval so that if a problem has just happened or is happening right now, they can focus on the right 15 minutes of activity1.

Which makes complete sense in the Statspack world they grew up in, but makes a lot less sense since Oracle 10g was released in 2004! These days there are probably better tools for what you're trying to achieve.

But, as this post is already getting pretty long, I'll leave that for Part 2.

1The natural endpoint to this narrowing of time scope is when people use tools like Swingbench for load testing and select the option to generate AWR snapshots immediately before and after the test they're running. Any AWR report of that interval will only contain the relevant information if the test is the only thing running on the system. At last year's Openworld, Graham Wood and I also covered the narrowing of the Activity scope by, for example, running the AWR SQL report (awrsqrpt.sql) to limit the report to a single SQL statement of interest. It's easy for people to forget - it's a *suite* of tools and worth knowing the full range so that you pick the appropriate one for the problem at hand.

Adaptive Learning Market Acceleration Program (ALMAP) Summer Meeting Notes

Michael Feldstein - Mon, 2014-07-07 05:04

I recently attended the ALMAP Summer Meeting. ALMAP is a program funded by the Gates Foundation, with the goals described in this RFP webinar presentation from March 2013:

We believe that well implemented personalized & adaptive learning has the potential to dramatically improve student outcomes

Our strategy to accelerate the adoption of Adaptive Learning in higher education is to invest in market change drivers… …resulting in strong, healthy market growth

As the program is in its mid stage (without real results to speak of yet), I’ll summarize Tony Bates style with summary of program and some notes at the end. Consider this my more-than-140-character response to Glenda Morgan:

@PhilOnEdTech was the agenda of the Gates Summit online at all?

— Glenda Morgan (@morganmundum) June 30, 2014

Originally planned for 10 institutions, the Gates Foundation funded 14 separate grantees at a level of ~$100,000 each. The courses must run for 3 sequential semesters with greater than 500 students total (per school), and the program will take 24 months total (starting June 2013). The awards were given to the following schools:

Gates has also funded SRI International to provide independent research on the results of each grant.

The concept of accelerator as used by the Gates Foundation is to push adaptive learning past the innovator’s adoption category into the majority category (see RFP webinar).

[Image: ALMAP accelerator]

The meeting was organized around quick updates from most of the grantees along with panels of their partner software providers (Knewton, ALEKS, CogBooks, Cerego, OLI, ASSISTments, Smart Sparrow), faculty, and several local students. Here is a summary of the meeting agenda.

[Image: ALMAP Agenda]

Notes

Adaptive Learning is becoming a hotter topic in higher education recently, and I expect that we will hear more from ALMAP as the results come in. In the meantime, here are some preliminary notes from the meeting (some are my own, some are group discussions that struck me as very important).

  • Despite the potential importance of this funding program, I can only find one full article (outside of Gates publications) about the program. Campus Technology had an article in April titled “The Great Adaptive Learning Experiment”. David Wiley referred to the program in his take on the risks of adaptive learning. Scientific American (among a few others) described ALMAP in one paragraph of a larger story on Adaptive Learning.
  • We really need a taxonomy to describe Adaptive Learning and Personalized Learning as both terms are moving into buzzword and marketing-speak territory. During the break out groups, it seemed there was unanimous agreement on this problem of a lack of precise terminology. While the Gates Foundation also funded two white papers on Adaptive Learning, I did not hear the ALMAP participants using the embedded taxonomy (see below) to improve language usage. I’m not sure why. I provided a short start in this post before EDUCAUSE, but I think Michael and I will do some more analysis on the field and terminology soon. Michael also has a post that was published in the American Federation of Teachers publication AFT On Campus, titled “What Faculty Should Know About Adaptive Learning”, that is worth reading.
  • The above problem (lack of accepted taxonomy, different meanings of adaptive), along with faculty flexibility in determining how to use the software, will make the research challenging, at least in terms of drawing conclusions across the full set of experiments. SRI has its work cut out for them.
  • There appears to be a divide in the vendor space between publisher models, where the content is embedded with the platform, and a platform-only model, where content is provided from external sources. Examples of the former include ALEKS, Adapt Courseware and OLI. Examples of the latter include ASSISTments, Smart Sparrow, CogBooks, Cerego. Cerego might be the only example where they provide “starter” content but also allow the user to provide or integrate their own content. Credit to Neil Heffernan from WPI and ASSISTments for this observation over drinks.
  • Programs of this type (pushing innovation and driving for changes in behavior) should not be judged by the first semester of implementation, when faculty are figuring out how to work out the new approach. Real results should be judged starting in the second semester, and one attendee even recommended avoiding publication of results until the third semester. This is the primary reason I am choosing to not even describe the individual programs or early results yet.
  • Kudos to the Gates Foundation for including a student panel (like 20MM Evolve and upcoming WCET conference). Below are a few tweets I sent during this panel.

Student on panel: Profs matter a lot – could tell the ones who don't like teaching. Ones who love teaching are contagious, her best classes.

— Phil Hill (@PhilOnEdTech) June 27, 2014

Conversely, fac who use tech poorly – don't understand, no instructions, no effort to use well – have very negative impact on students

— Phil Hill (@PhilOnEdTech) June 27, 2014

Whether it's from prof or from adaptive sw (or both), student panel wants clear instructions on assignments, timely feedback

— Phil Hill (@PhilOnEdTech) June 27, 2014

Expect to hear more from e-Literate as well as e-Literate TV not only on the ALMAP awardees and their progress, but also from the general field of personalized and adaptive learning.

Below is the taxonomy provided as part of the Gates-funded white paper from Education Growth Advisors.

[Image: AL Whitepaper Taxonomy]


Update: I did not mention the elephant in the room for adaptive learning – whether software will replace faculty – because it was not an elephant in this room; however, this is an important question in general.

@ricetopher Good point. Unclear if gates funded automation would eliminate teachers… Are we becoming the machine? @PhilOnEdTech

— Whitney Kilgore (@whitneykilgore) July 7, 2014

At the ALMAP meeting, I believe that most grantees had faculty members present. From these faculty members (including a panel specifically on faculty experiences), there were discussions about changing roles (“role is facilitator, coach, lifeguard in a sense”), the fact that faculty were requested to participate rather than initiate the change, and the challenge of getting students to come to class in hybrid models. One faculty member mentioned that the adaptive software allows more instruction on real writing and less on skill-and-drill activities.

But the way the grantees implemented adaptive learning software was not based on replacing faculty, at least for this program.

The post Adaptive Learning Market Acceleration Program (ALMAP) Summer Meeting Notes appeared first on e-Literate.

Benefits of Single Tenant Deployments

Asif Momen - Mon, 2014-07-07 04:54
While presenting at a database event, I had a question from one of the attendees on the benefits of running Oracle databases in a Single Tenant Configuration. I thought it would be nice to post it on my blog, as it would benefit others too.
From Oracle documentation, “The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and non-schema objects that appears to an Oracle Net client as a non-CDB. All Oracle databases before Oracle Database 12c were non-CDBs”.
Following are the benefits of running databases in Single Tenant Configuration:
  1. Alignment with Oracle’s new multi-tenant architecture
  2. Cost saving. You save on license fees, as single tenant deployments do not attract the Multitenant option license fee. The license is applicable should you have two or more PDBs.
  3. Upgrade/patch your single PDB from 12.1.0.1 to 12.x easily with reduced downtime
  4. Secure separation of duties (between CDBA & DBA)
  5. Easier PDB cloning

I would recommend running all your production and non-production databases in single-tenant configuration (if you are not planning for consolidation using the multitenant option) once you upgrade them to Oracle Database 12c. I expect single tenant deployments to become the default deployment model for customers.
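If you want to confirm what you are running today, here is a minimal sketch using the 12c dictionary views (illustrative only); a single tenant configuration will show exactly one PDB besides the seed:

-- Is this database a CDB?
select name, cdb from v$database;

-- List the PDBs; single tenant = one user-created PDB besides PDB$SEED
select name, open_mode from v$pdbs;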