
Feed aggregator

New Version Of XPLAN_ASH Utility

Randolf Geist - Sun, 2014-12-21 16:40
A new version 4.2 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

There are no hugely significant changes in this release; mainly some new sections related to I/O figures were added.

One thing to note is that some of the sections in recent releases may require a linesize larger than 700, so the script's settings have been changed to 800. If you use corresponding settings for CMD.EXE under Windows, for example, you might have to adjust them accordingly to prevent ugly line wrapping.

Here are the notes from the change log:

- New sections "Concurrent activity I/O Summary based on ASH" and "Concurrent activity I/O Summary per Instance based on ASH" to see the I/O activity summary for concurrent activity

- Bug fixed: When using MONITOR as the source for searching for the most recent SQL_ID executed by a given SID, no SQL_ID was found due to some filtering on date. This is now fixed

- Bug fixed: In RAC, GV$ASH_INFO should be used to determine the available samples

- The "Parallel Execution Skew ASH" indicator is now weighted - so far any activity level per plan line and sample below the actual DOP counted as one, and the same if the activity level was above
The sum of the "ones" was then set relative to the total number of samples the plan line was active to determine the "skewness" indicator

Now the actual difference between the activity level and the actual DOP is calculated and compared to the number of total samples active times the actual DOP
This should give a better picture of the actual impact the skew has on the overall execution

- Most queries now use a NO_STATEMENT_QUEUING hint for environments where AUTO DOP is enabled and the XPLAN_ASH queries could otherwise get queued (see the illustration after this list)

- The physical I/O bytes on execution plan line level taken from "Real-Time SQL Monitoring" now have the more appropriate headings "ReadB" and "WriteB"; I never liked the former misleading "Reads"/"Writes" headings
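
As a brief illustration of the hint mentioned in the note above: XPLAN_ASH embeds the hint in its own internal queries, so you don't need to add anything yourself. The snippet below only shows where such a hint sits in a statement, and the table name is hypothetical:

-- Illustrative only: with PARALLEL_DEGREE_POLICY = AUTO, parallel statement queuing
-- could otherwise delay the query; the hint bypasses the queue for this statement.
select /*+ NO_STATEMENT_QUEUING */ count(*)
from   some_big_table;   -- hypothetical table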

Installing VirtualBox on Mint with a CentOS Guest

The Anti-Kyte - Sun, 2014-12-21 12:48

Christmas is almost upon us. Black Friday has been followed by Small Business Saturday and Cyber Monday.
The rest of the month obviously started on Skint Tuesday.
Fortunately for all us geeks, Santa Claus is real. He’s currently posing as Richard Stallman.
I mean, look at the facts. He’s got the beard, he likes to give stuff away for free, and he most definitely has a “naughty” list.

Thanks to Santa Stallman and others like him, I can amuse myself in the Holidays without putting any more strain on my Credit Card.

My main machine is currently running Mint 17 with the Cinnamon desktop. Whilst I’m very happy with this arrangement, I would like to play with other Operating Systems, but without all the hassle of installing/uninstalling etc.
Now, I do have Virtualbox on a Windows partition, but I would rather indulge my OS promiscuity from the comfort of Linux… sorry Santa – GNU/Linux.

So what I’m going to cover here is :

  • Installing VirtualBox on a Debian-based distro
  • Installing CentOS as a Guest Operating System
  • Installing VirtualBox Guest Additions Drivers on CentOS

I’ve tried to stick to the command-line for the installation steps for VirtualBox so they should be generic to any Debian-based host.

Terminology

Throughout this post I’ll be referring to the Host OS and the Guest OS, as well as Guest Additions. These terms can be defined as :

  • Host OS – the Operating System of the physical machine that Virtualbox is running on ( Mint in my case)
  • Guest OS – the Operating System of the virtual machine that is running in VirtualBox (CentOS here)
  • Guest Additions – drivers that are installed on the Guest OS to enable file sharing, viewport resizing etc
Options for getting VirtualBox

Before I get into the installation steps it’s probably worth explaining why I chose the method I did for getting VirtualBox in the first place.
You can get VirtualBox from a repository, instructions for which are on the VirtualBox site itself. However, the version currently available (4.3.12 at the time of writing) does not play nicely with Red Hat-based guests when it comes to Guest Additions. This issue is fixed in the latest version of VirtualBox (4.3.20), which can be downloaded directly from the site. Therefore, this is the approach I ended up taking.

Right, now that’s out of the way…

Installing VirtualBox Step 1 – Prepare the Host

Before we download VirtualBox, we need to ensure that the dkms package is installed and up to date. So, fire up good old terminal and type :

sudo apt-get install dkms

Running this, I got :

Reading package lists... Done
Building dependency tree       
Reading state information... Done
dkms is already the newest version.
0 to upgrade, 0 to newly install, 0 to remove and 37 not to upgrade.

One further step is to make sure that your system is up-to-date. For Debian-based distros, this should do the job :

sudo apt-get update
sudo apt-get upgrade

Step 2 – Get the software

Now, head over to the VirtualBox Downloads Page and select the appropriate file.

NOTE – you will have the choice of downloading either the i386 or the AMD64 versions.
The difference is simply that i386 is 32-bit and AMD64 is 64-bit.

In my case, I’m running a 64-bit version of Mint (which is based on Ubuntu), so I selected :

Ubuntu 13.04( “Raring Ringtail”)/ 13.10(“Saucy Salamander”)/14.04(“Trusty Tahr”)/14.10(“Utopic Unicorn”) – the AMD64 version.

NOTE – if you’re not sure whether you’re running on 32 or 64-bit, simply type the following in a terminal session :

uname -i

If this command returns x86_64 then you’re running a 64-bit version of your OS. If it returns i686, then you’re running a 32-bit version.

A short time later, you’ll find that Santa has descended the chimney that is your browser, and in the Downloads folder that is your living room you have a present. Run…

ls -lh $HOME/Downloads/virtualbox*

… and you’ll find the shiny new :

-rw-r--r-- 1 mike mike 63M Dec  5 16:22 /home/mike/Downloads/virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb
Step 3 – Installation

To virtually unwrap this virtual present….

cd $HOME/Downloads
sudo dpkg -i virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb

On running this the output should be similar to :

(Reading database ... 148385 files and directories currently installed.)
Preparing to unpack virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb ...
Stopping VirtualBox kernel modules ...done.
Unpacking virtualbox-4.3 (4.3.20-96996~Ubuntu~raring) over (4.3.12-93733~Ubuntu~raring) ...
Setting up virtualbox-4.3 (4.3.20-96996~Ubuntu~raring) ...
Installing new version of config file /etc/init.d/vboxdrv ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
Stopping VirtualBox kernel modules ...done.
Uninstalling old VirtualBox DKMS kernel modules ...done.
Trying to register the VirtualBox kernel modules using DKMS ...done.
Starting VirtualBox kernel modules ...done.
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Processing triggers for shared-mime-info (1.2-0ubuntu3) ...
Processing triggers for gnome-menus (3.10.1-0ubuntu2) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu1) ...
Processing triggers for mime-support (3.54ubuntu1) ...

Note – As this was not my first attempt at installing VirtualBox, there are some feedback lines here that you probably won’t get.

Anyway, once completed, you should have a new VirtualBox icon somewhere in your menu.
In my case (Cinnamon desktop on Mint 17, remember), it’s appeared in the Administration Menu :

vbox_menu

As part of the installation, a group called vboxusers has now been created.
You’ll want to add yourself to this group so that you can access the shared folders, which is something I’ll come onto in a bit. For now though…

sudo usermod -a -G vboxusers username

… where username is your user.

Now, finally, we’ve set it up and can start playing. Click on the menu icon. Alternatively, if you can’t find the icon, or if you just prefer the terminal, the following command should have the same effect :

VirtualBox

Either way, you should now see this :

vbox_welcome

One present unwrapped, assembled and ready to play with…and you don’t even need to worry about cleaning up the discarded wrapping paper.

Installing the CentOS Guest

I fancy having a play with a Red Hat-based distro for a change. CentOS fits the bill perfectly.
Additionally, I happen to have an iso lying around on a cover disk.
If you’re not so lucky, you can get the latest version of CentOS (currently 7) from the website here.

I’ve created a directory called isos and put the CentOS iso there :

ls -lh CentOS*
-rw------- 1 mike mike 687M Jul  9 22:53 CentOS-7.0-1406-x86_64-livecd.iso

Once again, I’ve downloaded the 64-bit version, as can be seen from the x86_64 in the filename.

Now for the installation.

Open VirtualBox and click New :

In the Name and operating system window enter :

Name : CentOS7
Type : Linux
Version : Red Hat (64 bit)

vb_new

In the Memory Size Window :

Settings here depend on the resources available to the host machine and what you want to use the VM for.
In my case, my host machine has 8GB RAM.
Also, I want to install Oracle XE on this VM.
Given that, I’m going to allocate 2GB to this image :

vb_new3

In the Hard Drive Window :

I’ve got plenty of space available so I’ll just accept the default to Create a virtual hard drive of 8GB now.

Hard Drive File Type :

Accept the default ( VDI (VirtualBox Disk Image))

and hit Next…

Storage on physical hard drive :

I’ll leave this as the default – Dynamically allocated
Click Next…

File location and size :

I’ve left the size at the default…

vbnew_4

I now have a new VirtualBox image :
The vdi file created to act as the VM’s hard drive is in my home directory under VirtualBox VMs/CentOS7

summary

Now to point it at the iso file we want to use.

Hit Start and ….

fisrt_start

choose_iso

You should now see the chosen .iso file identified as the startup disk :

iso

Now hit start….
live_cd_desktop

Don’t worry too much about the small viewport for now. Guest Additions should resolve that issue once we get it installed.
You probably do need to be aware of the fact that you can transfer the mouse pointer between the Guest and Host by holding down the right CTRL key on your keyboard and left-clicking the mouse.
This may well take a bit of getting used to at first.

Anyway, once your guest knows where your mouse is, the first thing to do is actually install CentOS into the VDI. At the moment, remember, we’re just running a Live Image.

So, click the Install to Hard Drive icon on the CentOS desktop and follow the prompts as normal.

At the end of the installation, make sure that you’ve ejected your virtual CD from the drive.
To do this :

  1. Get the Host to recapture the mouse (Right CTRL + left-click)
  2. Go to the VirtualBox Menu on the VM window and select Devices/CD/DVD Devices/Remove disk from virtual drive

eject_cd

Now re-start CentOS.

Once it comes back, we’re ready to round things off by…

Installing Guest Additions

It’s worth noting that when CentOS starts, Networking is disconnected by default. To enable it, simply click the Network icon on the toolbar at the top of the screen and switch it on :

enable

We need to make sure that the packages are up to date on CentOS in the same way as we did for the Host at the start of all this so…

sudo yum update

Depending on how recent the iso file you used is, this could take a while !

We also need to install further packages for Guest Additions to work…

sudo yum install gcc
sudo yum install kernel-devel-3.10.0-123.9.3.el7.x86_64

Note – It’s also recommended that dkms is installed on “Fedora” (i.e. Red Hat) based Guests. However, when I ran …

sudo yum install dkms

I got an error saying “No package dkms available”.
So, I’ve decided to press on regardless…

In the VirtualBox Devices Menu, select Insert Guest Additions CD Image

You should then see a CD icon on your desktop :

guest_additions

The CD should autorun on load.

You’ll see a VirtualBox Guest Additions Installation terminal window come up that looks something like this :

Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.20 Guest Additions for Linux............
VirtualBox Guest Additions installer
Removing installed version 4.3.12 of VirtualBox Guest Additions...
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                   [  OK  ]
Building the shared folder support module                  [  OK  ]
Building the OpenGL support module                         [  OK  ]
Doing non-kernel setup of the Guest Additions              [  OK  ]
Starting the VirtualBox Guest Additions                    [  OK  ]
Installing the Window System drivers
Installing X.Org Server 1.15 modules                       [  OK  ]
Setting up the Window System to use the Guest Additions    [  OK  ]
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.

Installing graphics libraries and desktop services componen[  OK  ]

Eject the CD and re-start the Guest.

Now, you should see CentOS in its full-screen glory.

Tweaks after installing Guest Additions

First off, let’s make things run a bit more smoothly on the Guest :

On the Host OS in VirtualBox Manager, highlight the CentOS7 image and click on Settings.
Go to Display.

Here, we can increase the amount of Video Memory from the default 12MB to 64MB.
We can also check Enable 3D Acceleration :

vbox_video

Next, in the General Section, click on the Advanced Tab and set the following :

Shared Clipboard : Bidirectional
Drag’n’Drop : Bidirectional

vbox_clipboard

You should now be able to cut-and-paste from Guest to host and vice-versa.

Shared Folders

At some point you’re likely to want to either put files onto or get files from your Guest OS.

To do this :

On the Host

I’ve created a folder to share on my Host system :

mkdir -p $HOME/Desktop/vbox_shares/centos

Now, in VirtualBox Manager, back in the Settings for CentOS, open the Shared Folders section.

Click the Add icon

add_share

Select the folder and make it Auto-mount

add_share2

On the Guest

In earlier versions of VirtualBox, getting the shared folders to mount was, well, a bit of messing about.
Happily, things are now quite a bit easier.

As we’ve set the shared folder to Auto-mount, it’s mounted on the Guest on

/media/sf_sharename

…where sharename is the name of the share we assigned to it on the Host. So, the shared folder I created exists as :

/media/sf_centos

In order to gain full access to this folder, we simply need to add our user to the vboxsf group that was created when Guest Additions was installed :

sudo usermod -a -G vboxsf username

…where username is your user on the Guest OS.

Note – you’ll need to logout and login again for this change to take effect, but once you do, you should have access to the shared folder.

Right, that should keep me out of trouble (and debt) for a while, as well as offering a distraction from all the things I know I shouldn’t eat…but always do.
That reminds me, where did I leave my nutcracker ?


Filed under: Linux, VirtualBox Tagged: centos 7 guest, copy and paste from clipboard, guest additions, how to tell if your linux os is 32-bit or 64-bit, mint 17 host, shared folders, uname -i, VirtualBox

Digital Delivery "Badge"

Bradley Brown - Sun, 2014-12-21 00:44
At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it’s important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).
Better buttons drive sales.  Across all our apps and clients we know we are going to need to really nail our asset delivery process with split tests and our button and banner catalog.  We've simplified the addition of a badge on a client's page. They effectively have to add 4 lines of HTML in order to add our digital delivery badge.
Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:
Screenshot 2014-12-16 19.39.25.png
On our client's web page, it looks something like this:
Screenshot 2014-12-17 14.01.11.png
The image above (Watch Now on Any Device) is the important component.  This is the component that our clients are placing somewhere on their web page(s).  When this is clicked, the existing page will be dimmed and a lightbox will pop up and display the “Why Digital” message:
Screenshot 2014-12-17 16.31.54.png
What do your customers need to know about in order to help you sell more?

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
This Log Buffer edition hits the ball out of the park by smashing yet another record, surfacing with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.
SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle 12.1.0.2 Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

What Does an App Cost?

Bradley Brown - Sat, 2014-12-20 17:59
People will commonly ask me this question, which has a very wide range as the answer.  You can get an app built on oDesk for nearly free - i.e. $2,000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

Our first round of apps at InteliVideo cost us $2,000-10,000 to develop each of them.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than 1 of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:


When I drilled into a specific product, again I see a list of videos within the product:

I can download or play a video in a product:


Here's what it looks like for The Dailey Method:



Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right not down) through those 73 videos here:


Or if I click on the title, I get to see a list of the videos in more detail:


Notice the colors (branding) are shown everywhere here.  I scrolled up to look through those videos:


Here's a specific video that talked about a technique to get your sled unstuck:


Here's what the app looks like when I'm a The Dailey Method customer.  Again, notice the branding everywhere:

Looking at a specific video and its details:


We built a native app for iOS (iPad, iPhone, iPod), Android, Windows and Mac that has all of the same look, feel, functionality, etc.  This was a MAJOR undertaking!

The good news is that if you want to start a business and build an MVP (Minimally Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!


e-Literate Top 20 Posts For 2014

Michael Feldstein - Sat, 2014-12-20 12:17

I typically don’t write year-end reviews or top 10 (or 20) lists, but I need to work on our consulting company finances. At this point, any distraction seems more enjoyable than working in QuickBooks.

We’ve had a fun year at e-Literate, and one recent change is that we are now more willing to break stories when appropriate. We typically comment on ed tech stories a few days after the release, providing analysis and commentary, but there were several cases where we felt a story needed to go public. In such cases (e.g. Unizin creation, Cal State Online demise, management changes at Instructure and Blackboard) we tend to break the news objectively, providing mostly descriptions and explanations, allowing others to provide commentary.

The following list is based on Jetpack stats on WordPress, which does not capture people who read posts through RSS feeds (we send out full articles through the feed). So the stats have a bias towards people who come to e-Literate for specific articles rather than our regular readers. We also tend to get longer-term readership of articles over many months, so this list also has a bias for articles posted a while ago.

With that in mind, here are the top 20 most read articles on e-Literate in terms of page views for the past 12 months along with publication date.

  1. Can Pearson Solve the Rubric’s Cube? (Dec 2013) – This article proves that people are willing to read a 7,000 word post published on New Year’s Eve.
  2. A response to USA Today article on Flipped Classroom research (Oct 2013) – This article is our most steady one, consistently getting around 100 views per day.
  3. Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition (May 2014) – This is the article where we broke the story about Unizin, based largely on a presentation at Colorado State University.
  4. Blackboard’s Big News that Nobody Noticed (Jul 2014) – This post commented on the Blackboard users’ conference and some significant changes that got buried in the keynote and much of the press coverage.
  5. Early Review of Google Classroom (Jul 2014) – Meg Tufano got pilot access to the new system and allowed me to join the testing; this article mostly shares Meg’s findings.
  6. Why Google Classroom won’t affect institutional LMS market … yet (Jun 2014) – Before we had pilot access to the system, this article described the likely market effects from Google’s new system.
  7. Competency-Based Education: An (Updated) Primer for Today’s Online Market (Dec 2013) – Given the sudden rise in interest in CBE, this article updated a 2012 post explaining the concept.
  8. The Resilient Higher Ed LMS: Canvas is the only fully-established recent market entry (Feb 2014) – Despite all the investment in ed tech and market entries, this article noted how stable the LMS market is.
  9. Why VCs Usually Get Ed Tech Wrong (Mar 2014) – This post combined references to “selling Timex knockoffs in Times Square” with a challenge to the application of disruptive innovation.
  10. New data available for higher education LMS market (Nov 2013) – This article called out the Edutechnica and ListEdTech sites with their use of straight data (not just sampling surveys) to clarify the LMS market.
  11. InstructureCon: Canvas LMS has different competition now (Jun 2014) – This was based on the Instructure users’ conference and the very different attitude from past years.
  12. Dammit, the LMS (Nov 2014) – This rant called out how the LMS market is largely following consumer demand from faculty and institutions.
  13. Why Unizin is a Threat to edX (May 2014) – This follow-on commentary tried to look at what market effects would result from Unizin introduction.
  14. State of the Anglosphere’s Higher Education LMS Market: 2013 Edition (Nov 2013) – This was last year’s update of the LMS squid graphic.
  15. Google Classroom: Early videos of their closest attempt at an LMS (Jun 2014) – This article shared early YouTube videos showing people what the new system actually looked like.
  16. State of the US Higher Education LMS Market: 2014 Edition (Oct 2014) – This was this year’s update of the LMS squid graphic.
  17. About Michael – How big is Michael’s fan club?
  18. What is a Learning Platform? (May 2012) – The old post called out and helped explain the general move from monolithic systems to platforms.
  19. What Faculty Should Know About Adaptive Learning (Dec 2013) – This was a reprint of invited article for American Federation of Teachers.
  20. Instructure’s CTO Joel Dehlin Abruptly Resigns (Jul 2014) – Shortly after the Instructure users’ conference, Joel resigned from the company.

Well, that was more fun than financial reporting!

The post e-Literate Top 20 Posts For 2014 appeared first on e-Literate.

Exadata Patching Introduction

The Oracle Instructor - Sat, 2014-12-20 10:24

These I consider the most important points about Exadata Patching:

Where is the most recent information?

MOS Note 888828.1 is your first read whenever you think about Exadata Patching

What is to patch with which utility?

Exadata Patching

Expect quarterly bundle patches for the storage servers and the compute nodes. The other components (Infiniband switches, Cisco Ethernet Switch, PDUs) are patched less frequently and are therefore not in the picture.

The storage servers have their software image (which includes Firmware, OS and Exadata Software)  exchanged completely with the new one using patchmgr. The compute nodes get OS (and Firmware) updates with dbnodeupdate.sh, a tool that accesses an Exadata yum repository. Bundle patches for the Grid Infrastructure and for the Database Software are being applied with opatch.

Rolling or non-rolling?

This is the sensitive part! Technically, you can always apply the patches for the storage servers and the patches for the compute node OS and Grid Infrastructure in a rolling fashion, taking down only one server at a time. The RAC databases running on the Database Machine will be available during the patching. Should you do that?

Let’s focus on the storage servers first: Rolling patches are recommended only if you have ASM diskgroups with high redundancy or if you have a standby site to fail over to if needed. In other words: If you have a quarter rack without a standby site, don’t use rolling patches! That is because the DBFS_DG diskgroup that contains the voting disks cannot have high redundancy in a quarter rack with just three storage servers.

Okay, so you have a half rack or bigger. Expect one storage server patch to take about two hours. That adds up to 14 hours (for seven storage servers) of patching time with the rolling method. Make sure that management is aware of that before they decide on the strategy.

Now to the compute nodes: If the patch is RAC rolling applicable, you can do that regardless of the ASM diskgroup redundancy. If a compute node gets damaged during the rolling upgrade, no data loss will happen. On a quarter rack without a standby site, you put availability at risk because only two compute nodes are there and one could fail while the other is just down.

Why you will want to have a Data Guard Standby Site

Apart from the obvious reason for Data Guard – Disaster Recovery – there are several benefits associated with the patching strategy:

You can afford to do rolling patches with ASM diskgroups using normal redundancy and with RAC clusters that have only two nodes.

You can apply the patches on the standby site first and test it there – using the snapshot standby database functionality (and using Database Replay if you licensed Real Application Testing)

A patch set can be applied on the standby first and the downtime for end users can be reduced to the time it takes to do a switchover

A release upgrade can be done with a (Transient) Logical Standby, reducing again the downtime to the time it takes to do a switchover

I suppose this will be my last posting in 2014, so Happy Holidays and a Happy New Year to all of you :-)


Tagged: exadata
Categories: DBA Blogs

PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is the support of Oracle Database Materialized Views. In a nutshell, Materialized Views can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online, but instead it is retrieved from the latest snapshot. This can greatly contribute to improve query performance, particularly for complex SQLs or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:


  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining through triggers a staging table (the Materialized View) whenever the source table changes, but all this complexity is hidden from the developer. Unfortunately, this method is only available with join-based or single table aggregate views.

Although it has the benefit of providing almost online information, you would normally use On Commit for views based on tables that do not change very often. Since the information in the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected. Hint: You would normally use the On Commit method for views based on Control tables, not Transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where the on-demand refreshes can be configured to run periodically.




In case you choose the On Demand method, the data refresh can actually be done following two different methods:


  • Fast, which just refreshes the rows in the Materialized View affected by the changes made to the source records.


  • Full, which fully recalculates the Materialized View contents. This method is preferable when large volumes of changes are usually performed against the source records between refreshes. Also, this option is required after certain types of updates on the source records (i.e. INSERT statements using the APPEND hint). Finally, this method is required when one of the source records is also a Materialized View and has been refreshed using the Full method.


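For readers less familiar with the underlying database feature, here is a minimal sketch of what a fast-refresh-on-commit Materialized View looks like at the SQL level. The record and column names below are made up, and Application Designer generates the actual DDL for you in PeopleTools 8.54; this is only meant to illustrate the On Commit / Fast combination described above:

-- A materialized view log on the source table is a prerequisite for FAST refresh
create materialized view log on ps_sales_order           -- hypothetical source record
  with rowid, sequence (business_unit, order_amt)
  including new values;

-- Single-table aggregate view, refreshed automatically on every commit
create materialized view ps_sales_sum_mvw                -- hypothetical view name
  build immediate
  refresh fast on commit
  as
  select business_unit,
         count(*)         as row_cnt,      -- COUNT(*) is required for fast refresh
         count(order_amt) as amt_cnt,
         sum(order_amt)   as total_amt
  from   ps_sales_order
  group  by business_unit;
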
How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly from the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. In the first place, Application Designer will now show new options for View records:



We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be calculated when its build script is executed or whether it can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to indicate to the database that this view needs to be refreshed every n seconds.

Limitations
The main disadvantage of using Materialized Views is that they are specific to Oracle Database. They will not work if you are using any other platform, in which case the view acts like a normal view, which keeps a similar functional behaviour but without all the performance advantages of Materialized Views.

Conclusions
All in all, Materialized Views provide a very interesting feature to improve system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over all these years... :-)

Consumer Security for the season and Today's World

PeopleSoft Technology Blog - Fri, 2014-12-19 13:41

Just to go beyond my usual security sessions, I was asked recently to talk to a local business and consumer group about personal cyber security. Here is the document I used for the session and you might find some useful tips.

Protecting your online shopping experience

- check retailer returns policy

- use a credit card rather than debit card, or check the protection on the debit card

- use a temporary/disposable credit card e.g. ShopSafe from Bank of America

- use a low limit credit card - with protection, e.g. AMEX green card

- check your account for random small amount charges and charitable contributions

- set spending and "card not present" alerts

Protecting email

- don't use same passwords for business and personal accounts

- use a robust email service provider

- set junk/spam threshold in your email client

- only use web mail for low risk accounts (see Note below)

- don't click on links in the email, DON’T click on links in email – no matter who you think sent it

Protecting your computer

- if you depend on a computer/laptop/tablet for business, ONLY use it for business

- don't share your computer with anyone, including your children

- if you provide your children with a computer/laptop, refresh them from "recovery disks" on a periodic basis

- teach children value of backing up important data

- if possible have your children only use their laptops/devices in family rooms where the activity can be passively observed

- use commercial, paid subscription, antivirus/anti malware on all devices (see Note below)

- carry and use a security cable when traveling or away from your office

Protecting your smart phone/tablet

- don't share your device

- make sure you have a secure lock phrase/PIN and set the idle timeout

- don't recharge it using the USB port on someone else's laptop/computer

- ensure the public Wi-Fi which you use is a trusted Wi-Fi (also - see Note below)

- store your data in the cloud, preferably not (or not only) the phone/tablet

- don't have the device "remember" your password, especially for sensitive accounts

- exercise caution when downloading software e.g. games/apps, especially "free" software (see Note below)

Protect your social network

- don't mix business and personal information in your social media account

- use separate passwords for business and personal social media accounts

- ensure you protect personal information from the casual user

- check what information is being shared about you or photos tagged by your "friends"

- don't share phone numbers or personal/business contact details,
e.g. use the "ask me for my ..." feature

General protection and the “Internet of Things”

- be aware of cyber stalking

- be aware of surreptitious monitoring
e.g. “Google Glass” and smart phone cameras

- consider “nanny” software, especially for children’s devices

- be aware of “click bait” – e.g. apparently valid “news” stories which are really sponsored messages

- be aware of ATM “skimming”, including self serve gas pumps

- be aware of remotely enabled camera and microphone (laptop, smart phone, tablet)

Note: Remember, if you’re not paying for the product, you ARE the product

Important! PeopleTools Requirements for PeopleSoft Interaction Hub

PeopleSoft Technology Blog - Fri, 2014-12-19 12:04

The PeopleSoft Interaction Hub* follows a release model of continuous delivery.  With this release model, Oracle delivers new functionality and enhancements on the latest release throughout the year, without requiring application upgrades to new releases.

PeopleTools is a critical enabler for delivering new functionality and enhancements for releases on continuous delivery.  The PeopleTools adoption policy for applications on continuous release is designed to provide reasonable options for customers to stay current on PeopleTools while adopting newer PeopleTools features. The basic policy is as follows:

Interaction Hub customers must upgrade to a PeopleTools release no later than 24 months after that PeopleTools release becomes generally available.  

For example, PeopleTools 8.53 was released in February 2013. Therefore, customers who use Interaction Hub will be required to upgrade to PeopleTools 8.53 (or newer, such as PeopleTools 8.54) no later than February 2015 (24 months after the General Availability date of PeopleTools 8.53). As of February 2015, product maintenance and new features may require PeopleTools 8.53. 

Customers should start planning their upgrades if they are on PeopleTools releases that are more than 24 months old.  See the Lifetime Support Summary for PeopleSoft Releases (doc id 1348959.1) in My Oracle Support for details on PeopleTools support policies.

* The PeopleSoft Interaction Hub is the latest branding of the product.  These support guidelines also apply to the same product under the names PeopleSoft Enterprise Portal, PeopleSoft Community Portal, and PeopleSoft Applications Portal.


Do You Really Need a Content Delivery Network (CDN)?

Bradley Brown - Fri, 2014-12-19 10:39
When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

So when you hear the word "content" what is it that you think about?  What is content?  First off, it's digital content.  So...website pages?  That's what I initially thought of.  But it's really any digital content.  Audio books, videos, PDFs - files of any type, any size.

When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency can be translated to "slowness" when you're trying to download a video in Japan, because the file has to move across the ocean.  The way that Amazon handles this is that they move the file across the ocean using their fast pipes (high speed internet) between their data centers, and then the customer downloads the file effectively directly from Japan.

Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

“Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube

If you missed Fishbowl’s recent webinar on our new Enterprise Information Portal for Project Management, you can now view a recording of it on YouTube.

 

Innovation in Managing the Chaos of Everyday Project Management discusses our strategy for leveraging the content management and collaboration features of Oracle WebCenter to enable project-centric organizations to build and deploy a project management portal. This solution was designed especially for groups like E & C firms and oil and gas companies, who need applications to be combined into one portal for simple access.

If you’d like to learn more about the Enterprise Information Portal for Project Management, visit our website or email our sales team at sales@fishbowlsolutions.com.

The post “Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Log Buffer #402, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park by smashing yet another record, surfacing with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.

SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle 12.1.0.2 Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

What Do Oracle Audit Vault Collection Agents Do?

The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL-Server, and IBM DB2.

Audit Vault Collectors for Oracle Databases*

Database audit trail (collector name: DBAUD)

For standard audit records: the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED.

For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.

Operating system audit trail (collector name: OSAUD)

For standard audit records: the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML, EXTENDED.

For syslog audit trails, AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE.  In addition, the AUDIT_SYSLOG_LEVEL parameter must be set.

For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.

Redo log files (collector name: REDO)

The table that you want to audit must be eligible.  See "Creating Capture Rules for Redo Log File Auditing" for more information.

*Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used

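To make the list above more concrete, here is a hedged illustration of the source-database configuration that feeds the DBAUD collector. The schema, table, column and policy names are made up; check the Audit Vault documentation for your release before applying anything:

-- Route standard audit records to the database audit trail (instance restart required)
alter system set audit_trail = DB, EXTENDED scope = spfile;

-- Route fine-grained audit records for a sensitive column to the database audit trail
begin
  dbms_fga.add_policy(
    object_schema   => 'APP_OWNER',       -- hypothetical schema
    object_name     => 'CREDIT_CARDS',    -- hypothetical table
    policy_name     => 'FGA_CC_SELECTS',  -- hypothetical policy name
    audit_column    => 'CARD_NUMBER',
    statement_types => 'SELECT',
    audit_trail     => dbms_fga.db + dbms_fga.extended);
end;
/
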
If you have questions, please contact us at info@integrigy.com

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

OBIEE Enterprise Security

Rittman Mead Consulting - Fri, 2014-12-19 05:35

The Rittman Mead Global Services team have recently been involved in a number of security architecture implementations and produced a security model which meets a diverse set of requirements.  Using our experience and standards we have been able to deliver a robust model that addresses the common questions we routinely receive around security, such as :

“What considerations do I need to make when exposing Oracle BI to the outside world?”

or

“How can I make a flexible security model which is robust enough to meet the demands of my organisation but easy to maintain?”

The first question is based on a standard enterprise security model where the Oracle BI server is exposed by a web host, enabling SSL and tightening up access security.  This request can be complex to achieve but is something that we have implemented many times now.

The second question is much harder to answer, but our experience has led us to develop a multi-dimensional inheritance security model with numerous clients, which has yielded excellent results.

What is a Multi-dimensional Inheritance Security Model?

The wordy title is actually a simple concept that incorporates 5 key areas:

  • Easy to setup and maintain
  • Flexible
  • Durable
  • Expandable
  • Be consistent throughout the product

While there are numerous ways of implementing a security model in Oracle BI, by sticking to the key concepts above we ensure we get it right.  The largest challenge we face in BI is the different types of security required, and all three need to work in harmony:

  • Application security
  • Content security
  • Data security
Understanding the organisation makeup

The first approach is to consider the makeup of a common organisation and build our security around it.

1

This diagram shows different Departments (Finance, Marketing, Sales) whose data is specific to them, so normally the departmental users should only see the data that is relevant to them.  In contrast, the IT department who are developing the system need visibility across all data, and so do the Directors.

 

What types of users do I have?

Next is to consider the types of users we have:

  1. BI Consumer: This will be the most basic and common user who needs to access the system for information.
  2. BI Analyst: As an Analyst the user will be expected to generate more bespoke queries and need ways to represent them. They will also need an area to save these reports.
  3. BI Author: The BI Author will be able to create content and publish that content for the BI Consumers and BI Analysts.
  4. BI Department Admin: The BI Department Admin will be responsible for permissions for their department as well as act as a focal point user.
  5. BI Developer: The BI Developer can be thought of as the person(s) who creates models in the RPD and will need additional access to the system for testing of their models. They might also be responsible for delivering Answers Requests or Dashboards in order to ‘Prove’ the model they created.
  6. BI Administrator:  The Administrator will be responsible for the running of the BI system and will have access to every role.  Most Administrator tasks will not require skills in SQL/Data Warehousing, and the role is generally separated from the BI Developer role.

The types of users here are a combination of every requirement we have seen and might not be required by every client.  The order they are in shows the implied inheritance, so the BI Analyst inherits permissions and privileges from the BI Consumer and so on.

What Types do I need?

The size of the organization determines what types of user groups are required. By default Oracle ships with:

  1. BI Consumer
  2. BI Author
  3. BI Administrator

Typically we would recommend inserting the BI Analyst into the default groups:

  1. BI Consumer
  2. BI Analyst
  3. BI Author
  4. BI Administrator

This works well when there is a central BI team who develop content for the whole organization. The structure would look like this:

2

 

For larger organizations, where dashboard development and permissions are handled across multiple BI teams, the BI Administrator group can be used.  Typically we see the BI team as a central Data Warehouse team who deliver the BI model (RPD) to the multiple BI teams.  In a large organization the administration of Oracle BI should be handled by someone who isn’t the BI Developer; the structure could look like this:

3

 

 

Permissions on groups

Each of the groups will require different permissions; at a high level the permissions would be:

 

BI Consumer
  • View Dashboards
  • Save User Selections
  • Subscribe to iBots

BI Analyst
  • Access to Answers and a standard set of views
  • Some form of storage
  • Access to Subject Areas

BI Author
  • Access to Create/Modify Dashboards
  • Save Predefined Sections
  • Access to Action Links
  • Access to Dashboard Prompts
  • Access to BI Publisher

BI Department Admin
  • Ability to apply permissions and manage the Web Catalog

BI Developer
  • Advanced access to Answers
  • Access to all departments

BI Administrator
  • Everything

 

Understanding the basic security mechanics in 10g and 11g

In Oracle BI 10g the majority of the security is handled in the Oracle BI Server.  This would normally be done through initialisation blocks, which would authenticate the user against an LDAP server and then run a query against database tables to populate the user into ‘Groups’ used in the RPD and ‘Web Groups’ used in the Presentation Server.  These groups would have to match at each level: Database, Oracle BI Server and Oracle BI Presentation Server.

With the addition of Enterprise Manager and Weblogic, the security elements in Oracle BI 11g changed radically.  Authenticating the user in the Oracle BI Server is no longer the recommended way and is limited on Linux. While the RPD Groups and Presentation Server Web Groups still exist, they don’t need to be used.  Users are now authenticated against Weblogic.  This can be done by using Weblogic’s own users and groups or by plugging it into a choice of LDAP servers.  The end result will be Groups and Users that exist in Weblogic.  The groups then need to be mapped to Application Roles in Enterprise Manager, which can be seen by the Oracle BI Presentation Services and the Oracle BI Server.  It is recommended to create a one-to-one mapping for each group.

4

 

What does all this look like then?

Assuming this is for an SME-sized organization where the Dashboard development (BI Author) is done by the central BI team, the groups would look like this:

 

5

 

The key points are:

  • The generic BI Consumer/Analyst groups give their permissions to the department versions
  • No users should be in the generic BI Consumer/Analyst groups
  • Only users from the BI team should be in the generic BI Author/Administrator group
  • New departments can be easily added
  • The lines denote the inheritance of permissions and privileges

 

What’s next – The Web Catalog?

The setup of the web catalog is very important to ensure that it does not get unwieldy, so it needs to reflect the security model and we would recommend setting up some base folders which look like:

6

 

Each department has their own folder and 4 sub-folders. The permission applied to each department’s root folder is BI Administrators, so full control is possible across the top.  This is also true for every folder below; however, they will have additional explicit permissions applied to ensure that the department cannot create any more than the four sub-folders.

  • The Dashboard folder is where the dashboards go; the department’s BI Developers group will have Full Control and the department’s BI Consumers will have Read.  This will allow the department’s BI Developers to create dashboards, the department’s BI Administrators to apply permissions, and the department’s Consumers and Analysts to view them.
  • The same permissions are applied to the Dashboard Answers folder to the same effect.
  • The Development Answers folder has Full Control given to the department’s BI Developers and no access for the department’s BI Analysts or BI Consumers. This folder is mainly for the department’s BI Developers to store Answers while they are in the process of development.
  • The Analyst folder is where the department’s BI Analysts can save Answers; therefore they will need Full Control of this folder.

I hope this article gives some insight into Security with Oracle BI.  Remember that our Global Services products offer a flexible support model where you can harness our knowledge to deliver your projects in a cost effective manner.

Categories: BI & Warehousing

Elephants and Tigers - V8 of the Website

Bradley Brown - Thu, 2014-12-18 21:54
It's amazing how much work goes into a one-page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part on one hand, and the look and feel / experience is the time-consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

Since we're a video company, it's important that the first page show some video...which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations that we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV Shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question of "What do our apps look like?"    Everybody wants analytics, right?  You want to know who watched what - for how long, when and where!  What about all of the ways you can monetize - subscriptions (SVOD), transactional (TVOD) - rentals and purchases, credit-based purchases, and more?  What about those enterprises who need to restrict (or allow) viewing based on location?
Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.
Let's walk through this step-by-step.  In some ways it's like producing a movie.  A lot of moving parts, a lot of post editing, and ultimately it comes down to the final cut.
What is it that you want to deliver?  Spoken word?  TV Shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize or other topics...

Let's talk about why go digital?  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.

Any device, anywhere, any time!  That's how your customers want the content.

We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.


It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, and they allow our clients to have a full branding experience, as you see here for UFC FIT.


We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from them to authorize a new customer.


I'm a data guy at heart, so we track everything about who's watching what, where they are watching from, and so much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upselling to existing customers.  It's always easier to sell to someone who's already purchased than to a new customer.


What website would be complete without a full list of client testimonials?


If you can dream up a way to monetize your content, we can implement it.  From credit-based subscription systems to straight-out purchases...we have it all!

What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.


Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: http://intelivideo.com/sign-up/ 

We handle payment processing for you if you would like.  But...most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart.  So we integrate as needed.

Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and tens of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive, all of the metadata (descriptions) and the mapping of each...and we load it all up for you.

Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out-of-the-box implementations are simple...


Once our clients sign off on everything and our implementation team does as well...it's time to buy your media, promote your products and start selling.  We handle the delivery.


For those who would like to sign up or need more information, what website would be complete without a contact me page?  There are other pages (like our blog, about us, etc), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website...for small businesses.


As I said at the beginning of this blog post...it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!
This new version of our website should be live in the next day or so...as always, I'd love to hear your feedback!

Helix Education puts their competency-based LMS up for sale

Michael Feldstein - Thu, 2014-12-18 17:05

Back in September I wrote about the Helix LMS, which provides an excellent view into competency-based education and how learning platforms would need to be designed differently for this mode. The traditional LMS – based on a traditional model using grades, seat time and synchronous cohorts of students – is not easily adapted to serve CBE needs such as the following:

  1. Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
  2. A flexible time frame to master these skills
  3. A variety of instructional activities to facilitate learning
  4. Criterion-referenced testing of the required outcomes
  5. Certification based on demonstrated learning outcomes
  6. Adaptable programs to ensure optimum learner guidance

In a surprise move, Helix Education is putting the LMS up for sale.  Helix Education provided e-Literate with the following statement to explain the changes, at least from a press release perspective.

With a goal of delivering World Class technologies and services, a change we are making is with Helix LMS. After thoughtful analysis and discussion, we have decided to divest (sell) Helix LMS. We believe that the best way for Helix to have a positive impact on Higher Education is to:

  • Be fully committed and invest properly in core “upstream” technologies and services that help institutions aggregate, analyze and act upon data to improve their ability to find, enroll and retain students and ensure their success
  • Continue to build and share our thought leadership around TEACH – program selection, instructional design and faculty engagement for CBE, on-campus, online and hybrid delivery modes.
  • Be LMS neutral and support whichever platform our clients prefer. In fact, we already have experience in building CBE courses in the top three LMS solutions.

There are three aspects of this announcement that are quite interesting to me.

Reversal of Rebranding

Part of the surprise is that Helix rebranded the company based on their acquisition of the LMS – this was not just a simple acquisition of a learning platform – and just over a year after this event Helix Education is reversing course, selling the Helix LMS and going LMS-neutral. From the earlier blog post [emphasis added]:

In 2008 Altius Education, started by Paul Freedman, worked with Tiffin University to create a new entity called Ivy Bridge College. The goal of Ivy Bridge was to help students get associate degrees and then transfer to a four-year program. Altius developed the Helix LMS specifically for this mission. All was fine until the regional accrediting agency shut down Ivy Bridge with only three months notice.

The end result was that Altius sold the LMS and much of the engineering team to Datamark in 2013. Datamark is an educational services firm with a focus on leveraging data. With the acquisition of the Helix technology, Datamark could expand into the teaching and learning process, leading them to rebrand as Helix Education – a sign of the centrality of the LMS to the company’s strategy. Think of Helix Education now as an OSP (a la carte services that don’t require tuition revenue sharing) with an emphasis on CBE programs.

Something must have changed in their perception of the market to cause this change in direction. My guess is that they are getting pushback from schools who insist on keeping their institutional LMS, even with the new CBE programs. Helix states they have worked with the "top three LMS solutions", but as seen in the demo (read the first post for more details), capabilities such as embedding learning outcomes throughout a course and providing a flexible time frame sit well outside the core design assumptions of a traditional LMS. I have yet to see an elegant design for CBE with a traditional LMS. I'm open to being convinced otherwise, but count me as skeptical.

Upstream is Profitable

The general move sounds like its main component is the "upstream" element. To be more accurate, it's more a matter of staying "upstream" and choosing not to move downstream. It's difficult, and not always profitable, to deal with implementing academic programs. Elements built on enrollment and retention are quite honestly much more profitable. Witness the recent sale of the enrollment consulting firm Royall & Company for $850 million.

The Helix statement describes their TEACH focus as one of thought leadership. To me this sounds like the core business will be on enrollment, retention and data analysis while they focus academic efforts not on direct implementation products and services, but on white papers and presentations.

Meaning for Market

Helix Education was not the only company building CBE-specific learning platforms to replace the traditional LMS. FlatWorld Knowledge built a platform that is being used at Brandman University. LoudCloud Systems built a new CBE platform, FASTrak – and they already have a traditional LMS (albeit one designed with a modern architecture). Perhaps most significantly, the CBE pioneers Western Governors University and Southern New Hampshire University's College for America (CfA) built custom platforms based on CRM technology (i.e. Salesforce) after determining that the traditional LMS market did not suit their specific needs. CfA even spun off their learning platform as a new company – Motivis Learning.

If Helix Education is feeling the pressure to be LMS-neutral, does that mean that these other companies are or will be facing the same? Or, is Helix Education’s decision really based on company profitability and capabilities that are unique to their specific situation?

The other side of the market effect will be determined by which company buys the Helix LMS. Will a financial buyer (e.g. private equity) choose to create a standalone CBE platform company? Will a traditional LMS company buy the Helix LMS to broaden their reach in the quickly-growing CBE space (350 programs in development in the US)? Or will an online service provider and partial competitor of Helix Education buy the LMS? It will be interesting to see which companies bid on this product line and who wins.

Overall

If I find out more about what this change in direction means for Helix Education or for competency-based programs in general, I’ll share in future posts.

The post Helix Education puts their competency-based LMS up for sale appeared first on e-Literate.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance

Rittman Mead Consulting - Thu, 2014-12-18 16:26

The other week I posted a three-part series (part 1, part 2 and part 3) on going beyond MapReduce for Hadoop-based ETL, where I looked at a typical Apache Pig dataflow-style ETL process and showed how Apache Tez and Apache Spark can potentially make these processes run faster and make better use of in-memory processing. I picked Pig as the data processing environment because the multi-step data transformations it creates translate into lots of separate MapReduce jobs in traditional Hadoop ETL environments, but run as a single DAG (directed acyclic graph) under Tez and Spark, which can potentially use memory to pass intermediate results between steps rather than writing all those intermediate datasets to disk.

But tools such as OBIEE and ODI use Apache Hive to interface with the Hadoop world, not Pig, so it's improvements to Hive that will have the biggest immediate impact on the tools we use today. What's interesting is the amount of development work going on around Hive in this area, with four different "next-generation Hive" initiatives under way that could end up making OBIEE and ODI on Hadoop run faster:

  • Hive-on-Tez (or “Stinger”), principally championed by Hortonworks, along with Stinger.next which will enable ACID transactions in HiveQL
  • Hive-on-Spark, a more limited port of Hive to run on Spark and backed by Cloudera amongst others
  • Spark SQL within Apache Spark, which enables SQL queries against Spark RDDs (and Hive tables), and exposes a HiveServer2-compatible Thrift Server for JDBC access
  • Vendor initiatives that build on Hive but are mainly around integration with their RDBMS engines, for example Oracle Big Data SQL

Vendor initiatives like Oracle's Big Data SQL and Cloudera Impala have the benefit of working now (and being supported), but usually come with some sort of penalty for not working directly within the Hive framework. Oracle's Big Data SQL, for example, can read data from Hive (very efficiently, using Exadata SmartScan-type technology) but can't write back to Hive, and currently pulls all the Hive data into Oracle if you try to join Oracle and Hive data together. Cloudera's Impala, on the other hand, is lightning-fast and works directly on the Hadoop platform, but doesn't support the same ecosystem of SerDes and storage handlers that Hive supports, taking away one of the key flexibility benefits of working with Hive.

So what about the attempts to extend and improve Hive, or include Hive interfaces and compatibility in Spark? In most cases an ETL routine written as a series of Hive statements isn’t going to be as fast or resource-efficient as a custom Spark program, but if we can make Hive run faster or have a Spark application masquerade as a Hive database, we could effectively give OBIEE and ODI on Hadoop a “free” platform performance upgrade without having to change the way they access Hadoop data. So what are these initiatives about, and how usable are they now with OBIEE and ODI?

Probably the most ambitious next-generation Hive project is the Stinger initiative, backed by Hortonworks and based on the Apache Tez framework that runs on Hadoop 2.0 and YARN. Stinger aimed first to port Hive to run on Tez (which runs MapReduce-style jobs but enables them to potentially run as a single DAG), and then to add ACID transaction capabilities so that you can UPDATE and DELETE from a Hive table as well as INSERT and SELECT, using a transaction model that allows you to roll back uncommitted changes (diagram from the Hortonworks Stinger.next page).


Tez is more a set of developer APIs than the full data discovery / data analysis platform that Spark aims to provide, but it's a technology that's available now as part of the Hortonworks HDP2.2 platform, and as I showed in the blog post a few days ago, an existing Pig script run as-is on a Tez environment typically runs twice as fast as when it's using MapReduce to move data around (with the usual testing caveats applying, YMMV etc). Hive should behave the same way, giving us the ability to take Hive transformation scripts and run them unchanged except for specifying Tez at the start as the execution engine.


Hive on Tez is probably the first of these initiatives we’ll see working with ODI and OBIEE, as ODI has just been certified for Hortonworks HDP2.1, and the new HDP2.2 release is the one that comes with Tez as an option for Pig and Hive query execution. I’m guessing ODI will need to have its Hive KMs updated to add a new option to select Tez or MapReduce as the underlying Hive execution engine, but otherwise I can see this working “out of the box” once ODI support for HDP2.2 is announced.

Going back to the last of the three blog posts I wrote on going beyond MapReduce, many in the Hadoop industry back Spark as the successor to MapReduce rather than Tez, as it's a more mature implementation that goes beyond the developer-level APIs that Tez provides to make Pig and Hive scripts run faster. As we'll see in a moment, Spark comes with its own SQL capabilities and a Hive-compatible JDBC interface, but the other "swap-out-the-execution-engine" initiative to improve Hive is Hive on Spark, a port of Hive that allows Spark to be used as Hive's execution engine instead of just MapReduce.

Hive on Spark is at an earlier stage of development than Hive on Tez, with the first demo given at the recent Strata + Hadoop World New York, and specific builds of Spark and versions of Hive needed to get it running. Interestingly though, a post went up on the Cloudera Blog a couple of days ago announcing an Amazon AWS AMI machine image that you can use to test Hive on Spark; although it doesn't come with a full CDH or HDP installation or features such as a HiveServer JDBC interface, it does come with a small TPC-DS dataset and some sample queries that we can use to get a feeling for how it works. I used the AMI image to create an Amazon AWS m3.large instance and gave it a go.

By default, Hive in this demo environment is configured to use Spark as the underlying execution engine. Running a couple of the TPC-DS queries first using this Spark engine, and then switching back to MapReduce by running the command “set hive.execution.engine=mr” within the Hive CLI, I generally found queries using Spark as the execution engine ran 2-3x faster than the MapReduce ones.


You can’t read too much into this timing as the demo AMI is really only to show off the functional features (Hive using Spark as the execution engine) and no work on performance optimisation has been done, but it’s encouraging even at this point that it’s significantly faster than the MapReduce version.

Long-term, the objective is to have both Tez and Spark available as execution-engine options under Hive, along with MapReduce, as the diagram below from a presentation by Cloudera's Szenon Ho shows; the advantage of building on Hive like this, rather than creating your own new SQL-on-Hadoop engine, is that you can make use of the library of SerDes, storage handlers and so on that you'd otherwise need to recreate for any new tool.


The third major SQL-on-Hadoop initiative I've been looking at is Spark SQL within Apache Spark. Unlike Hive on Spark, which aims to swap out the compiler and execution-engine parts of Hive but otherwise leave the rest of the product unchanged, Apache Spark as a whole is a much more freeform, flexible data query and analysis environment that's aimed more at analysts than at business users looking to query their dataset using SQL. That said, Spark has some cool SQL and Hive integration features that make it an interesting platform for doing data analysis and ETL.

In my Spark ETL example the other day, I loaded log data and some reference data into RDDs and then filtered and transformed them using a mix of Scala functions and Spark SQL queries. Running on top of the core Spark APIs, Spark SQL allows you to register temporary tables within Spark that map onto RDDs, and gives you the option of querying your data using either familiar SQL relational operators or the more functional programming-style Scala language.
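
To make that concrete, here's a minimal sketch of the pattern as it would look in the spark-shell on the Spark 1.0.x setup used later in this post. The /user/root/posts.psv file and its column layout are borrowed from the Hive example further down; the Post case class and the posts_tmp table name are just illustrative, not something from the original example.

// Map a pipe-delimited file onto a case class, register the resulting
// RDD as a temporary table, and query it with SQL via Spark SQL.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD   // implicit RDD -> SchemaRDD conversion

case class Post(post_id: Int, title: String, author: String)

val posts = sc.textFile("/user/root/posts.psv")
  .map(_.split('|'))
  .map(cols => Post(cols(0).toInt, cols(1), cols(4)))

posts.registerAsTable("posts_tmp")  // temporary table, scoped to this SQLContext

val recent = sqlContext.sql("SELECT author, title FROM posts_tmp WHERE post_id > 100")
recent.collect().foreach(println)

The temporary table only lives for the duration of the Spark application; persisting results to Hive is what the HiveContext example below is for.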


You can also create connections to the Hive metastore, though, and create Hive tables within your Spark application for when you want to persist results to a table rather than work with the temporary tables that Spark SQL usually creates against RDDs. In the code example below, I create a HiveContext, as opposed to the sqlContext I used in the example in the previous blog post, and then use that to create a table in my Hive database, running on a Hortonworks HDP2.1 VM with Spark 1.0.0 pre-built for Hadoop 2.4.0:

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> hiveContext.hql("CREATE TABLE posts_hive (post_id int, title string, postdate string, post_type string, author string, post_name string, generated_url string) row format delimited fields terminated by '|' stored as textfile")
scala> hiveContext.hql("LOAD DATA INPATH '/user/root/posts.psv' INTO TABLE posts_hive")

If I then go into the Hive CLI, I can see this new table listed there alongside the other ones:

hive> show tables;
OK
dummy
posts
posts2
posts_hive
sample_07
sample_08
src
testtable2
Time taken: 0.536 seconds, Fetched: 8 row(s)

What’s even more interesting is that Spark also comes with a HiveServer2-compatible Thrift Server, making it possible for tools such as ODI that connect to Hive via JDBC to run Hive queries through Spark, with the Hive metastore providing the metadata but Spark running as the execution engine.
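
As a rough sketch of what that looks like from the client side, assuming a Spark build that includes the Thrift JDBC server (typically started with sbin/start-thriftserver.sh and listening on the standard HiveServer2 port of 10000), a JDBC client connects exactly as it would to HiveServer2. The hostname and credentials below are placeholders and the Hive JDBC driver needs to be on the classpath; the posts_hive table is the one created above.

// Connect to the Spark SQL Thrift Server with the standard Hive JDBC driver
// and run a HiveQL query; Spark, not MapReduce, executes it.
import java.sql.DriverManager

Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://sparkhost:10000/default", "hive", "")
val stmt = conn.createStatement()
val rs   = stmt.executeQuery("SELECT post_type, COUNT(*) FROM posts_hive GROUP BY post_type")
while (rs.next()) {
  println(rs.getString(1) + ": " + rs.getLong(2))
}
rs.close(); stmt.close(); conn.close()

Nothing changes for the client; only the engine behind the JDBC endpoint does, which is exactly why tools like ODI and OBIEE could pick this up without modification.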


This is subtly different from Hive-on-Spark: Hive's metastore and its support for SerDes and storage handlers still run under the covers, but Spark provides you with a full programmatic environment, making it possible to just expose Hive tables through the Spark layer, or to mix and match data from RDDs, Hive tables and other sources before storing the results and then exposing them through the Hive SQL interface. For example, you could use Oracle SQL*Developer 4.1 with the Cloudera Hive JDBC drivers to connect to this Spark SQL Thrift Server and query the tables just like any other Hive source, but crucially the Hive execution is being done by Spark, rather than MapReduce as would normally happen.


Like Hive-on-Spark, Spark SQL and Hive support within Spark SQL are at early stages, with Spark SQL not yet being supported by Cloudera whereas the core Spark API is. From the work I’ve done with it, it’s not yet possible to expose Spark SQL temporary tables through the HiveServer2 Thrift Server interface, and I can’t see a way of creating Hive tables out of RDDs unless you stage the RDD data to a file in-between. But it’s clearly a promising technology and if it becomes possible to seamlessly combine RDD data and Hive data, and expose Spark RDDs registered as tables through the HiveServer2 JDBC interface it could make Spark a very compelling platform for BI and data analyst-type applications. Oracle’s David Allen, for example, blogged about using Spark and the Spark SQL Thrift Server interface to connect ODI to Hive through Spark, and I’d imagine it’d be possible to use the Cloudera HiveServer2 ODBC drivers along with the Windows version of OBIEE 11.1.1.7 to connect to Spark in this way too – if I get some spare time over the Christmas break I’ll try and get an example working.

Categories: BI & Warehousing

Amazon Echo, The Future or Fad?

Oracle AppsLab - Thu, 2014-12-18 16:10


Last November Amazon announced a new kind of device, part speaker, part personal assistant, and called it Amazon Echo. If you saw the announcement, you might have also seen their quirky infomercial.

The parodies came hours after the announcement, and they were funny. But dismissing this as just a Siri/Cortana/Google Now copycat might miss the potential of this "always listening" device. To be fair, this is not the first device that can do this. I have a Moto X that has an always-on chip waiting for a wake word ("OK Google"), and Google Glass does the same thing ("OK Glass"). But the fact that I don't have to hold the device, be near it, or push a button (Siri) makes this cylinder kind of magical.

It is also worth noting that NONE of these devices are really "always-listening-and-sending-all-your-conversations-to-the-NSA"; in fact, the "always listening" part is local. Once you say the wake word, I guess you'd better make sure you don't spill the beans for the next few seconds, which is the period during which the device listens and does an STT (speech-to-text) operation in the Cloud.

We can all see what Amazon is up to and why this is good for them. Right off the bat you can buy songs with a voice command. You can also add "stuff" to your shopping list, which reminds me of a similar product Amazon had last year, Amazon Dash, which unfortunately is only available in selected markets. The fact is that Amazon wants us to buy more from them, and for some of us that is awesome, right? Prime, two-day shipping, drone delivery, etc.

I have been eyeing these "always listening" devices for a while. The Ubi ($300) and the Ivee ($200) were my two other choices. Both have had mixed reviews, and both have yet to deliver on the promise of an SDK or API. Amazon Echo doesn't have an SDK yet either, but Amazon has posted a link where you can show the Echo team your interest in developing apps for it.

The promise of a true artificial intelligence assistant, or personal contextual assistant (PCA), is coming soon to a house or office near you. Which brings me to my true interest in Amazon Echo: the possibility of creating a "Smart Office" where the assistant will anticipate my day-to-day tasks, set up meetings, remind me of upcoming events, and analyze and respond to email and conversations, all tied to my Oracle Cloud of course.  The assistant will also control physical devices in my house/office: "Alexa, turn on the lights," "Alexa, change the temperature to X," etc.

All in all, it has been fun to request holiday songs around the kitchen and dining room ("Alexa, play Christmas music"). My kids are having a field day trying to ask the most random questions. My wife, on the other hand, is getting tired of the constant interruption of music, but I guess it's the novelty. We shall see if my kids are still friendly to Alexa in the coming months.

In my opinion, people dismissing Amazon Echo will be the same people who said "Why do I need a music player on my phone? I already have ALL my music collection on my iPod" (iPhone naysayers circa 2007), or "Why do I need a bigger iPhone? That 'pad thing is ridiculously huge!" (iPad naysayers circa 2010). And now I have already heard "Why do I want a device that is always connected and listening? I already have Siri/Cortana/Google Now" (Amazon Echo naysayers circa 2014).

Agree, disagree?  Let me know.

FBI warns organizations of dangerous malware

Chris Foot - Thu, 2014-12-18 14:40

Transcript

Hi, welcome to RDX! The Federal Bureau of Investigation recently sent a five-page document to businesses, warning them of a particularly destructive type of malware. It is believed the program was the same one used to infiltrate Sony's databases.

The FBI report detailed the malware's capabilities. Apparently, the software overwrites all information on computer hard drives, including the master boot record. This could prevent servers from accessing critical software, such as operating systems or enterprise applications.

Database data can be lost or corrupted for many reasons. Regardless of whether the data loss was due to a hardware failure, human error or the deliberate act of a cybercriminal, database backups ensure that critical data can be quickly restored.  RDX's backup and recovery experts can design well-thought-out strategies that help organizations protect their databases from any type of unfortunate event.

Thanks for watching!

The post FBI warns organizations of dangerous malware appeared first on Remote DBA Experts.