
Feed aggregator

Digital Delivery "Badge"

Bradley Brown - 14 hours 6 min ago
At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it's important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).
Better buttons drive sales.  Across all our apps and clients, we know we need to really nail our asset delivery process with split tests and our button and banner catalog.  We've simplified the addition of a badge to a client's page: they only have to add four lines of HTML to include our digital delivery badge.
Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:
[Screenshot: sample digital delivery badges]
On our client's web page, it looks something like this:
[Screenshot: the badge as it appears on a client's web page]
The image above (Watch Now on Any Device) is the important component.  This is the component that our clients place somewhere on their web page(s).  When it is clicked, the existing page is dimmed and a lightbox pops up to display the “Why Digital” message:
[Screenshot: the “Why Digital” lightbox message]
What do your customers need to know about in order to help you sell more?

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - 20 hours 10 min ago
This Log Buffer edition hits the ball out of the park by smashing yet another record, surfacing with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.
SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle 12.1.0.2 Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

What Does an App Cost?

Bradley Brown - 20 hours 51 min ago
People commonly ask me this question, and the answer has a very wide range.  You can get an app built on oDesk for nearly free - i.e. $2,000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

Our first round of apps at InteliVideo cost us $2,000-10,000 each to develop.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than one of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:


When I drilled into a specific product, again I saw a list of the videos within the product:

I can download or play a video in a product:


Here's what it looks like for The Dailey Method:



Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right not down) through those 73 videos here:


Or if I click on the title, I get to see a list of the videos in more detail:


Notice the colors (branding) are shown everywhere here.  I scrolled up to look through those videos:


Here's a specific video that talked about a technique to get your sled unstuck:


Here's what the app looks like when I'm a customer of The Dailey Method.  Again, notice the branding everywhere:

Looking at a specific video and its details:


We built a native app for iOS (iPad, iPhone, iPod), Android, Windows and Mac that has all of the same look, feel, functionality, etc.  This was a MAJOR undertaking!

The good news is that if you want to start a business and build an MVP (Minimum Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!


e-Literate Top 20 Posts For 2014

Michael Feldstein - Sat, 2014-12-20 12:17

I typically don’t write year-end reviews or top 10 (or 20) lists, but I need to work on our consulting company finances. At this point, any distraction seems more enjoyable than working in QuickBooks.

We’ve had a fun year at e-Literate, and one recent change is that we are now more willing to break stories when appropriate. We typically comment on ed tech stories a few days after the release, providing analysis and commentary, but there are several cases where we felt a story needed to go public. In such cases (e.g. Unizin creation, Cal State Online demise, management changes at Instructure and Blackboard) we tend to break the news objectively, providing mostly descriptions and explanations, allowing others to provide commentary.

The following list is based on Jetpack stats on WordPress, which does not capture people who read posts through RSS feeds (we send out full articles through the feed). So the stats have a bias towards people who come to e-Literate for specific articles rather than our regular readers. We also tend to get longer-term readership of articles over many months, so this list also has a bias for articles posted a while ago.

With that in mind, here are the top 20 most read articles on e-Literate in terms of page views for the past 12 months along with publication date.

  1. Can Pearson Solve the Rubric’s Cube? (Dec 2013) – This article proves that people are willing to read a 7,000 word post published on New Year’s Eve.
  2. A response to USA Today article on Flipped Classroom research (Oct 2013) – This article is our most steady one, consistently getting around 100 views per day.
  3. Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition (May 2014) – This is the article where we broke the story about Unizin, based largely on a presentation at Colorado State University.
  4. Blackboard’s Big News that Nobody Noticed (Jul 2014) – This post commented on the Blackboard users’ conference and some significant changes that got buried in the keynote and much of the press coverage.
  5. Early Review of Google Classroom (Jul 2014) – Meg Tufano got pilot access to the new system and allowed me to join the testing; this article mostly shares Meg’s findings.
  6. Why Google Classroom won’t affect institutional LMS market … yet (Jun 2014) – Before we had pilot access to the system, this article described the likely market effects from Google’s new system.
  7. Competency-Based Education: An (Updated) Primer for Today’s Online Market (Dec 2013) – Given the sudden rise in interest in CBE, this article updated a 2012 post explaining the concept.
  8. The Resilient Higher Ed LMS: Canvas is the only fully-established recent market entry (Feb 2014) – Despite all the investment in ed tech and market entries, this article noted how stable the LMS market is.
  9. Why VCs Usually Get Ed Tech Wrong (Mar 2014) – This post combined references to “selling Timex knockoffs in Times Square” with a challenge to the application of disruptive innovation.
  10. New data available for higher education LMS market (Nov 2013) – This article called out the Edutechnica and ListEdTech sites with their use of straight data (not just sampling surveys) to clarify the LMS market.
  11. InstructureCon: Canvas LMS has different competition now (Jun 2014) – This was based on the Instructure users’ conference and the very different attitude from past years.
  12. Dammit, the LMS (Nov 2014) – This rant called out how the LMS market is largely following consumer demand from faculty and institutions.
  13. Why Unizin is a Threat to edX (May 2014) – This follow-on commentary tried to look at what market effects would result from Unizin introduction.
  14. State of the Anglosphere’s Higher Education LMS Market: 2013 Edition (Nov 2013) – This was last year’s update of the LMS squid graphic.
  15. Google Classroom: Early videos of their closest attempt at an LMS (Jun 2014) – This article shared early YouTube videos showing people what the new system actually looked like.
  16. State of the US Higher Education LMS Market: 2014 Edition (Oct 2014) – This was this year’s update of the LMS squid graphic.
  17. About Michael – How big is Michael’s fan club?
  18. What is a Learning Platform? (May 2012) – The old post called out and helped explain the general move from monolithic systems to platforms.
  19. What Faculty Should Know About Adaptive Learning (Dec 2013) – This was a reprint of an invited article for the American Federation of Teachers.
  20. Instructure’s CTO Joel Dehlin Abruptly Resigns (Jul 2014) – Shortly after the Instructure users’ conference, Joel resigned from the company.

Well, that was more fun than financial reporting!

The post e-Literate Top 20 Posts For 2014 appeared first on e-Literate.

Exadata Patching Introduction

The Oracle Instructor - Sat, 2014-12-20 10:24

These are what I consider the most important points about Exadata Patching:

Where is the most recent information?

MOS Note 888828.1 is your first read whenever you think about Exadata Patching

What is to patch with which utility?

[Diagram: Exadata components and the utilities used to patch them]

Expect quarterly bundle patches for the storage servers and the compute nodes. The other components (InfiniBand switches, Cisco Ethernet switch, PDUs) are patched less frequently and are therefore not in the picture.

The storage servers have their software image (which includes firmware, OS and Exadata software) exchanged completely for the new one using patchmgr. The compute nodes get OS (and firmware) updates with dbnodeupdate.sh, a tool that accesses an Exadata yum repository. Bundle patches for the Grid Infrastructure and for the Database Software are applied with opatch.

Rolling or non-rolling?

This is the sensitive part! Technically, you can always apply the patches for the storage servers and the patches for the compute node OS and Grid Infrastructure in a rolling fashion, taking down only one server at a time. The RAC databases running on the Database Machine will remain available during the patching. But should you do that?

Let’s focus on the storage servers first: rolling patches are recommended only if you have ASM diskgroups with high redundancy or if you have a standby site to fail over to in case of trouble. In other words: if you have a quarter rack without a standby site, don’t use rolling patches! That is because the DBFS_DG diskgroup that contains the voting disks cannot have high redundancy in a quarter rack with just three storage servers.

Okay, so you have a half rack or bigger. Expect one storage server patch to take about two hours. That adds up to 14 hours of patching time (for seven storage servers) with the rolling method. Make sure that management is aware of that before they decide on the strategy.

Now to the compute nodes: if the patch is RAC rolling applicable, you can apply it rolling regardless of the ASM diskgroup redundancy. If a compute node gets damaged during the rolling upgrade, no data loss will happen. On a quarter rack without a standby site, however, you put availability at risk, because there are only two compute nodes and one could fail while the other is down for patching.

Why you will want to have a Data Guard Standby Site

Apart from the obvious reason for Data Guard – Disaster Recovery – there are several benefits associated with the patching strategy:

You can afford to do rolling patches with ASM diskgroups using normal redundancy and with RAC clusters that have only two nodes.

You can apply the patches on the standby site first and test it there – using the snapshot standby database functionality (and using Database Replay if you licensed Real Application Testing)

A patch set can be applied on the standby first and the downtime for end users can be reduced to the time it takes to do a switchover

A release upgrade can be done with a (Transient) Logical Standby, reducing again the downtime to the time it takes to do a switchover

I suppose this will be my last posting in 2014, so Happy Holidays and a Happy New Year to all of you :-)


Tagged: exadata
Categories: DBA Blogs

PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is support for Oracle Database Materialized Views. In a nutshell, a Materialized View can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online; instead it is retrieved from the latest snapshot. This can greatly improve query performance, particularly for complex SQL or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:


  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining a staging table (the Materialized View) through triggers whenever the source table changes, but all this complexity is hidden from the developer. Unfortunately, this method is only available for join-based or single-table aggregate views.

Although it has the benefit of providing almost online information, you would normally use On Commit for views based on tables that do not change very often. Because the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected. Hint: you would normally use the On Commit method for views based on control tables, not transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where the on-demand refreshes can be configured to run periodically.




In case you choose the On Demand method, the data refresh can actually be done following two different methods:


  • Fast, which just refreshes the rows in the Materialized View affected by the changes made to the source records.


  • Full, which fully recalculates the Materialized View contents. This method is preferable when large-volume changes are usually performed against the source records between refreshes. Also, this option is required after certain types of updates on the source records (i.e. INSERT statements using the APPEND hint). Finally, this method is required when one of the source records is also a Materialized View and has been refreshed using the Full method. A SQL sketch of an on-demand Materialized View and its refresh follows below.
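To make the refresh options more concrete, here is a minimal, hand-written SQL sketch of the underlying Oracle objects. The table and view names are hypothetical and this is not the DDL that Application Designer generates:

CREATE MATERIALIZED VIEW dept_headcount_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  AS SELECT deptid, COUNT(*) AS headcount
       FROM ps_job_example
      GROUP BY deptid;

-- An on-demand refresh, issued later by an administrator or a scheduled job.
-- method => 'C' is a complete (Full) refresh; 'F' would be a Fast refresh,
-- which also requires a materialized view log on the source table.
BEGIN
  DBMS_MVIEW.REFRESH('DEPT_HEADCOUNT_MV', method => 'C');
END;
/

In PeopleTools terms, the Materialized View Maintenance page and the Process Scheduler take care of issuing this kind of refresh for you.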


How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly in the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. First of all, Application Designer now shows new options for View records:



We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be calculated when the build is executed, or whether that can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to indicate to the database that the view needs to be refreshed every n seconds.

Limitations
The main disadvantage of using Materialized Views is that they are specific to Oracle Database. They will not work if you are using any other platform; in that case the record acts like a normal view, which keeps a similar functional behaviour but without the performance advantages of Materialized Views.

Conclusions
All in all, Materialized Views provide a very interesting feature for improving system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over all these years... :-)

Consumer Security for the season and Today's World

PeopleSoft Technology Blog - Fri, 2014-12-19 13:41

Just to go beyond my usual security sessions, I was asked recently to talk to a local business and consumer group about personal cyber security. Here is the document I used for the session; you might find some useful tips.

Protecting your online shopping experience

- check retailer returns policy

- use a credit card rather than debit card, or check the protection on the debit card

- use a temporary/disposable credit card e.g. ShopSafe from Bank of America

- use a low limit credit card - with protection, e.g. AMEX green card

- check your account for random small amount charges and charitable contributions

- set spending and "card not present" alerts

Protecting email

- don't use same passwords for business and personal accounts

- use a robust email service provider

- set junk/spam threshold in your email client

- only use web mail for low risk accounts (see Note below)

- don't click on links in the email, DON’T click on links in email – no matter who you think sent it

Protecting your computer

- if you depend on a computer/laptop/tablet for business, ONLY use it for business

- don't share your computer with anyone, including your children

- if you provide your children with a computer/laptop, refresh them from "recovery disks" on a periodic basis

- teach children value of backing up important data

- if possible have your children only use their laptops/devices in family rooms where the activity can be passively observed

- use commercial, paid subscription, antivirus/anti malware on all devices (see Note below)

- carry and use a security cable when traveling or away from your office

Protecting your smart phone/tablet

- don't share your device

- make sure you have a secure lock phrase/PIN and set the idle timeout

- don't recharge it using the USB port on someone else's laptop/computer

- ensure the public Wi-Fi which you use is a trusted Wi-Fi (also - see Note below)

- store your data in the cloud, preferably not (or not only) the phone/tablet

- don't have the device "remember" your password, especially for sensitive accounts

- exercise caution when downloading software e.g. games/apps, especially "free" software (see Note below)

Protect your social network

- don't mix business and personal information in your social media account

- use separate passwords for business and personal social media accounts

- ensure you protect personal information from the casual user

- check what information is being shared about you or photos tagged by your "friends"

- don't share phone numbers or personal/business contact details,
e.g. use the "ask me for my ..." feature

General protection and the “Internet of Things”

- be aware of cyber stalking

- be aware of surreptitious monitoring
e.g. “Google Glass” and smart phone cameras

- consider “nanny” software, especially for children’s devices

- be aware of “click bait” – e.g. apparently valid “news” stories which are really sponsored messages

- be aware of ATM “skimming”, including self serve gas pumps

- be aware of remotely enabled camera and microphone (laptop, smart phone, tablet)

Note: Remember, if you’re not paying for the product, you ARE the product

Important! PeopleTools Requirements for PeopleSoft Interaction Hub

PeopleSoft Technology Blog - Fri, 2014-12-19 12:04

The PeopleSoft Interaction Hub* follows a release model of continuous delivery.  With this release model, Oracle delivers new functionality and enhancements on the latest release throughout the year, without requiring application upgrades to new releases.

PeopleTools is a critical enabler for delivering new functionality and enhancements for releases on continuous delivery.  The PeopleTools adoption policy for applications on continuous release is designed to provide reasonable options for customers to stay current on PeopleTools while adopting newer PeopleTools features. The basic policy is as follows:

Interaction Hub customers must upgrade to a PeopleTools release no later than 24 months after that PeopleTools release becomes generally available.  

For example, PeopleTools 8.53 was released in February 2013. Therefore, customers who use Interaction Hub will be required to upgrade to PeopleTools 8.53 (or newer, such as PeopleTools 8.54) no later than February 2015 (24 months after the General Availability date of PeopleTools 8.53). As of February 2015, product maintenance and new features may require PeopleTools 8.53. 

Customers should start planning their upgrades if they are on PeopleTools releases that are more than 24 months old.  See the Lifetime Support Summary for PeopleSoft Releases (doc id 1348959.1) in My Oracle Support for details on PeopleTools support policies.

* The PeopleSoft Interaction Hub is the latest branding of the product.  These support guidelines also apply to the same product under the names PeopleSoft Enterprise Portal, PeopleSoft Community Portal, and PeopleSoft Applications Portal.


Do You Really Need a Content Delivery Network (CDN)?

Bradley Brown - Fri, 2014-12-19 10:39
When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

So when you hear the word "content" what is it that you think about?  What is content?  First off, it's digital content.  So...website pages?  That's what I initially thought of.  But it's really any digital content.  Audio books, videos, PDFs - files of any type, any size.

When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency translates to "slowness" when you're trying to download a video in Japan, because the file has to move across the ocean.  The way Amazon handles this is to move the file across the ocean using their fast pipes (high-speed internet) between their data centers, so the customer effectively downloads the file directly from Japan.

Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

Log Buffer #402, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park by smashing yet another record, surfacing with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.

SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle 12.1.0.2 Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

What Do Oracle Audit Vault Collection Agents Do?

The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL Server, and IBM DB2.

Audit Vault Collectors for Oracle Databases*

Audit Trail Type: Database audit trail
How Enabled: For standard audit records, the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED.  For fine-grained audit records, the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.
Collector Name: DBAUD

Audit Trail Type: Operating system audit trail
How Enabled: For standard audit records, the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML, EXTENDED.  For syslog audit trails, AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE; in addition, the AUDIT_SYSLOG_LEVEL parameter must be set.  For fine-grained audit records, the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.
Collector Name: OSAUD

Audit Trail Type: Redo log files
How Enabled: The table that you want to audit must be eligible.  See "Creating Capture Rules for Redo Log File Auditing" for more information.
Collector Name: REDO

 *Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used.
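As a concrete (and hypothetical) illustration of the "How Enabled" column for the DBAUD collector, the following sketch defines a fine-grained audit policy whose records go to the database audit trail, where the DBAUD collector agent can pick them up.  The schema, table and policy names are made up for the example:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    policy_name   => 'SALARY_ACCESS_POLICY',
    audit_column  => 'SALARY',
    -- write the audit records to the database audit trail, with extended detail
    audit_trail   => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/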

If you have questions, please contact us at info@integrigy.com

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

OBIEE Enterprise Security

Rittman Mead Consulting - Fri, 2014-12-19 05:35

The Rittman Mead Global Services team have recently been involved in a number of security architecture implementations and have produced a security model which meets a diverse set of requirements.  Using our experience and standards we have been able to deliver a robust model that addresses the common questions we routinely receive around security, such as:

“What considerations do I need to make when exposing Oracle BI to the outside world?”

or

“How can I make a flexible security model which is robust enough to meet the demands of my organisation but easy to maintain?”

The first question is based on a standard enterprise security model where the Oracle BI server is exposed by a web host, enabling SSL and tightening up access security.  This request can be complex to achieve but is something that we have implemented many times now.

The second question is much harder to answer, but our experience has led us to develop a multi-dimensional inheritance security model that we have implemented with numerous clients and that has yielded excellent results.

What is a Multi-dimensional Inheritance Security Model?

The wordy title is actually a simple concept that incorporates 5 key areas:

  • Easy to set up and maintain
  • Flexible
  • Durable
  • Expandable
  • Consistent throughout the product

While there are numerous ways of implementing a security model in Oracle BI, by sticking to the key concepts above we ensure we get it right.  The largest challenge we face in BI is the different types of security required, and all three need to work in harmony:

  • Application security
  • Content security
  • Data security
Understanding the organisation makeup

The first approach is to consider the makeup of a common organisation and build our security around it.

[Diagram: departmental makeup of a typical organisation]

This diagram shows different departments (Finance, Marketing, Sales) whose data is specific to them, so normally departmental users should only see the data that is relevant to them.  In contrast, the IT department, who are developing the system, need visibility across all data, as do the Directors.

 

What types of users do I have?

Next is to consider the types of users we have:

  1. BI Consumer: This will be the most basic and common user who needs to access the system for information.
  2. BI Analyst: As an Analyst the user will be expected to generate more bespoke queries and need ways to represent them. They will also need an area to save these reports.
  3. BI Author: The BI Author will be able to create content and publish that content for the BI Consumers and BI Analysts.
  4. BI Department Admin: The BI Department Admin will be responsible for permissions for their department as well as act as a focal point user.
  5. BI Developer: The BI Developer can be thought of as the person(s) who creates models in the RPD and will need additional access to the system for testing of their models. They might also be responsible for delivering Answers Requests or Dashboards in order to ‘Prove’ the model they created.
  6. BI Administrator:  The Administrator will be responsible for the running of the BI system and will have access to every role.  Most Administrator tasks do not require SQL or data warehouse skills, and the role is generally kept separate from the BI Developer role.

The types of users here are a combination of every requirement we have seen and might not be required by every client.  The order they are in shows the implied inheritance, so the BI Analyst inherits permissions and privileges from the BI Consumer and so on.

What Types do I need?

The size of the organization determines what types of user groups are required. By default Oracle ships with:

  1. BI Consumer
  2. BI Author
  3. BI Administrator

Typically we would recommend inserting the BI Analyst into the default groups:

  1. BI Consumer
  2. BI Analyst
  3. BI Author
  4. BI Administrator

This works well when there is a central BI team who develop content for the whole organization. The structure would look like this:

[Diagram: group structure with a central BI team]

 

For larger organizations, where dashboard development and permissions are handled across multiple BI teams, the BI Administrator group can be used.  Typically we see the BI team as a central Data Warehouse team who deliver the BI model (RPD) to the multiple BI teams.  In a large organization the administration of Oracle BI should be handled by someone who isn’t the BI Developer; the structure could look like this:

[Diagram: group structure for a larger organisation with multiple BI teams]

 

 

Permissions on groups

Each of the groups will require different permissions; at a high level the permissions would be:

 

BI Consumer
  • View Dashboards
  • Save User Selections
  • Subscribe to iBots

BI Analyst
  • Access to Answers and a standard set of views
  • Some form of storage
  • Access to Subject Areas

BI Author
  • Access to Create/Modify Dashboards
  • Save Predefined Sections
  • Access to Action Links
  • Access to Dashboard Prompts
  • Access to BI Publisher

BI Department Admin
  • Ability to apply permissions and manage the Web Catalog

BI Developer
  • Advanced access to Answers
  • Access to all departments

BI Administrator
  • Everything

 

Understanding the basic security mechanics in 10g and 11g

In Oracle BI 10g the majority of the security is handled in the Oracle BI Server.  This would normally be done through initialisation blocks, which would authenticate the user against an LDAP server and then run a query against database tables to populate the user into ‘Groups’ used in the RPD and ‘Web Groups’ used in the Presentation Server.  These groups would have to match at each level: database, Oracle BI Server and Oracle BI Presentation Server.
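As a rough illustration of the 10g approach (the security table and column names here are hypothetical, not a standard schema), a row-wise initialisation block would typically run a query along these lines to populate the GROUP session variable after the user had been authenticated:

-- ':USER' is the OBIEE session variable substituted by the BI Server at logon;
-- bi_user_groups is a made-up security table mapping logins to group names
SELECT 'GROUP', g.group_name
  FROM bi_user_groups g
 WHERE UPPER(g.login_id) = UPPER(':USER')

The group names returned would then have to match the RPD Groups and Presentation Server Web Groups mentioned above.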

With the addition of Enterprise Manager and Weblogic, the security elements in Oracle BI 11g changed radically.  Authenticating the user in the Oracle BI Server is no longer the recommended way and is limited on Linux. While the RPD Groups and Presentation Server Web Groups still exist, they don’t need to be used.  Users are now authenticated against Weblogic.  This can be done by using Weblogic’s own users and groups or by plugging it into a choice of LDAP servers.  The end result will be Groups and Users that exist in Weblogic.  The groups then need to be mapped to Application Roles in Enterprise Manager, which can be seen by the Oracle BI Presentation Services and Oracle BI Server.  It is recommended to create a one-to-one mapping for each group.

[Diagram: Weblogic groups mapped to Application Roles in Enterprise Manager]

 

What does all this look like then?

Assuming this is for an SME-sized organization where the Dashboard development (BI Author) is done by the central BI team, the groups would look like this:

 

[Diagram: group inheritance model for an SME-sized organisation]

 

The key points are:

  • The generic BI Consumer/Analyst groups give their permissions to the department versions
  • No users should be in the generic BI Consumer/Analyst groups
  • Only users from the BI team should be in the generic BI Author/Administrator group
  • New departments can be easily added
  • The lines denote the inheritance of permissions and privileges

 

What's next – the Web Catalog?

The setup of the web catalog is very important to ensure that it does not get unwieldy, so it needs to reflect the security model. We would recommend setting up some base folders which look like this:

[Diagram: recommended web catalog folder structure]

 

Each department has its own folder and four sub-folders. The permission applied to each department’s root folder is BI Administrators, so full control is possible across the top.  This is also true for every folder below; however, they will have additional explicit permissions applied to ensure that the department cannot create any more than the four sub-folders.

  • The Dashboard folder is where the dashboards go. The department's BI Developers group will have full control and the department's BI Consumers will have read access. This allows the department's BI Developers to create dashboards, the department's BI Administrators to apply permissions, and the department's consumers and analysts to view the content.
  • The same permissions are applied to the Dashboard Answers folder to the same effect.
  • The Development Answers folder has full control given to the department's BI Developers and no access for the department's BI Analysts or BI Consumers. This folder is mainly for the department's BI Developers to store answers while they are in the process of development.
  • The Analyst folder is where the department's BI Analysts can save Answers, therefore they will need full control of this folder.

I hope this article gives some insight into Security with Oracle BI.  Remember that our Global Services products offer a flexible support model where you can harness our knowledge to deliver your projects in a cost effective manner.

Categories: BI & Warehousing

Elephants and Tigers - V8 of the Website

Bradley Brown - Thu, 2014-12-18 21:54
It's amazing how much work goes into a one-page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part on one hand, and the look and feel / experience is the time-consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

Since we're a video company, it's important that the first page show some video...which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations that we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV Shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question "What do our apps look like?"    Everybody wants analytics, right?  You want to know who watched what - for how long, when and where!  What about all of the ways you can monetize - subscriptions (SVOD), transactional (TVOD) - rentals and purchases, credit-based purchases, and more?  What about those enterprises who need to restrict (or allow) viewing based on location?
Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.
Let's walk through this step by step.  In some ways it's like producing a movie: a lot of moving parts, a lot of post-editing, and ultimately it comes down to the final cut.
What is it that you want to deliver?  Spoken word?  TV Shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize, or other topics...

Let's talk about why to go digital.  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.

Any device, anywhere, any time!  That's how your customers want the content.

We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.


It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, which allow our clients to have a full branding experience, as you see here for UFC FIT.


We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from the cart to authorize a new customer.


I'm a data guy at heart, so we track everything about who's watching what, where they are watching from, and so much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upselling to existing customers.  It's always easier to sell to someone who has already purchased than to a new customer.


What website would be complete without a full list of client testimonials?


If you can dream up a way to monetize your content, we can implement it.  From credit-based subscription systems to straight-out purchases...we have it all!

What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.


Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: http://intelivideo.com/sign-up/ 

We handle payment processing for you if you would like.  But...most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart.  So we integrate as needed.

Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and tens of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive, all of the metadata (descriptions) and the mapping of each...and we load it all up for you.

Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out of the box implementations are simple...


Once our clients sign off on everything and our implementation team does as well...it's time to buy your media, promote your products and start selling.  We handle the delivery.


For those who would like to sign up or need more information, what website would be complete without a contact me page?  There are other pages (like our blog, about us, etc), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website...for small businesses.


As I said at the beginning of this blog post...it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!
This new version of our website should be live in the next day or so...as always, I'd love to hear your feedback!

Helix Education puts their competency-based LMS up for sale

Michael Feldstein - Thu, 2014-12-18 17:05

Back in September I wrote about the Helix LMS, providing an excellent view into competency-based education and how learning platforms need to be designed differently for this mode. The traditional LMS – based on a traditional model using grades, seat time and synchronous cohorts of students – is not easily adapted to serve CBE needs such as the following:

  1. Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
  2. A flexible time frame to master these skills
  3. A variety of instructional activities to facilitate learning
  4. Criterion-referenced testing of the required outcomes
  5. Certification based on demonstrated learning outcomes
  6. Adaptable programs to ensure optimum learner guidance

In a surprise move, Helix Education is putting the LMS up for sale.  Helix Education provided e-Literate the following statement to explain the changes, at least from a press release perspective.

With a goal of delivering World Class technologies and services, a change we are making is with Helix LMS. After thoughtful analysis and discussion, we have decided to divest (sell) Helix LMS. We believe that the best way for Helix to have a positive impact on Higher Education is to:

  • Be fully committed and invest properly in core “upstream” technologies and services that help institutions aggregate, analyze and act upon data to improve their ability to find, enroll and retain students and ensure their success
  • Continue to build and share our thought leadership around TEACH – program selection, instructional design and faculty engagement for CBE, on-campus, online and hybrid delivery modes.
  • Be LMS neutral and support whichever platform our clients prefer. In fact, we already have experience in building CBE courses in the top three LMS solutions.

There are three aspects of this announcement that are quite interesting to me.

Reversal of Rebranding

Part of the surprise is that Helix rebranded the company based on their acquisition of the LMS – this was not just a simple acquisition of a learning platform – and just over a year after this event Helix Education is reversing course, selling the Helix LMS and going LMS-neutral. From the earlier blog post [emphasis added]:

In 2008 Altius Education, started by Paul Freedman, worked with Tiffin University to create a new entity called Ivy Bridge College. The goal of Ivy Bridge was to help students get associate degrees and then transfer to a four-year program. Altius developed the Helix LMS specifically for this mission. All was fine until the regional accrediting agency shut down Ivy Bridge with only three months notice.

The end result was that Altius sold the LMS and much of the engineering team to Datamark in 2013. Datamark is an educational services firm with a focus on leveraging data. With the acquisition of the Helix technology, Datamark could expand into the teaching and learning process, leading them to rebrand as Helix Education – a sign of the centrality of the LMS to the company’s strategy. Think of Helix Education now as an OSP (a la carte services that don’t require tuition revenue sharing) with an emphasis on CBE programs.

Something must have changed in their perception of the market to cause this change in direction. My guess is that they are getting pushback from schools who insist on keeping their institutional LMS, even with the new CBE programs. Helix states they have worked with “top three LMS solutions”, but as seen in the demo (read the first post for more details), capabilities such as embedding learning outcomes throughout a course and providing a flexible time frame work well outside the core design assumptions of a traditional LMS. I have yet to see an elegant design for CBE with a traditional LMS. I’m open to being convinced otherwise, but count me as skeptical.

Upstream is Profitable

The general move sounds like the main component is the “upstream” element. To be more accurate, it’s more a matter of staying “upstream” and choosing not to move downstream. It’s difficult, and not always profitable, to deal with implementing academic programs. Elements built on enrollment and retention are quite honestly much more profitable. Witness the recent sale of the enrollment consulting firm Royall & Company for $850 million.

The Helix statement describes their TEACH focus as one of thought leadership. To me this sounds like the core business will be on enrollment, retention and data analysis while they focus academic efforts not on direct implementation products and services, but on white papers and presentations.

Meaning for Market

Helix Education was not the only company building CBE-specific learning platforms to replace the traditional LMS. FlatWorld Knowledge built a platform that is being used at Brandman University. LoudCloud Systems built a new CBE platform FASTrak – and they already have a traditional LMS (albeit one designed with a modern architecture). Perhaps most significantly, the CBE pioneers Western Governors University and Southern New Hampshire University’s College for America (CfA) built custom platforms based on CRM technology (i.e. Salesforce) based on their determination that the traditional LMS market did not suit their specific needs. CfA even spun off their learning platform as a new company – Motivis Learning.

If Helix Education is feeling the pressure to be LMS-neutral, does that mean that these other companies are or will be facing the same? Or, is Helix Education’s decision really based on company profitability and capabilities that are unique to their specific situation?

The other side of the market effect will be determined by which company buys the Helix LMS. Will a financial buyer (e.g. private equity) choose to create a standalone CBE platform company? Will a traditional LMS company buy the Helix LMS to broaden their reach in the quickly-growing CBE space (350 programs in development in the US)? Or will an online service provider and partial competitor of Helix Education buy the LMS? It will be interesting to see which companies bid on this product line and who wins.

Overall

If I find out more about what this change in direction means for Helix Education or for competency-based programs in general, I’ll share in future posts.

The post Helix Education puts their competency-based LMS up for sale appeared first on e-Literate.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance

Rittman Mead Consulting - Thu, 2014-12-18 16:26

The other week I posted a three-part series (part 1, part 2 and part 3) on going beyond MapReduce for Hadoop-based ETL, where I looked at a typical Apache Pig dataflow-style ETL process and showed how Apache Tez and Apache Spark can potentially make these processes run faster and make better use of in-memory processing. I picked Pig as a data processing environment because the multi-step data transformations it creates translate into lots of separate MapReduce jobs in traditional Hadoop ETL environments, but run as a single DAG (directed acyclic graph) under Tez and Spark, which can potentially use memory to pass intermediate results between steps rather than writing all those intermediate datasets to disk.

But tools such as OBIEE and ODI use Apache Hive to interface with the Hadoop world, not Pig, so it's improvements to Hive that will have the biggest immediate impact on the tools we use today. And what’s interesting is the development work that's going on around Hive in this area, with four different “next-generation Hive” initiatives that could end up making OBIEE and ODI on Hadoop run faster:

  • Hive-on-Tez (or “Stinger”), principally championed by Hortonworks, along with Stinger.next which will enable ACID transactions in HiveQL
  • Hive-on-Spark, a more limited port of Hive to run on Spark and backed by Cloudera amongst others
  • Spark SQL within Apache Spark, which enables SQL queries against Spark RDDs (and Hive tables), and exposes a HiveServer2-compatible Thrift Server for JDBC access
  • Vendor initiatives that build on Hive but are mainly around integration with their RDBMS engines, for example Oracle Big Data SQL

Vendor initiatives like Oracle’s Big Data SQL and Cloudera Impala have the benefit of working now (and being supported), but usually come with some sort of penalty for not working directly within the Hive framework. Oracle’s Big Data SQL, for example, can read data from Hive (very efficiently, using Exadata SmartScan-type technology) but can’t write back to Hive, and currently pulls all the Hive data into Oracle if you try to join Oracle and Hive data together. Cloudera’s Impala, on the other hand, is lightning-fast and works directly on the Hadoop platform, but doesn’t support the same ecosystem of SerDes and storage handlers that Hive supports, taking away one of the key flexibility benefits of working with Hive.

So what about the attempts to extend and improve Hive, or include Hive interfaces and compatibility in Spark? In most cases an ETL routine written as a series of Hive statements isn’t going to be as fast or resource-efficient as a custom Spark program, but if we can make Hive run faster or have a Spark application masquerade as a Hive database, we could effectively give OBIEE and ODI on Hadoop a “free” platform performance upgrade without having to change the way they access Hadoop data. So what are these initiatives about, and how usable are they now with OBIEE and ODI?

Probably the most ambitious next-generation Hive project is the Stinger initiative, backed by Hortonworks and based on the Apache Tez framework that runs on Hadoop 2.0 and YARN. Stinger aimed first to port Hive to run on Tez (which runs MapReduce jobs but enables them to potentially run as a single DAG), and then to add ACID transaction capabilities so that you can UPDATE and DELETE from a Hive table as well as INSERT and SELECT, using a transaction model that allows you to roll back uncommitted changes (diagram from the Hortonworks Stinger.next page).


Tez is more of a set of developer APIs than the full data discovery / data analysis platform that Spark aims to provide, but it’s a technology that’s available now as part of the Hortonworks HDP 2.2 platform, and as I showed in the blog post a few days ago, an existing Pig script that you run as-is in a Tez environment typically runs twice as fast as when it's using MapReduce to move data around (with the usual testing caveats applying, YMMV etc). Hive should be the same as well, giving us the ability to take Hive transformation scripts and run them unchanged except for specifying Tez at the start as the execution engine.
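To give a feel for how little changes from the Hive developer's point of view, here's a minimal, hypothetical HiveQL sketch (table names made up) where only the first line differs from the MapReduce version:

-- switch this Hive session over to the Tez execution engine
-- (requires a Tez-enabled Hive build, e.g. HDP 2.2)
set hive.execution.engine=tez;

-- the transformation itself is unchanged
INSERT OVERWRITE TABLE post_counts
SELECT author, COUNT(*) AS post_count
FROM   posts
GROUP  BY author;

Setting hive.execution.engine back to mr reverts the same script to classic MapReduce execution.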


Hive on Tez is probably the first of these initiatives we’ll see working with ODI and OBIEE, as ODI has just been certified for Hortonworks HDP2.1, and the new HDP2.2 release is the one that comes with Tez as an option for Pig and Hive query execution. I’m guessing ODI will need to have its Hive KMs updated to add a new option to select Tez or MapReduce as the underlying Hive execution engine, but otherwise I can see this working “out of the box” once ODI support for HDP2.2 is announced.

Going back to the last of the three blog posts I wrote on going beyond MapReduce, many in the Hadoop industry back Spark rather than Tez as the successor to MapReduce, as it's a more mature implementation that goes beyond the developer-level APIs that Tez aims to provide to make Pig and Hive scripts run faster. As we’ll see in a moment, Spark comes with its own SQL capabilities and a Hive-compatible JDBC interface, but the other “swap-out-the-execution-engine” initiative to improve Hive is Hive on Spark, a port of Hive that allows Spark to be used as Hive’s execution engine instead of just MapReduce.

Hive on Spark is at an earlier stage of development than Hive on Tez, with the first demo given at the recent Strata + Hadoop World New York, and specific builds of Spark and versions of Hive needed to get it running. Interestingly though, a post went up on the Cloudera blog a couple of days ago announcing an Amazon AWS AMI machine image that you could use to test Hive on Spark, which, though it doesn’t come with a full CDH or HDP installation or features such as a HiveServer JDBC interface, comes with a small TPC-DS dataset and some sample queries that we can use to get a feel for how it works. I used the AMI image to create an Amazon AWS m3.large instance and gave it a go.

By default, Hive in this demo environment is configured to use Spark as the underlying execution engine. Running a couple of the TPC-DS queries first using this Spark engine, and then switching back to MapReduce by running the command “set hive.execution.engine=mr” within the Hive CLI, I generally found queries using Spark as the execution engine ran 2-3x faster than the MapReduce ones.


You can’t read too much into this timing as the demo AMI is really only to show off the functional features (Hive using Spark as the execution engine) and no work on performance optimisation has been done, but it’s encouraging even at this point that it’s significantly faster than the MapReduce version.

Long-term the objective is to have both Tez and Spark available as options as execution engines under Hive, along with MapReduce, as the diagram below from a presentation by Cloudera’s Szenon Ho shows; the advantage of building on Hive like this rather than creating your own new SQL-on-Hadoop engine is that you can make use of the library of SerDes, storage handlers and so on that you’d otherwise need to recreate for any new tool.


The third major SQL-on-Hadoop initiative I’ve been looking at is Spark SQL within Apache Spark. Unlike Hive on Spark, which aims to swap out the compiler and execution engine parts of Hive but otherwise leave the rest of the product unchanged, Apache Spark as a whole is a much more freeform, flexible data query and analysis environment that’s aimed more at analysts than business users looking to query their dataset using SQL. That said, Spark has some cool SQL and Hive integration features that make it an interesting platform for doing data analysis and ETL.

In my Spark ETL example the other day, I loaded log data and some reference data into RDDs and then filtered and transformed them using a mix of Scala functions and Spark SQL queries. Running on top of the core Spark APIs, Spark SQL allows you to register temporary tables within Spark that map onto RDDs, and gives you the option of querying your data using either familiar SQL relational operators or the more functional-programming-style Scala language.
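For example, the sort of statement you might run against a temporary table registered over an RDD looks like the following (a sketch only: the accessLog table name is illustrative, and in the Spark 1.0 shell the statement would be passed as a string to sqlContext.sql()):

-- an illustrative Spark SQL query against a registered temporary table
select status, count(*) as hits
from   accessLog
group  by status
order  by hits desc;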


You can also create connections to the Hive metastore though, and create Hive tables within your Spark application for when you want to persist results to a table rather than work with the temporary tables that Spark SQL usually creates against RDDs. In the code example below, I create a HiveContext as opposed to the sqlContext that I used in the example on the previous blog, and then use that to create a table in my Hive database, running on a Hortonworks HDP2.1 VM with Spark 1.0.0 pre-built for Hadoop 2.4.0:

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> hiveContext.hql("CREATE TABLE posts_hive (post_id int, title string, postdate string, post_type string, author string, post_name string, generated_url string) row format delimited fields terminated by '|' stored as textfile")
scala> hiveContext.hql("LOAD DATA INPATH '/user/root/posts.psv' INTO TABLE posts_hive")

If I then go into the Hive CLI, I can see this new table listed there alongside the other ones:

hive> show tables;
OK
dummy
posts
posts2
posts_hive
sample_07
sample_08
src
testtable2
Time taken: 0.536 seconds, Fetched: 8 row(s)
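As a quick sanity check (not something shown in the original session), the newly-loaded table can then be queried from the Hive CLI just like any other Hive table:

hive> select count(*) from posts_hive;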

What’s even more interesting is that Spark also comes with a HiveServer2-compatible Thrift Server, making it possible for tools such as ODI that connect to Hive via JDBC to run Hive queries through Spark, with the Hive metastore providing the metadata but Spark running as the execution engine.


This is subtly different from Hive on Spark: Hive's metastore and its support for SerDes and storage handlers still run under the covers, but Spark provides you with a full programmatic environment, making it possible to just expose Hive tables through the Spark layer, or to mix and match data from RDDs, Hive tables and other sources before storing and then exposing the results through the Hive SQL interface. For example, you could use Oracle SQL*Developer 4.1 with the Cloudera Hive JDBC drivers to connect to this Spark SQL Thrift Server and query the tables just like any other Hive source, but crucially the Hive execution is being done by Spark, rather than MapReduce as would normally happen.
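To make that concrete, a query like the one below submitted over the Thrift Server JDBC connection would be resolved against the Hive metastore but executed by Spark (the query itself is just an illustration, reusing the posts_hive table created earlier):

select post_type, count(*) as posts
from   posts_hive
group  by post_type;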


Like Hive-on-Spark, Spark SQL and Hive support within Spark SQL are at early stages, with Spark SQL not yet being supported by Cloudera whereas the core Spark API is. From the work I’ve done with it, it’s not yet possible to expose Spark SQL temporary tables through the HiveServer2 Thrift Server interface, and I can’t see a way of creating Hive tables out of RDDs unless you stage the RDD data to a file in-between. But it’s clearly a promising technology and if it becomes possible to seamlessly combine RDD data and Hive data, and expose Spark RDDs registered as tables through the HiveServer2 JDBC interface it could make Spark a very compelling platform for BI and data analyst-type applications. Oracle’s David Allen, for example, blogged about using Spark and the Spark SQL Thrift Server interface to connect ODI to Hive through Spark, and I’d imagine it’d be possible to use the Cloudera HiveServer2 ODBC drivers along with the Windows version of OBIEE 11.1.1.7 to connect to Spark in this way too – if I get some spare time over the Christmas break I’ll try and get an example working.

Categories: BI & Warehousing

Amazon Echo, The Future or Fad?

Oracle AppsLab - Thu, 2014-12-18 16:10


Last November Amazon announced a new kind of device: part speaker, part personal assistant, which it called the Amazon Echo. If you saw the announcement, you might have also seen their quirky infomercial.

The parodies came hours after the announcement, and they were funny. But dismissing this just as a Siri/Cortana/Google Now copycat might miss the potential of this "always listening" device. To be fair, this is not the first device that can do this: I have a Moto X with an always-on chip waiting for a wake word ("OK Google"), and Google Glass does the same thing ("OK Glass"). But the fact that I don't have to hold the device, be near it, or push a button (as with Siri) makes this cylinder kind of magical.

It is also worth noting that NONE of these devices are really "always-listening-and-sending-all-your-conversations-to-the-NSA"; in fact, the "always listening" part is local. Once you say the wake word, you'd better make sure you don't spill the beans for the next few seconds, which is the period during which the device listens and does an STT (speech-to-text) operation in the cloud.

We can all start seeing through Amazon and why this is good for them. Right off the bat you can buy songs with a voice command. You can also add "stuff" to your shopping list, which reminds me of a similar product Amazon launched last year, Amazon Dash, which unfortunately is only available in selected markets. The fact is that Amazon wants us to buy more from them, and for some of us that is awesome, right? Prime, two-day shipping, drone delivery, etc.

I have been eyeing these "always listening" devices for a while. The Ubi ($300) and Ivee ($200) were my two other choices. Both have had mixed reviews, and both have yet to deliver on the promise of an SDK or API. Amazon Echo doesn't have an SDK yet either, but Amazon has placed a link where you can show the Echo team your interest in developing apps for it.

The promise of a true artificial intelligence assistant, or personal contextual assistant (PCA), is coming soon to a house or office near you. Which brings me to my true interest in Amazon Echo: the possibility of creating a "Smart Office" where the assistant will anticipate my day-to-day tasks, set up meetings, remind me of upcoming events, and analyze and respond to email and conversations, all tied to my Oracle Cloud of course. The assistant will also control physical devices in my house/office: "Alexa, turn on the lights," "Alexa, change the temperature to X," etc.

All in all, it has been fun to request holiday songs around the kitchen and dining room ("Alexa, play Christmas music"). My kids are having a field day trying to ask the most random questions. My wife, on the other hand, is getting tired of the constant interruption of music, but I guess it's the novelty. We shall see if my kids are still friendly to Alexa in the coming months.

In my opinion, people dismissing Amazon Echo will be the same people that said: "Why do I need a music player on my phone? I already have ALL my music collection in my iPod" (iPhone naysayers, circa 2007), or "Why do I need a bigger iPhone? That 'pad thing is ridiculously huge!" (iPad naysayers, circa 2010). And now I have already heard "Why do I want a device that is always connected and listening? I already have Siri/Cortana/Google Now" (Amazon Echo naysayers, circa 2014).

Agree, disagree? Let me know.

FBI warns organizations of dangerous malware

Chris Foot - Thu, 2014-12-18 14:40

Transcript

Hi, welcome to RDX! The Federal Bureau of Investigation recently sent a five-page document to businesses, warning them of a particularly destructive type of malware. It is believed the program was the same one used to infiltrate Sony's databases.

The FBI report detailed the malware's capabilities. Apparently, the software overwrites all information on computer hard drives, including the master boot record. This could prevent servers from accessing critical software, such as operating systems or enterprise applications.

Database data can be lost or corrupted for many reasons. Regardless of whether the data loss was due to a hardware failure, human error or the deliberate act of a cybercriminal, database backups ensure that critical data can be quickly restored. RDX's backup and recovery experts are able to design well-thought-out strategies that help organizations protect their databases from any type of unfortunate event.

Thanks for watching!

The post FBI warns organizations of dangerous malware appeared first on Remote DBA Experts.

Oracle Priority Support Infogram for 18-DEC-2014

Oracle Infogram - Thu, 2014-12-18 10:09

Exalogic
E-Business Suite on Exalogic and Exadata, from the Oracle Exalogic blog.
GoldenGate
From Maximum Availability Architecture, Oracle GoldenGate Active-Active Part 3.
Java
Node.js and io.js on Java, from the Java Platform Group, Product Management blog.
From JDeveloper PMs Blog: ADF meet smartphone, phone meet smartADF.
From Poonam Bajaj: Long Class-Unloading Pauses with JDK8.
BI
Top 5 OTN Business Intelligence Articles for 2014, from ArchBeat.
Analytics
Updated for OBIA: FMW Lifetime Support Policy Document, from Business Analytics - Proactive Support.
HCM
Oracle HCM Cloud Applications - A New Continuous Learning Solution, from Oracle University.
MAF
How to debug the HTML in Oracle MAF Applications, from Grant Ronald’s Blog.
SOA
Samples and Demos for Oracle Managed File Transfer, from the SOA & BPM Partner Community Blog.
WebLogic
From WebLogic Partner Community EMEA: Alta UI, a modern mobile and browser application design system.
PaaS
WEBINAR: Oracle Public Cloud: Platform-as-a-Service – January 14, from the Oracle PartnerNetwork Strategy Blog.
Demantra
Demantra 12.2.4.1 Has Been Released, from the Oracle Demantra blog.
EBS
From Oracle E-Business Suite Technology:
Updated: Using Third-Party Identity Managers with E-Business Suite Release 12
From Oracle E-Business Suite Support Blog:
An Insight Into Asset Impairment
Spares Management Planner's Dashboard Updates
Attention North American Payroll customers: The US and Canadian End of Year 2014 Phase 2 and Year Begin 2015 patches have been released!
Overshipped Quantities for RMA's in Order Management
Procurement Code Level added to the Approval Analyzer Output!

We're Nearing Year End and Here's a Procurement Activity I would Recommend

Oracle 12.1.0.2 Bundle Patching

Jason Arneil - Thu, 2014-12-18 09:37

I've spent a few days playing with patching 12.1.0.2 with the so-called "Database Patch for Engineered Systems and Database In-Memory". Let's skip over why these not-necessarily-related feature sets should be bundled together into what is effectively a Bundle Patch.

First I was testing going from 12.1.0.2.1 to BP2 or 12.1.0.2.2. Then as soon as I’d done that of course BP3 was released.

So this is our starting position with BP1:

GI HOME:

[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)

DB Home:

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)

Simple enough, right? BP1 and the individual patch components within BP1 give you 12.1.0.2.1. Even I can follow this.

Let's try and apply BP2 to the above. We will use opatchauto for this, and to begin with we will run an analyze:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-35-17_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591 
RAC Patch(es): 19392604 19649591 

Patch Validation: Successful

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.

SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during analyze (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.

Well, that does not look promising. I have no "one-off" patches in this home to cause a conflict; it should be a simple BP1->BP2 patch without any issues.

Digging into the logs we find the following:

.
.
.
[18-Dec-2014 13:37:08]       Verifying environment and performing prerequisite checks...
[18-Dec-2014 13:37:09]       Patches to apply -> [ 19392590 19392604 19649591  ]
[18-Dec-2014 13:37:09]       Identical patches to filter -> [ 19392590 19392604  ]
[18-Dec-2014 13:37:09]       The following patches are identical and are skipped:
[18-Dec-2014 13:37:09]       [ 19392590 19392604  ]
.
.

Essentially, out of the three patches in the home at BP1, only the Database Bundle Patch (19189240) is superseded by BP2. Maybe this annoys me more than it should: I like my patches applied by BP2 to end in 2. I also don't like the fact that the analyze throws a warning about this.

Let's patch:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-54-03_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591 
RAC Patch(es): 19392604 19649591 

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful
Following database(s) and/or service(s)  were stopped and will be restarted later during the session: testrac

Applying patch(es) to "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.

Stopping CRS ... Successful

Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/12.1.0/grid_1" with warning.

Starting CRS ... Successful

Starting RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful

SQL changes, if any, are applied successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during patch installation (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.

I do not like to see warnings when I'm patching. The log file for the apply is similar to the analyze: identical patches are skipped.

Checking where we are with GI and DB patches now:

[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)

The only one changed is the DATABASE BUNDLE PATCH.
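As a side note (this isn't something from the patching session above), once datapatch has been run you can also cross-check the database-side view of the patch level from SQL*Plus, using the DBA_REGISTRY_SQLPATCH view that 12.1 provides; a quick sketch:

select patch_id, action, status, description
from   dba_registry_sqlpatch
order  by action_time;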

The one MOS document I effectively have on "speed dial" is 888828.1, and it showed BP3 as being available from 17th December. It also had the following warning:

Before install on top of 12.1.0.2.1DBBP or 12.1.0.2.2DBBP, first rollback patch 19392604 OCW PATCH SET UPDATE : 12.1.0.2.1

So let's see what an opatchauto analyze of BP3 makes of the homes we've just patched to BP2:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP3/20026159 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/20026159/opatch_gi_2014-12-18_14-13-58_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP3/20026159
Grid Infrastructure Patch(es): 19392590 19878106 20157569 
RAC Patch(es): 19878106 20157569 

Patch Validation: Successful
Command "/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1" execution failed

Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-14-50PM_1.log

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP3/20026159/19878106" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP3/20026159/20157569" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Command "/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp" execution failed: 
UtilSession failed: After skipping conflicting patches, there is no patch to apply.

Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-15-30PM_1.log

Following step(s) failed during analysis:
/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1
/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp


SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during analyze (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19878106, 20157569

Following patch(es) failed to be analyzed:
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19878106, 20157569

opatchauto analysis reports error(s).

Looking at the log file, we see that patch 19392604, already in the home, conflicts with patch 19878106 from BP3. 19392604 is the OCW PATCH SET UPDATE in BP1 (and BP2), while 19878106 is the Database Bundle Patch in BP3. We see the following in the log file:

Patch 19878106 has Generic Conflict with 19392604. Conflicting files are :
                             /u01/app/12.1.0/grid_1/bin/diskmon

That seems messy. It definitely annoys me that to apply BP3 I have to take the additional step of rolling back a previous BP. I don't recall having to do this with previous Bundle Patches, and I've applied a fair few of them.

I rolled the lot back with opatchauto rollback, then applied BP3 on top of the unpatched homes I was left with. But let's look at what BP3 on top of 12.1.0.2 gives you:

GI Home:

[oracle@rac1 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

DB Home:

[oracle@rac1 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)

So with BP2 we had patch 19392604, OCW PATCH SET UPDATE : 12.1.0.2.1. Now with BP3 we still have a 12.1.0.2.1 OCW Patch Set Update, but it has a new patch number!

That really irritates.


Performance Issues with the Sequence NEXTVAL Call

Pythian Group - Thu, 2014-12-18 08:51

Is SELECTing from a sequence your Oracle Performance Problem? The answer to that question is: it might be!

You wouldn’t expect a sequence select to be a significant problem but recently we saw that it was—and in two different ways. The issue came to light when investigating a report performance issue on an Oracle 11.2.0.4 non-RAC database. Investigating the original report problem required an AWR analysis and a SQL trace (actually a 10046 level 12 trace – tracing the bind variables was of critical importance in troubleshooting the initial problem with the report).

 

First problem: if SQL_ID = 4m7m0t6fjcs5x appears in the AWR reports

SELECTing a sequence value using the NEXTVAL function is supposed to be a fairly lightweight process. The sequence’s last value is stored in memory and a certain definable number of values are pre-fetched and cached in memory (default is CACHE=20). However when those cached values are exhausted the current sequence value must be written to disk (so duplicate values aren’t given upon restarts after instance crashes). And that’s done via an update on the SYS.SEQ$ table. The resulting SQL_ID and statement for this recursive SQL is:

SQL_ID   = 4m7m0t6fjcs5x

SQL Text = update seq$ set increment$=:2, minvalue=:3, maxvalue=:4, cycle#=:5, order$=:6,
           cache=:7, highwater=:8, audit$=:9, flags=:10 where obj#=:1

 

This is recursive SQL, and consequently both the statement text and the corresponding SQL_ID are consistent between databases and even between Oracle versions.

Hence seeing SQL_ID 4m7m0t6fjcs5x as one of the top SQL statements in the AWR report indicates a possible problem. In our case it was the #1 top statement in terms of cumulative CPU. The report would select a large number of rows and was using a sequence value and the NEXTVAL call to form a surrogate key.

So what can be done about this? Well, as with most SQL tuning initiatives, one of the best ways to tune a statement is to run it less frequently. With SQL_ID 4m7m0t6fjcs5x that's easy to accomplish by changing the sequence's cache value.

In our case, seeing SQL_ID 4m7m0t6fjcs5x as the top SQL statement quickly led us to check the sequence settings, where we saw that almost all sequences had been created with the NOCACHE option. Therefore no sequence values were being cached, and an update to SEQ$ was necessary after every single NEXTVAL call. Hence the problem.
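A quick way to spot this situation is to look for NOCACHE sequences (CACHE_SIZE = 0) in the application schemas; a sketch along these lines, with the schema name being illustrative:

select sequence_owner, sequence_name, cache_size
from   dba_sequences
where  cache_size = 0
and    sequence_owner = 'APP_OWNER';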

Caching sequence values adds the risk of skipped values (a sequence gap due to the loss of the cached values) when the instance crashes. (Note that no sequence values are lost when the database is shut down cleanly.) However in this case, since the sequence is just being used as a surrogate key, this was not a problem for the application.

Changing the sequences' CACHE setting to 100 completely eliminated the problem, increased the overall report performance, and removed SQL_ID 4m7m0t6fjcs5x from the list of top SQL in AWR reports.
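The change itself is a one-liner per sequence, something like the following (the sequence name here is illustrative):

alter sequence app_owner.order_id_seq cache 100;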

Lesson learned: if you ever see SQL_ID 4m7m0t6fjcs5x in any of the top SQL sections in an AWR or STATSPACK report, double check the sequence CACHE settings.

 

Next problem: significant overhead of tracing the sequence update

Part of investigating a bind variable SQL regression problem with the report required a SQL trace. The report was instrumented with:

alter session set events '10046 trace name context forever, level 12';
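For completeness, the matching statement to switch the trace back off at the end of the run (not part of the original instrumentation shown above) is the standard:

alter session set events '10046 trace name context off';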

 

The tracing made the report run over six times longer. This caused the report to overrun its allocated execution window and caused other job scheduling and SLA problems.

Normally we'd expect some overhead from a SQL trace due to the synchronous writes to the trace file, but over a 500% increase was far more than expected. From the developer's viewpoint the report was essentially just executing a single query. The reality was slightly more complicated than that, as the top-level query accessed a view. Still, the view was not overly complex, and hence the developer believed the report wasn't executing many queries: just the original top-level call and the view SQL.

Again the issue is largely related to the sequence, recursive SQL from the sequence, and specifically statement 4m7m0t6fjcs5x.

An AWR SQL report of SQL_ID 4m7m0t6fjcs5x from two report executions, one without and one with SQL trace enabled, showed:

Without tracing:
Elapsed Time (ms):      278,786
CPU Time (ms):          278,516
Executions:             753,956
Buffer Gets:          3,042,991
Disk Reads:                   0
Rows:                   753,956

With tracing:
Elapsed Time (ms):    2,362,227
CPU Time (ms):        2,360,111
Executions:             836,182
Buffer Gets:          3,376,096
Disk Reads:                   5
Rows:                   836,182

 

So when the report ran with tracing enabled it executed 4m7m0t6fjcs5x 836K times instead of the 753K times of the previous non-traced run: a 10.9% increase, due to underlying application data changes between the runs. Yet both CPU and elapsed time went from roughly 278K ms to 2.36M ms: roughly 8.5 times as much!

The question was then: could this really be due to the overhead of tracing or something else? And should all of those recursive SQL update statements materialize as CPU time in the AWR reports? To confirm this and prove it to the developers a simplified sequence performance test was performed on a test database:

The simplified test SQL was:

create sequence s;
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
/

 

From AWR SQL reports on SQL_ID 4m7m0t6fjcs5x:

Without tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            10,259            0.0     7.1
CPU Time (ms)                                 9,373            0.0     6.7
Executions                                  250,005            N/A     N/A
Buffer Gets                                 757,155            3.0    74.1
Disk Reads                                        0            0.0     0.0
Parse Calls                                       3            0.0     0.3
Rows                                        250,005            1.0     N/A


With tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            81,158            0.3    20.0
CPU Time (ms)                                71,812            0.3    17.9
Executions                                  250,001            N/A     N/A
Buffer Gets                                 757,171            3.0    74.4
Disk Reads                                        0            0.0     0.0
Parse Calls                                       1            0.0     0.1
Rows                                        250,001            1.0     N/A

 

The same number of executions and buffer gets as would be expected, but 7.66 times the CPU and 7.91 times the elapsed time just due to the SQL trace! (Similar to the 8.47 times we saw with the actual production database report execution.)

And no surprise, the resulting trace file is extremely large. As we would expect, since the sequence was created with the default CACHE value of 20, the trace records each UPDATE (with its full set of binds) followed by 20 NEXTVAL executions:

=====================
PARSING IN CURSOR #140264395012488 len=100 dep=0 uid=74 oct=47 lid=74 tim=1418680119565405 hv=152407152 ad='a52802e0' sqlid='dpymsgc4jb33h'
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
END OF STMT
PARSE #140264395012488:c=0,e=256,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1418680119565401
=====================
PARSING IN CURSOR #140264395008592 len=26 dep=1 uid=74 oct=3 lid=74 tim=1418680119565686 hv=575612948 ad='a541eed8' sqlid='0k4rn80j4ya0n'
Select S.NEXTVAL from dual
END OF STMT
PARSE #140264395008592:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565685
EXEC #140264395008592:c=0,e=50,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565807
=====================
PARSING IN CURSOR #140264395000552 len=129 dep=2 uid=0 oct=6 lid=0 tim=1418680119566005 hv=2635489469 ad='a575c3a0' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
PARSE #140264395000552:c=0,e=66,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1935744642,tim=1418680119566003
BINDS #140264395000552:
 Bind#0
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb120  bln=22  avl=02  flg=09
  value=1
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb132  bln=22  avl=02  flg=09
  value=1
 Bind#2
  oacdty=02 mxl=22(15) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb144  bln=22  avl=15  flg=09
  value=9999999999999999999999999999
 Bind#3
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca6b0  bln=22  avl=01  flg=05
  value=0
 Bind#4
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca6c8  bln=22  avl=01  flg=01
  value=0
 Bind#5
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb156  bln=22  avl=02  flg=09
  value=20
 Bind#6
  oacdty=02 mxl=22(05) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb168  bln=22  avl=05  flg=09
  value=5000021
 Bind#7
  oacdty=01 mxl=32(32) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=a52eb17a  bln=32  avl=32  flg=09
  value="--------------------------------"
 Bind#8
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca668  bln=22  avl=02  flg=05
  value=8
 Bind#9
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca680  bln=22  avl=04  flg=01
  value=86696
EXEC #140264395000552:c=1000,e=798,p=0,cr=1,cu=2,mis=0,r=1,dep=2,og=4,plh=1935744642,tim=1418680119566897
STAT #140264395000552 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=233 us)'
STAT #140264395000552 id=2 cnt=1 pid=1 pos=1 obj=79 op='INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=23 us cost=0 size=69 card=1)'
CLOSE #140264395000552:c=0,e=3,dep=2,type=3,tim=1418680119567042
FETCH #140264395008592:c=1000,e=1319,p=0,cr=1,cu=3,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567178
STAT #140264395008592 id=1 cnt=1 pid=0 pos=1 obj=86696 op='SEQUENCE  S (cr=1 pr=0 pw=0 time=1328 us)'
STAT #140264395008592 id=2 cnt=1 pid=1 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=1 us cost=2 size=0 card=1)'
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567330
EXEC #140264395008592:c=0,e=19,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119567378
FETCH #140264395008592:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567425
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567458
...
< Repeats #140264395008592 18 more times due to CACHE=20 >

 

From the trace, it's apparent that there is not only the overhead of updating the SEQ$ table but also of maintaining the I_SEQ1 index. A tkprof on the test shows us the same information:

declare
   x int;
begin
   for i in 1..5000000 loop
      x := s.nextval;
   end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1    241.55     247.41          0     250003          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    241.56     247.41          0     250005          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                   1        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

SQL ID: 0k4rn80j4ya0n Plan Hash: 3499163060

Select S.NEXTVAL
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 5000000     35.37      30.49          0          0          0           0
Fetch   5000000     50.51      45.81          0          0     250000     5000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   10000001     85.88      76.30          0          0     250000     5000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SEQUENCE  S (cr=1 pr=0 pw=0 time=910 us)
         1          1          1   FAST DUAL  (cr=0 pr=0 pw=0 time=2 us cost=2 size=0 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch free                                      1        0.00          0.00
********************************************************************************

SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10
where
 obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute 250000     71.81      81.15          0     250003     507165      250000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   250000     71.81      81.15          0     250003     507165      250000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  log file switch (checkpoint incomplete)         1        0.19          0.19
  log file switch completion                      4        0.20          0.75
********************************************************************************

So clearly we can see a lot of additional overhead when performing a SQL trace of the many calls to the sequence NEXTVAL function. Of course the overhead is due to recursive SQL and the synchronous write of the trace file. It just wasn’t obvious that a simple query could generate that much recursive DML and trace data.

 

Combining the two issues

The next question is: what is the effect of the CACHE setting for the sequence, and what is the difference between a LEVEL 8 and a LEVEL 12 trace? Using a similar PL/SQL test block, but with only 100,000 executions, on a lab database showed the following results measuring CPU time (in seconds):

Cache Size    No Trace    10046 level 8    10046 level 12
----------    --------    -------------    --------------
         0       31.94            58.71             94.57
        20        7.53            15.29             20.13
       100        4.85            13.36             13.50
      1000        3.93            10.61             11.93
     10000        3.70            10.96             12.20

Hence we can see that even with an extremely high CACHE setting for the sequence, the 10046 trace still makes this one statement roughly three times as expensive, and that the caching sweet spot seems to be around 100.
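For reference, each cell in the table above corresponds to a run along these lines (a sketch based on the earlier test block, with the loop reduced to 100,000 iterations and the sequence recreated with the cache size under test; CPU times were then taken from AWR/tkprof as before):

-- recreate the test sequence with the cache size under test
-- (for the first row, NOCACHE rather than a cache size)
drop sequence s;
create sequence s cache 100;

declare
   x integer;
begin
   for i in 1 .. 100000
   loop
      x := s.nextval;
   end loop;
end;
/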

 

Conclusions

We often take the Oracle sequence for granted and assume that it's an optimized and efficient internal structure, and for the most part it is. But depending on how it's configured and used, it can be problematic.

If we ever see SQL_ID 4m7m0t6fjcs5x as one of our worst performing SQL statements, we should double check the sequence configuration and usage. Was the CACHE value set low by design, or inadvertently? Is the risk of a sequence gap after an instance crash worth the overhead of a low CACHE value? Perhaps the settings need to be reconsidered and changed?

And a caution about enabling a SQL trace: it's something we expect to add some overhead, but not the 3x to 10x that may make the tracing process unworkable. Of course, the tracing overhead will depend on the actual workload, but for workloads that are sequence NEXTVAL heavy, don't underestimate the underlying recursive SQL: the overhead can be significant, and much more than one might think.

Categories: DBA Blogs