
Feed aggregator

Contribution by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-08-18 15:00
Contribution by Angela Golla, Infogram Deputy Editor

Oracle Advanced Customer Support Services

Seamless data availability, optimal application performance, and reduced IT risk are critical to business success. Oracle Advanced Customer Support Services delivers tailored support services to help you maintain and maximize the performance of all mission-critical Oracle systems.

Our partnership with Oracle Support and Oracle's engineering teams, combined with a collaborative, long-term relationship with your IT team, provides a highly integrated approach to helping you meet your complex IT requirements.

Choose from a portfolio of mission critical support services that you can tailor to your IT and business needs. Our engineers provide proactive and preventive support using advanced diagnostic tools to help you increase system availability, reduce risk, and accelerate ROI across the Oracle stack: applications, middleware, database, servers, and storage systems.

Learn more at the Oracle Advanced Customer Support Services portal.

 

OER and the Future of Knewton

Michael Feldstein - Mon, 2014-08-18 11:41

Jose Ferreira, the CEO of Knewton, recently published a piece on edSurge arguing that scaling OER cannot “break the textbook industry” because, according to him, it has low production values, no instructional design, and is not enterprise grade. Unsurprisingly, David Wiley disagrees. I also disagree, but for somewhat different reasons than David’s.

When talking about Open Educational Resources or, for that matter, open source software, it is important to distinguish between license and sustainability model, as well as distinguishing between current sustainability models and possible sustainability models. It all starts with a license. Specifically, it starts with a copyright license. Whether we are talking about Creative Commons or GPL, an open license grants copyright permission to anyone who wants it, provided that the people who want to reuse the content are willing to abide by the terms of the license. By granting blanket permission, the copyright owner of the resource chooses to give up certain (theoretical) revenue earning potential. If the resource is available for free, then why would you pay for it?

This raises a question for any resource that needs to be maintained and improved over time about how it will be supported. In the early days of open source, projects were typically supported through individual volunteers or small collections of volunteers, which limited the kinds and size of open source software projects that could be created. This is also largely the state of OER today. Much of it is built by volunteers. Sometimes it is grant funded, but there typically is not grant money to maintain and update it. Under these circumstances, if the project is of the type that can be adequately well maintained through committed volunteer efforts, then it can survive and potentially thrive. If not, then it will languish and potentially die.

But open resources don’t have to be supported through volunteerism. It is possible to build revenue models that can pay for their upkeep. For example, it is possible to charge for uses of materials other than those permitted by the open license. Khan Academy releases their videos under a Creative Commons Noncommercial Share-Alike (CC NC-SA) license. Everyday students and teachers can use it for free under normal classroom circumstances. But if a textbook publisher wants to bundle that content with copyrighted material and sell it for a fee, the license does not give them permission to do so. Khan Academy can (and, as far as I know, does) charge for commercial reuse of the content.

Another possibility is to sell services related to the content. In open source software, this is typically in the form of support and maintenance services. For education content, it might be access to testing or analytics software, or curriculum planning and implementation services. This is a non-exhaustive list. The point is that it is possible to generate revenue from open content. And revenue can pay for resources to support high production values, instructional design, and enterprise scaling, particularly when paired with grant funding and volunteer efforts. These other options don’t necessarily generate as much revenue as traditional copyright-based licensing, but that’s often a moot point. Business models based on open licenses generally get traction when the market for licensed product is beginning to commodify, meaning that companies are beginning to lose their ability to charge high prices for their copyrighted materials anyway.

That’s the revenue side. It’s also important to consider the cost side. On the one hand, the degree to which educational content needs high production values and “enterprise scaling” is arguable. Going back to Khan Academy for a moment, Sal Khan popularized the understanding that one need not have an expensive three-camera professional studio production to create educational videos that have reach and impact. That’s just one of the better known of many examples of OER that is considered high-quality even though it doesn’t have what publishing professionals traditionally have thought of as “high production values.” On the other hand, it is important to recognize that a big portion of textbook revenues go into sales and marketing, and for good reason. Despite multiple efforts by multiple parties to create portals through which faculty and students can find good educational resources, the adoption process in higher education remains badly broken. So far with a few exceptions, the only good way to get widespread adoption of curricular materials still seems to be to hire an army of sales reps to go knock on faculty doors. It is unclear when or how this will change.

This brings us to the hard truth of why the question of whether OER can “win” is harder than it seems. Neither the OER advocates nor the textbook publishers have a working economic model right now. The textbook publishers were very successful for many years but have grown unsustainable cost structures which they can no longer prop up through appeals to high production values and enterprise support. But the OER advocates have not yet cracked the sales and marketing nut or proven out revenue models that enable them to do what is necessary to drive adoption at scale. If everybody is losing, then nobody is winning. At least at the moment.

This is where Knewton enters the picture. As you read Jose’s perspective, it is important to keep in mind that his company has a dog in this fight. (To be fair, and at the risk of stating the obvious, so does David’s.) While Knewton is making noises about releasing a product that will enable end users to create adaptive content with any materials (including, presumably, OER), their current revenues come from textbook publishers and other educational content companies. Further, adaptive capabilities such as the ones Knewton offers add to the cost of an educational content product, both directly through the fees that the company charges and indirectly through the additional effort required to design, produce, and maintain adaptive products. To me, the most compelling argument David makes in favor of OER “winning” is that it is much easier to lower the price of educational materials than it is to increase their efficacy. So if you’re measuring the value of the product by standard deviations per dollar, then the smart thing is to aim for the denominator (while hopefully not totally ignoring the numerator). The weak link in this argument is that it works best in a relatively rational and low-friction market that limits the need for non-product-development-related expenses such as sales and marketing. In other words, it works best in the antithesis of the conditions that exist today. Knewton, on the other hand, needs there to be enough revenue for curricular materials to pay for the direct and indirect costs of their platform. This is not necessarily a bad thing for education if Knewton-enhanced products can actually raise the numerator as much as or more than OER advocates can lower the denominator. But their perspective—both in terms of how they think about the question of value in curricular materials and in terms of how they need to build a business capable of paying back $105 million in venture capital investment—tilts toward higher costs that one hopes would result in commensurately higher value.

All of this analysis assumes that in David’s ratio of standard deviations per dollar, all that matters is the ratio itself, independently of the individual numbers that make it up. But that cannot be uniformly true. Some students cannot afford educational resources above a certain price no matter how effective they are. (I would love to lower my carbon footprint by buying a Tesla. Alas….) In other cases, getting the most effective educational resources possible is most important and the extra money is not a big issue. This comes down to not only how much the students themselves can afford to pay but also how education is funded and subsidized in general. So there are complex issues in play here regarding “value.” But on the first-order question of whether OER can “break the textbook industry,” my answer is, “it depends.”

The post OER and the Future of Knewton appeared first on e-Literate.

On the Road with Laerte

Pythian Group - Mon, 2014-08-18 09:21

For the month of October, Microsoft PowerShell MVP Laerte Junior will be touring Brazil and Europe for various SQL Server-related speaking engagements.

“Thankfully, I am working at a company that fully supports their employees to speak and participate in community events,” Laerte says. “I can travel to Europe for 5 SQL Server conferences, and then go to the USA to attend the MVP Global Summit and SQL PASS Summit.”

While most European speaking sessions have been confirmed, we’ll be updating the schedule as the topics become available. You can follow Laerte on his personal blog at shellyourexperience.com.

Date | Location | Event | Topic | Speaking Schedule
September 27, 2014 | São Paulo, Brazil | SQL Saturday #325 | Criando suas próprias soluções usando PowerShell | See speaking schedule
October 1, 2014 | Schelle, Belgium | SQL Server Days | Mastering PowerShell for SQL Server | See speaking schedule
October 2, 2014 | Utrecht, Holland | SQL Saturday #336 | Full-day pre-conference training session: Mastering PowerShell for SQL Server |
October 3/4, 2014 | Utrecht, Holland | SQL Saturday #336 | TBD | See speaking schedule
October 11, 2014 | Sofia, Bulgaria | SQL Saturday #311 | Writing Your Solutions Using PowerShell | See speaking schedule
October 18, 2014 | Oporto, Portugal | SQL Saturday #341 | Criando suas próprias soluções usando PowerShell | See speaking schedule
October 24, 2014 | Barcelona, Spain | SQL Saturday #338 | TBD | See speaking schedule

Will you be attending any of these sessions? If so, which ones?

Categories: DBA Blogs

What Is Oracle Elapsed Time And Wall Time With A Parallelism Twist


In this post I'm focusing on Oracle Database SQL elapsed time, adding parallelism into the mix and then revisiting wall time. What initially seems simple can take some very interesting twists!

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

A couple weeks ago I wrote about Oracle DB Time, non-idle wait time, and server process CPU consumption (DB CPU) time. If you have not read that posting, HERE is the link. It must be a good read because it quickly became my most viewed post ever! In this posting, the focus is SQL elapsed time, parallelism, and again wall time. Enjoy!

Quick Review
In my previous related post, I covered non-idle wait time, DB CPU, and DB Time. Here is a very quick summary of each.

Non-Idle Wait Time occurs when an Oracle process is not consuming CPU: the session pauses (i.e., waits) and Oracle considers the wait time important for performance tuning. An example of a non-idle wait event is direct path read temp. Examples of idle wait events are SQL*Net message from client and pmon timer.
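
As a quick illustration (my sketch, not from the original post), you can list the heaviest non-idle wait events system-wide by excluding the Idle wait class in v$system_event:

select *
from  (select event, wait_class,
              round(time_waited_micro/1000000) seconds_waited
       from   v$system_event
       where  wait_class <> 'Idle'
       order  by time_waited_micro desc)
where rownum <= 10;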

DB CPU is Oracle server/foreground/shadow process CPU consumption. This does not include Oracle background process CPU consumption.

DB Time is DB CPU plus Non-Idle Wait Time. Remember that DB Time does not include background process CPU consumption and Oracle Corporation determines which wait events are considered idle.
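
A minimal sketch (my addition) of pulling the related instance-wide figures from the time model; note that background CPU is reported separately from DB CPU:

select stat_name, round(value/1000000) seconds
from   v$sys_time_model
where  stat_name in ('DB time', 'DB CPU', 'background cpu time');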

Elapsed Time
Elapsed Time (ET) is all DB Time related to a defined task. A "defined task" could be a SQL statement, group of SQL statements, pl/sql procedure, batch job, etc. It is whatever makes sense in your tuning situation.

The elapsed time for a SQL_ID can be found in v$sql. But be careful: this elapsed time relates to "all" executions of that SQL_ID. Thankfully, there is an "executions" column in v$sql.
Elapsed time is displayed in a number of areas within an Oracle Database AWR and Statspack report. Looking at the above screen shot, the "top" elapsed time SQL has an elapsed time of 268561 seconds. This means that over the AWR report's snapshot interval, for all this SQL's executions, its total DB Time is 268561 seconds. Said another way, if we were to add up all this SQL's DB CPU and non-idle wait time for all its executions within the snapshot interval, the value should be 268561.
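
For example, a quick query along these lines (my sketch; v$sql reports ELAPSED_TIME in microseconds) shows the total and per-execution elapsed time for a given SQL_ID:

select sql_id,
       executions,
       round(elapsed_time/1000000,1)                      total_elapsed_sec,
       round(elapsed_time/1000000/nullif(executions,0),1) avg_elapsed_sec
from   v$sql
where  sql_id = '&sql_id';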

There is a lot of great information provided in the AWR and Statspack SQL reports. For example, because the elapsed time and the CPU time (DB CPU) are shown above, we can calculate the non-idle wait time for the "top" elapsed time SQL ID.

non idle wait time = elapsed time - cpu time
268465 = 268561 - 96

For the "top" elapsed SQL, its elapsed time 268561 and it's DB CPU is 96 therefore its non-idle wait time is 268465. Wow! This statement has tons of associated wait time compared to CPU consumption time.

But it gets even better! Because the total elapsed time and the total number of executions over the snapshot interval are displayed, we can determine the average elapsed time!

average elapsed time = total elapsed time / executions
746.03 = 268561 / 36

Do not be deceived! The average elapsed time is unlikely to be what the user is experiencing. Two possible causes of this deception are skewed elapsed times and parallelism.

For most DBAs this is unexpected. It also causes performance perception problems, yet solutions are available to understand what's really going on.

Because I've spent so much time researching this topic and have seen it increase my consulting value, I've posted a number of blog entries on this subject. Plus I created an OraPub Online Institute seminar focused specifically on this subject. It's called Using Skewed Performance Data To Your Advantage. Check it out. I'm really proud of how it turned out. I also have a couple of OSM scripts dedicated to this topic, sqlelget[11].sql.

Revisiting Wall Time With A Parallelism Twist
Now it's time to put this all together.

DB CPU is the Oracle server process CPU consumption.

Non-Idle Wait Time (NIWT) is the time when an Oracle process cannot consume CPU and must pause, and we care about this time.

DB Time is the Oracle server process CPU consumption and all non-idle wait time.

Elapsed Time (ET) is the sum of all DB Time related to a task, such as a SQL_ID.

Wall Time is what we hope the user experiences. I'll assume there is no time gap between Oracle and the user, therefore the wall time will equal the user's experience.

Effective Parallelism is the effective number of Oracle parallel slaves or some other form of parallelism, such as designed-in application parallelism. (For simplicity, I'm only going to mention Oracle parallel query.) If Oracle parallel query is not involved, then the effective parallelism is one. If two parallel query slaves are involved, then the effective parallelism will be a little less than 2.

Parallelism can reduce wall time because we can simultaneously "burn time" in multiple places. For example, 60 seconds of elapsed time with a process running serially, results in a wall time of 60 seconds. But if we have two parallel query slaves, while the elapsed time (i.e., all the DB Time) is still 60 seconds (plus some overhead time), the wall time will be around 30 seconds (plus some overhead time).

The math is really simple...that is until you factor in scalability (i.e., the overhead), which I won't. If you're interested, read the last chapter of my book, Forecasting Oracle Performance.

Let's simplify this by using some mathematical notation.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = Elapsed Time / Effective Parallelism

Pretty straightforward, eh? Below is a short video clip summarizing this from the OraPub Online Institute seminar, Tuning Oracle Using an AWR Report (based on an Oracle Time Based Analysis). (To be released September 2014.) If you can't see the video, click HERE to watch it on YouTube.



Test Your Knowledge
True or False? If the total elapsed time is 60 seconds and parallel query is not involved, the total wall time will also be 60 seconds. True

True or False? If the elapsed time per execution is 60 seconds and the wall time is 30 seconds, then parallel query is involved. True

True or False? Bonus question yet very important to understand: If the elapsed time per execution is 60 seconds and two PQ slaves are involved, then the wall time will be 30 seconds.

The last question is false because there is overhead when parallelizing. Parallelism is not free. Because of this, the wall time will hopefully drop to perhaps 35 seconds. That 5 seconds is the parallelization overhead.

Coming Up Next: Video Proof!
While the above may seem correct, I ran some SQL statements and captured the relevant time statistics. There is quite a bit of detail and I ran two different tests, so I'll post that in a week or two.

Thanks for reading,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm
You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about a new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.




Categories: DBA Blogs

★ Database as a Storage (DBaaS) vs. Thick Database

Eddie Awad - Mon, 2014-08-18 08:30

A recent addition to my Oracle PL/SQL library is the book Oracle PL/SQL Performance Tuning Tips & Techniques by Michael Rosenblum and Dr. Paul Dorsey.

I agree with Steven Feuerstein’s review that “if you write PL/SQL or are responsible for tuning the PL/SQL code written by someone else, this book will give you a broader, deeper set of tools with which to achieve PL/SQL success”.

In the foreword of the book, Bryn Llewellyn writes:

The database module should be exposed by a PL/SQL API. And the details of the names and structures of the tables, and the SQL that manipulates them, should be securely hidden from the application server module. This paradigm is sometimes known as “thick database.” It sets the context for the discussion of when to use SQL and when to use PL/SQL. The only kind of SQL statement that the application server may issue is a PL/SQL anonymous block that invokes one of the API’s subprograms.

I subscribe to the thick database paradigm. The implementation details of how a transaction is processed and where the data is stored in the database should be hidden behind PL/SQL APIs. Java developers do not have to know how the data is manipulated or the tables where the data is persisted; they just have to call the API.
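
As a purely illustrative sketch (hypothetical names, not taken from the book), the API boundary might look like this; the application server calls the package and never references the underlying tables:

-- Hypothetical thick-database API: callers see only the package specification.
create or replace package order_api as
  procedure place_order(
    p_customer_id in  number,
    p_product_id  in  number,
    p_quantity    in  number,
    p_order_id    out number);
end order_api;
/

-- The only statement the application server issues is an anonymous block, e.g.:
-- begin order_api.place_order(:cust_id, :prod_id, :qty, :order_id); end;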

However, like Bryn, I have seen many projects where all calls to the database are implemented as SQL statements that directly manipulate the application’s database tables. The manipulation is usually done via an ORM framework such as Hibernate.

In the book, the authors share a particularly bad example of this design. A single request from a client machine generated 60,000 round-trips from the application server to the database. They explain the reason behind this large number:

Java developers who think of the database as nothing more than a place to store persistent copies of their classes use Getters and Setters to retrieve and/or update individual attributes of objects. This type of development can generate a round-trip for every attribute of every object in the database. This means that inserting a row into a table with 100 columns results in a single INSERT followed by 99 UPDATE statements. Retrieving this record from the database then requires 100 independent queries. In the application server.

Wow! That’s bad. Multiply this by a 100 concurrent requests and users will start complaining about a “slow database”. NoSQL to the rescue!

© Eddie Awad's Blog, 2014.

Partner Webcast - Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise

The pace of new business projects continues to grow from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time increased...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Data growth inciting need for cloud databases

Chris Foot - Mon, 2014-08-18 01:23

To further reduce storage costs, organizations are storing their information in public cloud databases. 

Consistent development in cloud technology has made accessing data across a network easier than computer scientists of 20 years ago could have ever predicted. Due to this popularity, database administration services have trained themselves to issue SQL Server queries across Microsoft Azure and other cloud environments.

Big data, services models evolving
TechTarget contributor John Moore noted that Database-as-a-Service (DBaaS) is becoming less about just providing storage and more about managing, optimizing and conducting performance diagnostics. Simply funneling data into a remote platform often causes disorganization – making it more difficult to find pertinent information and analyze it. 

Moore referenced a statistic produced by MarketsandMarkets, which predicts the cloud database and DBaaS market will grow at a compound annual growth rate of 67.3 percent over the next five years, reaching $14.05 billion by 2019. Outsourcing maintenance and support for cloud data stores reduces overhead and ensures database security remains intact. 

What knowledge is needed? 
In regard to hiring a separate company to manage cloud servers, it's important to acknowledge the types of information organizations are aiming to learn from. Much of this data is unstructured, and it is typically managed with Hadoop storage and NoSQL databases.

Therefore, remote DBAs who are knowledgeable about these technologies and about administering databases in the cloud are essential. That being said, enterprises shouldn't ignore those with extensive knowledge of traditional programs such as SQL Server.

The advantages of Azure and SQL Server
Because these two programs are both produced by Microsoft, natural compatibility between them is expected. Network World noted that putting SQL data in Azure can save enterprises anywhere from $20,000 to $50,000 in procuring physical data center equipment (servers, bandwidth, storage, etc.).

In order to ensure security, administrators simply need to configure SQL properly. The source acknowledged the following protective functions can be applied to cloud-hosted SQL databases:

  • Azure Security provides users with a "Trust" guide, in which Microsoft details how Azure complies with HIPAA, ISAE and several other data security standards and regulations.
  • Transparent Data Encryption enables DBAs to encrypt the contents of an entire database while protecting the encryption key so that only those who initiated the encryption task can use it (a brief sketch follows this list).
  • Automatic protection means Azure keeps databases private by default, so users must explicitly configure the environment before the public or unauthorized parties can view the information.
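
For illustration only (a generic T-SQL sketch, not from the article; it assumes a database master key and a server certificate named MyServerCert already exist, and uses a hypothetical SalesDB database), enabling Transparent Data Encryption looks roughly like this:

-- Assumes the master key and certificate MyServerCert were already created in master.
USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;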

Aside from these features, employing active database monitoring is the best way for organizations to keep cloud databases protected from malicious actors.

The post Data growth inciting need for cloud databases appeared first on Remote DBA Experts.

Quadcopters and the Internet of Things

Oracle AppsLab - Sun, 2014-08-17 14:44

Low-tech attachment of an 808 keychain camera to the underside of a Syma X1 quadcopter.

Editor’s note: Hey a new author! Here’s the first one, of many I hope, from Bill Kraus, who joined us back in February. Enjoy.

One of the best aspects of working in the emerging technologies team here in Oracle’s UX Apps group is that we have the opportunity to ‘play’ with new technology. This isn’t just idle dawdling, but rather play with a purpose – a hands-on exercise exploring new technologies and brainstorming on how such technologies can be incorporated into future enterprise user experiences.

Some of this technology, such as beacons and wearables, has obvious applications. The relevance of other technologies, such as quadcopters and drones, is less obvious (notwithstanding their possible use as a package delivery mechanism for an unnamed online retail behemoth).


Video still taken from the quadcopter hundreds of feet above my home on Bainbridge Island, looking north to the Puget Sound and Point Monroe.

As an amateur wildlife and nature photographer, I’ve dabbled in everything from digiscoping to infrared imaging to light painting to underwater photography. I’ve also played with strapping lightweight keychain cameras to inexpensive quadcopters (yes, I know I could get a DJI Phantom and a GoPro, but at the moment I prefer to test my piloting skills on something that won’t make me shed tears – and incur the wrath of my spouse - if it crashes).

After telling my colleagues recently over lunch about my quadcopter adventures (I've already lost several in the trees and waters of the Puget Sound), Tony, Luis, and Osvaldo decided to purchase their own and we had a blast at our impromptu ‘flight school’ at Oracle. The guys did great, and Osvaldo’s copter even had a tête-à-tête with a hummingbird, who seemed a bit confused over just what was hovering before it.


Luis flying his quadcopter in the hallway.


Osvaldo flying his quadcopter.

This is all loads of fun, but what do flying quadcopters have to do with the Internet of Things? Well, just as a quadcopter allows a photographer to get a perspective previously thought impossible, mobile technology combined with embedded sensors and the cloud has allowed us to break the bonds of the desktop and view data in new ways. No longer do we interact with digital information at a single point in time and space, but rather we are now enveloped by it every waking (and non-waking) moment – and we have the ability to view this data from many different perspectives. How this massive flow of incoming data is converted into useful information will depend in large part on context (you knew I’d get that word in here somehow) – analogous to how the same subject can appear dramatically different depending on the photographer’s (quadcopter-assisted) point of view.

In fact, the Internet of Things is as much about space as it is about things – about sensing, interacting with and controlling the environment around us using technology to extend what we can sense and manipulate. Quadcopters are simply a manifestation of this idea – oh, and they are also really fun to fly.

Closer look at the SOA 12c Feature: Oracle Managed File Transfer

The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application...

We share our skills to maximize your revenue!
Categories: DBA Blogs

UKOUG 2014 Elections

Doug Burns - Sat, 2014-08-16 18:18
I noticed from Debra Lilley's blog post that there are some UKOUG elections at the moment, with voting closing on 1st September 2014.

Although not an active member or supporter of UKOUG any more (at least partly because I'm based in Singapore!), I've had a pretty long association with the user group and a lot of my friends have been involved over the years, so I still take an interest in what's going on there. Even more so this time, because I know two of the candidates pretty well.

Carl Dudley needs no introduction to anyone who has been remotely close to the UK or European OUG scene down the years and is an old mate who has put in a world of time to UKOUG over the years and, as a techie, has always tried to ensure that it remains relevant to all areas of the membership.

Pauline Drummond, on the other hand, will be largely unknown to most of the OUG community as I think she's only been attending events over the past few years. (I may be wrong about that as my memory isn't what it was, for some reason ;-)) I know Pauline pretty well, though, as she was a manager at Standard Life when I worked there on contracts for several years before moving down to London, including being my direct manager for the last contract there. President Elect seems a pretty senior role within UKOUG but if Pauline applies the same boundless energy and enthusiasm that she always did in the office then I can see her being great at it. She makes me tired just thinking about all of the volunteering and organisation and sport and work stuff she gets through and is very dedicated and focused on working with others to get things done, which strikes me as just what you need from a president of a user group.

For a change it's not one of my techie mates I'm suggesting would be good for the role of President because it is a role that needs to respect and appeal to the entire membership and the other entities that UKOUG has to deal with, not least Oracle, so you need someone with a broad corporate view. Pauline is an appropriate choice in this case, although I can't help hoping that she doesn't antagonise potential conference presenters as UKOUG seems to have done over recent times!

Regardless, I always hope for the best for UKOUG and my various mates who put a power of work into their volunteering and presenting roles, so hopefully some new voices will be a step in the right direction ....

Blame It On The Drugs

Floyd Teter - Sat, 2014-08-16 17:21
Bronchitis.  I catch it a lot.  Rotten experience.  It's like an invisible elephant is sitting on your chest.  And the drugs are mind-numbing.  Got it now.  Shivering under a blanket in 90 degree weather.  But, it'll pass.  And, in the meantime, if I write something weird...well, let's blame it on the drugs, OK?

Had a chat with a dear old friend this week.  Middle-manager for a Fortune 500 corporation.  Big Oracle customer.  Lots of excitement brewing in his neck of the woods over all the money they'll save moving to "the cloud".  I thought it would be interesting to explore this further, so we did some very rough calculations on the back of a napkin.  Over the long run, those savings went out the window.  Have to admit, I knew how the conversation would turn out.  And I didn't mean to rain on his parade. Blame it on the drugs.

Big companies don't move to the cloud for long-term savings.  They move to increase agility in the face of rapidly-changing markets.  They move in order to refocus internal resources on profit centers rather than cost centers.  They move in order to complement existing systems without causing huge operational upset.  Smaller companies also move to cloud because the financial barriers to entry are lower - less of an upfront cost to get the same tools the big enterprises are using.  But long-term dollar-for-dollar savings...yeah, those numbers don't seem to play out.

So we wrapped up the conversation on cost savings with the tried-and-true "well, they've already made the decision that it will save us money, so we're moving ahead."  So I let that slide and we moved on to his excitement in learning something new (this will be his first cloud project).  So I asked the question:  "What kind of cloud?  Private, hosted, SaaS, hybrid...what are ya'all doing?"

Crickets.  Nothing.  Silence.  Now, I didn't mean to throw the guy another curveball.  I mean, he's my friend.  Compassion has to play in here somewhere, right?  But it happened.  Silence...maybe with a little edge of frustration.  Sorry.  Blame it on the drugs.

I get a little nervous when customers announce a commitment to "going to the cloud" without really understanding the benefits they can expect or how they plan to achieve those benefits.  It's putting the cart before the horse and wondering why things don't move forward.  Just makes no sense to me.

Don't get me wrong.  I think many enterprises have much to gain from considering a cloud approach for their enterprise IT.  I just think they should understand the basic concepts and know why they're taking the leap before they jump.  Different enterprises will come to different conclusions.

But I see it more and more as time goes by...people buying into the hype without really knowing why.  Then again, maybe it's my perspective that's off?  If so, blame it on the drugs.

ASM Commands : 1 -- Adding and Using a new DiskGroup for RAC

Hemant K Chitale - Sat, 2014-08-16 10:22
In 11gR2 Grid Infrastructure and RAC

On node1, I discover and add a disk to ASM.  NFS "devices" asmdisk.1 to asmdisk.6 are present as ASM Disks. asmdisk.7 has been added on NFS mount point /data1. (Disks asmdisk.3 to asmdisk.6 are on /data2)

I start on node1 in my Cluster

[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:42:02 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm_diskstring

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring string /crs/*, /data1/*, /data2/*, /f
ra/*
SQL> !ls -l /data1/asm*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 16 23:33 /data1/asmdisk.7

SQL> create diskgroup DATA3 disk '/data1/asmdisk.7';
create diskgroup DATA3 disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only
1


SQL> create diskgroup DATA3 external redundancy disk '/data1/asmdisk.7';

Diskgroup created.

SQL>
SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
5 DATA3 1953

SQL>

I now have a new DiskGroup using External Redundancy with a single disk.  Is it visible at node2 ?

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:47:45 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
0 DATA3 0

SQL>

Why is the size not visible yet ?  Because, although the CREATE from node1 had also MOUNTed the Disk Group, it hasn't been mounted on node2 yet.

SQL> alter diskgroup DATA3 mount;

Diskgroup altered.

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
5 DATA3 1953

SQL>

Can I confirm the underlying disk ?

SQL> select group_number, disk_number, header_status, state, total_mb
2 from v$asm_disk
3 where group_number = 5;

GROUP_NUMBER DISK_NUMBER HEADER_STATU STATE TOTAL_MB
------------ ----------- ------------ -------- ----------
5 0 MEMBER NORMAL 1953

SQL>


What happens when I create a tablespace/datafile in this DiskGroup, from the instance on node1 ?

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:08:31 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create tablespace NEW_TBS datafile '+DATA3';
create tablespace NEW_TBS datafile '+DATA3'
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA3'
ORA-15045: ASM file name '+DATA3' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA3
ORA-15081: failed to submit an I/O operation to a disk


SQL>

Why do I get this error ? I could create a DiskGroup on the ASM Disk but I couldn't add a datafile ?  Let me check the permissions.

SQL> !sh
sh-3.2$ cd /data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.7
sh-3.2$ su grid
Password:
sh-3.2$ pwd
/data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ chmod 775 asmdisk.7
sh-3.2$ ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$

The oinstall group that is used by "oracle" did not have write permissions. Let me go back to Oracle now after having granted the permissions.

sh-3.2$ exit
exit
sh-3.2$ exit
exit

SQL> l
1* create tablespace NEW_TBS datafile '+DATA3'
SQL> /

Tablespace created.

SQL>

The CREATE TABLESPACE has succeeded.  I can verify the datafile and the ASM file from node2 now.

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:17:19 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, file_number, bytes/1048576, type, redundancy
2 from v$asm_file
3 where group_number=5;

GROUP_NUMBER FILE_NUMBER BYTES/1048576
------------ ----------- -------------
TYPE REDUND
---------------------------------------------------------------- ------
5 256 100.007813
DATAFILE UNPROT


SQL>
SQL> exit
suDisconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:19:34 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select file_name, bytes/1048576 from dba_data_files
2 where tablespace_name = 'NEW_TBS';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
100


SQL>

Now, I have the new DataFile visible in ASM and the Database on the New DiskGroup.
.
.
.

Categories: DBA Blogs

Webcast - Oracle Database In-Memory Option

Next to the recent announcement by Larry Ellison on the Future of the Database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

PeopleTools 8.54 Upgrade now Available

Jim Marion - Fri, 2014-08-15 23:38

Today Matthew Haavisto of the PeopleTools strategy team announced that the PeopleTools 8.54 upgrade is now available. Visit the PeopleSoft Technology Blog to learn more.

PeopleTools 8.54 Upgrade Now Available

PeopleSoft Technology Blog - Fri, 2014-08-15 16:36
We recently announced that PeopleTools 8.54 is generally available.  Now we are happy to announce that PeopleTools 8.54 Upgrade is also available for customers upgrading to 8.54 from earlier releases.  This documentation home page provides a wealth of information on upgrading to this important release.

All Access Pass to Oracle Support

Joshua Solomin - Fri, 2014-08-15 14:05


Looking for tips, recommendations and resources to help you keep your Oracle applications and systems running at peak performance? Want to find out how to get more out of your Oracle Premier Support coverage?

More than 500 experts from across Services and Support will be on hand at Oracle OpenWorld to answer your questions and share best practices for adopting and optimizing Oracle technology.

  • Find out what Oracle experts know about the best tools, tips and resources for supporting and upgrading Oracle technology. Attend one of our “Best Practices” sessions.
  • Stop by the Oracle Support Stars Bar to talk with support experts. Open daily @ Moscone West, Exhibition hall 3161.
  • See Oracle support tools in action at one of our demos.

View the schedule of all of our Oracle Premier Support activities at Oracle OpenWorld for more information.

See you there!

In-memory limitation

Jonathan Lewis - Fri, 2014-08-15 13:51

I’ve been struggling to find time to have any interaction with the Oracle community for the last couple of months – partly due to workload, partly due to family matters and (okay, I’ll admit it) I really did have a few days’ holiday this month. So making my comeback with a bang – here’s a quick comment about the 12.1.0.2 in-memory feature, and how it didn’t quite live up to my expectation; but it’s also a comment about assumptions, tests, and inventiveness.

One of the 12.1.0.2 manuals tells us that the optimizer can combine the in-memory columnar storage mechanism with the “traditional” row store mechanisms – unfortunately it turned out that this didn’t mean quite what I had hoped; I had expected too much of the first release. Here’s a quick demo of what doesn’t happen, what I wanted to happen, and how I made it happen, starting with a simple definition (note – this is running 12.1.0.2 and the inmemory_size parameter has been set to enable the feature):


create table t1 nologging
as
select	*
from	all_objects
where	rownum <= 50000
;

alter table t1 inmemory
no inmemory (object_id, object_name)
inmemory memcompress for query low (object_type)
-- all other columns implicitly inmemory default
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

begin
	dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

rem
rem	Needs select on v_$im_column_level granted
rem

select
	table_name,
	column_name,
	inmemory_compression
from
	v$im_column_level
where	owner = user
and	table_name = 'T1'
order by
	segment_column_id
;

explain plan for
select
	last_ddl_time, created
from
	t1
where	t1.created > trunc(sysdate)
and	t1.object_type = 'TABLE'
and	t1.subobject_name is not null
;

select * from table(dbms_xplan.display);

All I’ve done at this point is create a table with most of its columns in-memory and a couple excluded from the columnar store. This is modelling a table with a very large number of columns where most queries are targeted at a relatively small subset of the data; I don’t want to have to store EVERY column in-memory in order to get the benefit of the feature, so I’m prepared to trade lower memory usage in general against slower performance for some queries. The query against v$im_column_level shows me which columns are in-memory, and how they are stored. The call to explain plan and dbms_xplan then shows that a query involving only columns that are declared in-memory could take advantage of the feature. Here’s the resulting execution plan:

-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |     1 |    27 |    73   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS INMEMORY FULL| T1   |     1 |    27 |    73   (9)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - inmemory("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))
       filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

Note that the table access full includes the inmemory keyword; and the predicate section shows the predicates that have taken advantage of in-memory columns. The question is – what happens if I add the object_id column (which I’ve declared as no inmemory) to the select list.  Here’s the resulting plan:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    32 |  1818   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    32 |  1818   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

There’s simply no sign of an in-memory strategy – it’s just a normal full tablescan (and I didn’t stop with execution plans, of course, I ran other tests with tracing, snapshots of dynamic performance views etc. to check what was actually happening at run-time).

In principle there’s no reason why Oracle couldn’t use the in-memory columns that appear in the where clause to determine the rowids of the rows that I need to select and then visit the rows by rowid but (at present) the optimizer doesn’t generate a plan to do that. There’s no reason, though, why we couldn’t try to manipulate the SQL to produce exactly that effect:


explain plan for
select
        /*+ no_eliminate_join(t1b) no_eliminate_join(t1a) */
        t1b.object_id, t1b.last_ddl_time, t1b.created
from
        t1 t1a, t1 t1b
where   t1a.created > trunc(sysdate)
and     t1a.object_type = 'TABLE'
and     t1a.subobject_name is not null
and     t1b.rowid = t1a.rowid
;

select * from table(dbms_xplan.display);

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    64 |    74   (9)| 00:00:01 |
|   1 |  NESTED LOOPS               |      |     1 |    64 |    74   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |     1 |    31 |    73   (9)| 00:00:01 |
|   3 |   TABLE ACCESS BY USER ROWID| T1   |     1 |    33 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))
       filter("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))

I’ve joined the table to itself by rowid, hinting to stop the optimizer from getting too clever and eliminating the join. In the join I’ve ensured that one reference to the table can be met completely from the in-memory columns, isolating the no inmemory columns to the second reference to the table. It is significant that the in-memory tablescan is vastly lower in cost than the traditional tablescan – and there will be occasions when this difference (combined with the knowledge that the target is a relatively small number of rows) means that this is a very sensible strategy. Note – the hints I’ve used happen to be sufficient to demonstrate method but I’d be much more thorough in a production system (possibly using an SQL baseline to fix the execution plan).

Of course, this method is just another example of the “visit a table twice to improve the efficiency” strategy that I wrote about a long time ago; and it’s this particular variant of the strategy that allows you to think of the in-memory columnar option as an implementation of OLTP bitmap indexes.


Oracle Database 12c In-Memory Feature – Part V. You Can’t Use It If It’s Not “Enabled.” Not Being Able To Use A Feature Is An Important “Feature.”

Kevin Closson - Fri, 2014-08-15 13:04

This is part 5 in a series: Part I, Part II, Part III, Part IV, Part V.

Synopsis

This blog post is the last word on the matter.

Enabled?  It’s About Usage!

You don’t get charged for Oracle feature usage unless you use the feature. So why does Oracle inconsistently use the word enabled when we care about usage? If enabled precedes usage then enabled is a sanctified term. Please read on…

It’s All About Getting The Last Word? No, It’s About Taking Care Of Customers.

On August 6, 2014  Oracle shared their last word and official statement on the matter of bug-ridden tracking of the Oracle Database 12c  In-Memory feature usage in a quote to the press at CBR. I’ll paraphrase first and then quote the article. Here is what I hear when I read the words of Oracle’s spokesman:

Yeah, my bad, we have a bug. The defective code erroneously tracks feature usage for an Enterprise Edition additional cost option priced at $23,000 per processor core. Don’t worry. When we track this particular feature usage we’ll ignore it should you be audited. You have our spoken word that we’ll just shine this one on. Here, let me trade a few confusing words about usage without using the word enabled or disabled since those are taboo.

My paraphrase probably draws a more serene picture than the visions of tip-toeing and side-stepping conjured up by the following words I’ll quote from the CBR article. Bear in mind the fact that the bug spoken of in the quote is 19308780–a bug, by the way, that is not readable by maintenance contract holders. Now I’ll quote the article:

Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October.

Yes, we knew it was a bug. I merely had to do the hard work of getting Oracle to acknowledge it. The article continued with the following quote. Please ignore the fact that Oracle’s spokesman refers to me by my first name. Focus instead on the fact that throughout parts 1 through 4 in my series I suffered erroneous feature usage reporting because of a bug (software defect). I quote:

Kevin initially claimed that feature tracking could report In-Memory usage, and therefore impact licensing, without the end-user doing anything. This was and is still not the case. Customer licensing of Oracle Database In-Memory is not impacted by the bug that Maria notes in her blog. When an end-user explicitly undertakes actions to set the INMEMORY attribute on a table but the In-Memory column store has not been allocated (by setting the inmemory_size parameter to a non zero value), the bug results in feature tracking incorrectly reporting In-Memory ‘in use’. However as no column store has been allocated, the feature is not in use and therefore there is no licensing impact.

 

Ah yes. The old “it’s not in use but it reports it’s in use” situation. That could have been conveyed in very short sentences…could have.

Since the bug spoken of in the above quote is not visible to contract holders I’m just going to let you mull over the circular logic.  This whole situation could be a lot simpler if Oracle would either a) make a bug description visible to contract holders so customers know what is broken and how to test whether it got fixed when the patch is eventually applied and/or b) add this defect to MOS 1309070.1 which is a bug that tracks all the other bugs in feature usage reporting. Yes, indeed, there are other bugs of this sort with other features. All software has bugs.

Last Word On The Matter

My last word on the matter has to do with the fact that the feature cannot be unlinked. It is a very expensive, and very useful, important feature. As I pointed out in Part II the feature cannot be absolutely disabled at the executable level as is the case for other high cost options like Real Application Clusters and Partitioning. I think Oracle is trying to tell us it is impossible computer science to make it an unlinkable feature–at least that’s how I interpret the following words in a blog post at Oracle.com:

Oracle Database In-Memory is not a bolt on technology to the Oracle Database. It has been seamlessly integrated into the core of the database as a new component of the Shared Global Area (SGA). When the Oracle Database is installed, Oracle Database In-Memory is installed. They are one and the same. You can’t unlink it or choose not to install it.

Now maybe this is not saying there is no way to code the feature as unlinkable. Maybe it’s saying the choice was made to not make it unlinkable. I don’t know. If, however, we are to believe that the mere fact the feature uses the SGA makes  it some sort of atomic-level symbiotic parasite, well, that argument doesn’t  hold water. Indeed, Real Application Clusters is massively integrated with the SGA. Ever heard of Cache Fusion? With Cache Fusion data blocks get shuttled from one SGA to another across hosts in a cluster! Real Application Clusters is unlinkable–that’s unthinkable!

 

What Is Unlinkable Anyway

There might be folks that don’t know what we mean when we say a feature is unlinkable. This doesn’t mean all the code for the feature is yanked out of the binary. It simply means that a single–or perhaps a few–binary objects are linked into the Oracle executable to enable the feature. If unlinked there is absolutely no way to use the feature–as is the case with, for instance, Real Application Clusters.

And not being able to use the feature is an important feature!

So let’s ponder the insurmountable computer science that must surely be involved in implementing the In-Memory Column Store feature as unlinkable.

Oracle has told us the INMEMORY_SIZE initialization parameter is the on/off button for the feature. That means there is a single, central on/off button that is, indeed, able to be manipulated even by the user. Can you imagine how difficult it must be to implement a global variable–even a simple boolean–that gets linked in and checked when one boots the database? Not hard to grasp. What if the variable had a silly name like inmemory_deactivated? What if the feature activation module–let’s call it inmem.o–had inmemory_deactivated=TRUE but an alternate module called inmemON.o had inmemory_deactivated=FALSE? In much the same way we relink Real Application Clusters, the link scripts manipulate the file name so that the default (with feature deactivated) gets replaced with the activated module–only if the user wants the possibility of using the feature. How would all this deep, dark, complex code come together? Well, when the database instance is booted, inmemory_deactivated is evaluated and, regardless of the user’s setting of INMEMORY_SIZE, the In-Memory feature is really, truly, disabled–and most importantly not usable. No possibility for confusion. Would that be better than a game of Licensed-Feature Usage Prevention Twister(tm)?

[Image: 1966 Twister game cover]

Intensely Deep Engineering Difficulty

Now, imagine that. We didn't even have to use the back of a cocktail napkin to sketch a solution to the supposedly insurmountable problem of making the In-Memory Database feature unlinkable. We simply a) drew upon our understanding of other SGA-integrated features like Real Application Clusters, b) recalled how unlinking works for other features, and c) drew upon our basic understanding of the C programming language vis-à-vis global variables and object linking.

Let me summarize all that: there is a single, user-modifiable, boot-time parameter that disables In-Memory Database, according to Oracle's blog and spokesman assertions. Um, that's a pretty simple focal point around which to make the feature unlinkable.

Summary

Yes, Oracle could implement a method for making the In-Memory Column Store feature an unlinkable option, just as they did for Real Application Clusters. I can only imagine why they chose not to (visions of USD 23,000 per processor core).


Filed under: oracle

Smoothing the Transition – The New Smart View 11.1.2.1.102 for Microsoft Office and OBIEE

Rittman Mead Consulting - Fri, 2014-08-15 12:24

Introductions

There’s a good chance that, if you’re reading this, you perform some reporting, analytics, or data stewardship role, or some combination of all three. And be it for a large corporation or a small company, there are likely standards and practices that govern how those jobs are performed on a day-to-day basis; not easily changed, and perpetually validated by big budgets and long careers. It is equally likely that deeply ingrained within these reporting practices lies some moderate-to-heavy use of Excel. It wasn’t long ago that I found myself using the spreadsheet program on a daily basis, for hours upon hours at a time.

What this essentially amounted to:

  • Pulling down large amounts of data from our department’s data model using large SQL queries that could take most of the day to work out, let alone run; waiting on results could easily warrant a bathroom break, a phone call, or, if you were feeling adventurous, catching up on email.
  • Validating your results
  • Exporting to Excel (key step here!)
  • Massaging and formatting your data with innumerable and often unwieldy functions, each of which deserved its own time slot on your schedule to figure out
  • Proofing your analysis so that it got to management in ship shape
  • Hoping that the numbers from an analyst in another department, who used the same metric on their report and would be at the same meeting, actually coincided with yours

                   

Fast forward a bit, and I’m sitting here writing this blog as a sort of proverbial white flag in the great battle between Excel and the behemoth that is OBIEE. And just what is this white flag? Why, it’s Oracle’s most recent iteration of Smart View, which provides expanded functionality and support for the Microsoft Office suite of programs, namely its golden boy, Excel. That’s right, Excel: the darling of office staff everywhere, the program upon which empires rise and fall. To paraphrase www.cfo.com, some 64% of public and private companies still use Excel and other “manual” solutions to perform their finance functions. So, in the world of the spreadsheet, when does it make sense to cross that blurry line from cell to subject area? Smart View now makes answering that question much easier. It seems Oracle has really gotten a grasp on the formatting shortcomings of the last version and made up for them in spades. Or so they claim, at least.

The Test Run – OBIEE to Excel 

The example below illustrates a simple import via Smart View. I generated a dashboard in Answers that mimics an Excel design I found online. Thank the good folks over at www.chandoo.org for their excellent Excel dashboarding skills and for providing plenty of great examples. The dashboard contains a table with a selection of KPIs that the user may sort on via a View Selector (each view is sorted on a different KPI and sits on a different Compound Layout). Upon selecting a KPI, the analysis displays the Top 10 products by that KPI. In addition, the table contains conditional formatting that simply alerts users to the variance between the different KPIs and their targets. Lastly, there is a scatter plot view that displays our Product dimension through the lens of Revenue and Quantity. Per the most recent Oracle documentation, we shouldn’t have any trouble including the current selections of a dashboard prompt either. Let’s see how it performs when we move it over to Excel.

 “OBIEE report and page prompts are fully supported as part of the import process. Dashboards can be imported through Oracle Smart View on a per page basis or the entire dashboard. Prompts are applied at the current state of the logged in user. Future releases of the product will support dashboard prompts directly through Microsoft Office.”

[Screenshot: SVB 2]

The Results

And there you have it! Excel displays our table and graph views per the most recent selection from the dashboard prompt. But wait! Our conditional formatting seems to be missing, and this is the case even when the view is exported directly from the analysis as an Excel workbook.

[Screenshot: SVB 3]

Conditional Formatting

For our second scenario, let’s see how Excel handles a simpler, heat-map-style conditional format. I’ve made a simple table on our dashboard that measures Revenue, Quantity Sold, and Average Order ($). I set up conditional formatting around the Average Order measure to see how Excel handles importing the color scheme for the currently selected Time parameters on the dashboard.

By contrast, we see that this simpler, heat-map style of conditional formatting is preserved when imported from OBIEE through Smart View. So perhaps it is Excel’s lack of a corresponding graphic in the previous example that caused the migration snafu? Our arrow graphics from OBIEE don’t even seem to render as per the documentation.

“Oracle BI Customizations and View Standards – The Import of Oracle BI content can leverage the customizations and view standards used within an OBIEE environment. All view designed modifications such as conditional formatting, background colors or data configuration is automatically translated to the Microsoft Office environment.”

[Screenshot: SVB 4]

[Screenshot: SVB 5]

 

Excel to OBIEE

Let’s see what the latest edition of Smart View offers when moving an analysis from Excel to OBIEE.
Because we weren’t able to import our full table view, why don’t we construct it using the View Designer? The interface looks clean and provides an intuitive approach to producing basic Answers views. Accessing our subject area, I simply selected the columns that matched those on our Answers analysis. After clicking ‘OK’, sorting our Revenue column from largest to smallest, and doing a little deleting, we have a pseudo ‘Top 10’ analysis by Revenue. Given the aesthetic attributes of our Answers analysis, let’s see how we’re going to replicate them in Excel.

 

[Screenshot: SVB 6]

[Screenshot: SVB 8]

[Screenshot: SVB 7]

After selecting the table, we can navigate to the Design tab under ‘Table Tools’ and select an alternating grey scheme, which gives us the ‘Enable Alternate Styling’ design quality. Now let’s add some formulas and conditional formatting to give us the equivalents of our calculated columns. We can insert two columns, one between Revenue and its Target and one between Qty and its Target, to make room for conditional formatting and Excel’s Icon Sets feature. We then create a simple formula in each new column that subtracts Revenue and Quantity from their respective targets, assign conditional formatting, and voilà! Excel even has a check box that lets you show the arrow only.

 

[Screenshot: SVB 9]

[Screenshot: SVB 10]

From Excel, we can select Publish View to deposit our analysis into our Shared Folder. The results indicate a sort of ‘two-way street’ between Smart View and Excel: neither side fully supports the formatting capabilities of the other, though Smart View seems to give ground with every new release. In this blog we’ve taken a look at how Smart View handles some mildly complex conditional formatting and what it takes to replicate that formatting in native Excel. In an environment where reports fly back and forth between the two platforms, Smart View definitely makes sense; however, it might be advisable to deliver the minimum of what is needed and let the end user make any formatting-based modifications. After all, who would want to do all that work only to have it lost in translation?

Categories: BI & Warehousing

Best of OTN - Week of August 10th

OTN TechBlog - Fri, 2014-08-15 11:12

A brief public service announcement before we get into the OTN community’s best-of content for the week... Four Bands. Three Epic Nights. Join Oracle for three evenings of entertainment and fun, all during Oracle OpenWorld and JavaOne, September 28-October 2, San Francisco. Learn More

Architect Community

Any discussion of the best of OTN must include the OTN ArchBeat Podcast. Consistently among the top three most popular Oracle podcasts, ArchBeat focuses on real conversations with community members. Normally I pick the topics and the guest panelists for each program, but now you have a chance to take over that role and become a Guest Producer. In that role you'll pick the discussion topic and the panelists, while I do all of the grunt work, allowing you to bask in the glory.

Want to know how to become an OTN ArchBeat Podcast Guest Producer? You'll find the details here: Yes, you can take over the OTN ArchBeat Podcast!

And here are two examples of OTN ArchBeat Podcasts produced by community members:

-- OTN Architect Community Manager Bob Rhubart

Database Community

OTN DBA/DEV Watercooler Blog - Did You Say "JSON Support" in Oracle 12.1.0.2?

-- OTN Database Community Manager Laura Ramsey

Java Community

The Java Source Blog - walkmod: A Tool to Apply Coding Conventions.

Friday Funny: I was worried the #NSA might be spying on me. Thanks, @pacohope.

-- OTN Java Community Manager Tori Wieldt

Systems Community

The OTN Systems Community HomePage - Find Great Resources for System Admins and Developers.

-- OTN Systems Community Manager Rick Ramsey