Feed aggregator

Oracle 12c Last Login time for Non-Sys Users

VitalSoftTech - Tue, 2014-06-10 09:45
Oracle 12c introduced a new, very useful security feature to store and display the last login information for non-SYS users logging in via SQL*Plus. The last login time is displayed in the banner by default when we log into the database using SQL*Plus in 12c. Related Articles • Oracle 12c: Data Redaction Unified Auditing Extended SHA-2 […]
Categories: DBA Blogs

Telling Tales

Greg Pavlik - Sun, 2014-06-08 18:50
After struggling to find time for many months, I finally was able to sit down and watch, without interruption, Benjamin Bagby's Beowulf performance - an adaptation that relies on Bagby's voice and a reconstruction of a 6th-century, six-tone Anglo-Saxon harp. The performance is engrossing and provokes a strong imaginative response, one that would originally have been communally experienced. Of course, the only way to revive a sense of communal experience in the case of Bagby is to see him perform live - however, given that the performance is entirely in Old English, and as such mostly unintelligible without subtitles, I think a digital adaptation may be a necessary tradeoff. In many ways, Bagby's Beowulf is a reminder of how impoverished our notion of entertainment is - ephemeral, base, isolating, and essentially throwaway as a rule.

By the way, it's not entirely the case that the English are unable to create something of the same texture today - several times during the performance I thought of Judith Weir's one-person, unaccompanied opera King Harald's Saga. Weir's work is much shorter, principally a musical composition, and less poetically rich, so it is difficult to compare the two directly: Beowulf remains the province of a balladeer first and foremost, and this is a genre that more and more feels lost to our world - poetry today rarely seems to be meant to be read aloud and even more rarely follows epic formats. This is a lost social phenomenon, for which we are impoverished: in fact, the last long work of a balladeer I read was the Ethiopian Enzira Sebhat, itself a medieval work dedicated to the Virgin Mary. The closest - though only indirectly comparable - works to the Enzira Sebhat that I am aware of currently being composed are the akathistos hymns of the Russian Orthodox tradition. And while many of those recent compositions are less-than-accomplished literary works, they unquestionably represent a rich, living, and at times very beautiful means of transmission of communal memory and values. While I am not aware of any recent akathistos compositions that have the expressive beauty and originality of the work of the Byzantine hymnographer Romanos the Melodist, the modern akathist has sometimes proven a source of inspiration for exceptionally great art: the late Sir John Tavener's setting of the "thanksgiving akathist" being perhaps the most significant case in point.

The LGWR Three Second Rule. Really?

This page has been permanently moved. Please CLICK HERE to be redirected.

Thanks, Craig.

Does the Oracle Database 12c Log Writer Really Sleep For Three Seconds?

I have learned that part of Oracle Database performance tuning is checking whether what you have been taught is actually true. What I'm writing about today has been done before in Oracle Database 10g and 11g, but I wanted to document it using 12c.

When I was first learning about the Oracle Database process architecture, the instructor said there are a number of reasons the log writer (LGWR) background process will wake up and do "something." In addition to this, if the LGWR was not signaled to wake up, every three seconds it would wake up by itself and check if there was something it needed to do.

Is this "three second rule" really true? That's what this posting is all about.

If you don't want to read the details below, I created a short video for you to watch.

Here is what you just saw in static text. First let's get the LGWR process PID. Here's one way to do this.
$ ps -eaf | grep lgwr
oracle 41977 1 0 May16 ? 00:00:50 ora_lgwr_prod35
oracle 46294 46110 0 07:39 pts/0 00:00:00 grep lgwr
oracle 60264 1 0 Jan13 ? 00:14:14 ora_lgwr_prod30
My Oracle Database 12c instance is prod35, so the process I'm interested in is 41977.

Note: Since this is a 12c instance, there are also two additional log writers, lg00 and lg01. But that's a topic for another posting!

At this point, for simplicity's sake, it's important there be little instance activity. It will make it much simpler to find what we are looking for... the three-second sleep.

To watch the LGWR process, I'm going to use the Linux strace command. To easily see the system call timing, I'm going to use the -r option. I'm also going to redirect the output to the out.txt file using the -o option. I'll let the strace below run for about 15 seconds. OK... here we go:
$ strace -rp 41977 -o out.txt
Process 41977 attached - interrupt to quit
^CProcess 41977 detached
To break out of the strace, I simply did a control-c and the result is what you see above. Let's look at the first three lines of the out.txt file so I can tell you what we will be looking for.
$ head -3 out.txt
0.000000 clock_gettime(CLOCK_MONOTONIC, {19940373, 720653971}) = 0
0.000186 gettimeofday({1401979552, 853214}, NULL) = 0
0.000207 gettimeofday({1401979552, 853444}, NULL) = 0
With the -r option, the first column (far left) shows how long the system call directly above took. For example, in the snippet above, the first clock_gettime call took 0.000186 seconds. The first gettimeofday call took 0.000207 seconds. Got it? Read on!

You may wonder why the system call's time is on the next line down. It actually makes sense, because strace displays information as it becomes available. When the call starts, strace obviously does not know the call's duration, so it just displays the call itself. When the call completes and strace therefore has the time, it shows the return details (e.g., "= 0"), prints a line feed, prints the call time at the start of the next line, displays the next call, and repeats. OK, back to our objective.

What we are looking for is a three-second sleep. In other words, we are looking for a call that takes three seconds. And since a call's duration is shown in the first column of the next line down, we want to look for values in the first column that start with 3.0.

Below is an example of what I found in the out.txt file.
0.000033 clock_gettime(CLOCK_MONOTONIC, {19940376, 746821505}) = 0
0.000033 semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) = -1 EAGAIN
                   (Resource temporarily unavailable)
3.000436 clock_gettime(CLOCK_MONOTONIC, {19940379, 747351774}) = 0
0.000188 gettimeofday({1401979558, 879922}, NULL) = 0
0.000171 clock_gettime(CLOCK_MONOTONIC, {19940379, 747708976}) = 0
Looking at the above output snippet, notice the time of 3.000436. Then look at the call directly above it: the semtimedop call. This call can be used to put a process to sleep, yet still allow the process to be woken up. In other words, the LGWR can set an alarm for three seconds, go to sleep, and if it is not disturbed, it will wake up in three seconds. Shown above is an example of the LGWR setting the alarm for three seconds and not being woken up, so the alarm fired (hence the EAGAIN return). This is an example of the three second rule!

If you do what I did and look at your output file, you'll see lots of these "three seconds." If you do this on a pretty much idle Oracle Database 12c instance's LG01 process (one of the two default LGWR children), it may contain mostly semtimedop calls, each with a three-second sleep.

As you might have guessed, the sleep time is part of the system call. Look closely at the semtimedop call below:
semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0})
Notice the "{3, 0}" near the far right? This defines the maximum delay, that is, the sleep. The 3 is the number of seconds and the 0 is the number of nanoseconds. If you're interested in other sleep options and the actual code to make this work, check out this posting.

Oracle could have used the select system call (not "select * from...") but that would have been a bad idea, because then the process could not be woken before the three-second limit. This topic touches very closely on Oracle Time Based Analysis and, more specifically, on Oracle's wait interface. I really get into this, including sample C code, in my online video seminar entitled Introduction To Oracle Time Based Analysis.

So there you go! The Oracle kernel code developers set the default sleep time to three seconds along with the ability for the process to be awoken, should the need arise.

Give It A Try
Check this out for yourself on a test or QA system. Once you see it for yourself, you'll never forget about it!

But it gets even better! If you really want to dig into this, do the same but on your database writer (DBWR). Does the DBWR have a three second rule? Does Oracle use the same system call? I'm not going to tell you the answer. You'll have to figure that out for yourself!

Always A Three Second Delay?
You may have noticed that sometimes the process does not sleep the entire three seconds and at other times Oracle sets the sleep time to less than three seconds. What's up with that? That will be my next posting!

Thanks for reading,

Categories: DBA Blogs

Apex theme fun

Gary Myers - Fri, 2014-06-06 20:05

Sometimes you are working with an off-the-shelf product and find something odd, and you're not quite sure whether it is a bug, a feature or whether you've lost the plot.

I use Oracle's Application Express, and was digging into the included theme_18. The templates refer to classes "t18success" and "t18notification"

And then I go looking into the CSS and see hash / ID selectors.

#t18Success{margin:5px auto;font-size:12px;color:#333;background:#DAEED2;width:600px;background-repeat:no-repeat;padding:5px;border:1px #95C682 solid;border-right:none;border-left:none;}

#t18Notification{margin:5px auto;padding:5px;font-size:12px;color:#333;text-align:center;vertical-align:top;border:1px #ffd700 solid;border-right:none;border-left:none;background-color:#ffffcc;width:600px;}

For added confusion, HTML class names are case-sensitive, and CSS class and ID selectors are case-sensitive too (in HTML it is only element names that are case-insensitive), so the case differences are relevant on top of the class-versus-ID mismatch.

The application looks nicer if I change the CSS to class selectors, and then I get coloured, dismissible boxes rather than hard-to-read, unstyled messages. I could probably get the same effect by changing the id="MESSAGE" in the templates, but that seems riskier. At least with the CSS, I am confident that I am just changing the appearance and it shouldn't affect the logic.

Digging deeper, the CSS for more than a dozen of the built-in themes uses the ID selector "#notification-message". About half a dozen themes have only a class selector, and another three have both (with a prefix of t followed by the theme number). Finally, three just have the ID selector with the theme prefix.

My gut feel is that they switched from the ID to the class selectors stopping in various places on the way. And some of those places aren't very pretty.

I'm not sure I see the benefit in having the theme number embedded in templates and selectors. The template determines which theme CSS file to get, and as long as the template and CSS are consistent, the theme number just seems to add one more place you have to edit when you want to customise a theme.

This was all checked on a fresh Apex 4.0 instance because I just installed the new Windows 64-bit version of Oracle Express Edition. I'll do an upgrade of that default to the latest 4.2 this weekend too.

Oracle Database Express Edition 11g Release 2: Released

Asif Momen - Fri, 2014-06-06 12:36
Oracle Database Express Edition 11gR2 has been released. You may download the software and documentation from the links below:

Happy downloading !!!

AeroGear Push Releases

Matthias Wessendorf - Thu, 2014-06-05 10:25

Today we are releasing two libraries around our Push offerings!

  • UnifiedPush Server 0.10.4
  • Java Sender client 0.7.0

Both releases contain bug fixes and small improvements. This will be the last release of the 0.10.x series; the next release (0.11) will contain an all-new UI (powered by Angular.js), an integration with Keycloak, and other new features such as analytics around push messages.

Both releases are available on Maven Central. The UnifiedPush Server on our OpenShift cartridge was also updated to contain 0.10.4 for our PaaS offerings.

Last but not least: this summer we will be releasing AeroGear MobilePush 1.0.0 to the community!

Stay tuned for more

Fun with SQL - Silver Pockets Full

Chet Justice - Wed, 2014-06-04 17:13
Silver Pockets Full, send this message to your friends and in four days the money will surprise you. If you don't, well, a pox on your house. Or something like that. I didn't know what it was, I just saw this in my FB feed:

Back in November, I checked to see the frequency of having incremental numbers in the date, like 11/12/13 (my birthday) and 12/13/14 (Kate's birthday). I don't want to hear how the rest of the world does their dates either; I know (I now write my dates like YYYY/MM/DD on everything, just so you know, that way I can sort it... or something).

Anyway, SQL to test out the claim of once every 823 years. Yay SQL.

OK, I'm not going to go into the steps necessary because I'm lazy (and I'm just lucky to be writing here), so here it is:
select *
  from (
select to_char( d, 'yyyymm' ) year_month,
       count( case
                when to_char( d, 'fmDay' ) = 'Saturday' then 1
                else null
              end ) sats,
       count( case
                when to_char( d, 'fmDay' ) = 'Sunday' then 1
                else null
              end ) suns,
       count( case
                when to_char( d, 'fmDay' ) = 'Friday' then 1
                else null
              end ) fris
  from ( select to_date( 20131231, 'yyyymmdd' ) + rownum d
           from dual
        connect by level <= 50000 )
 group by
       to_char( d, 'yyyymm' )
       )
 where fris = 5
   and sats = 5
   and suns = 5
So over the next 50,000 days, this happens 138 times. I'm fairly certain that doesn't rise to the once-every-823-years claim. But it's cool, maybe.
YEAR_MONTH       SATS       SUNS       FRIS
---------- ---------- ---------- ----------
201408              5          5          5
201505              5          5          5
201601              5          5          5
201607              5          5          5
201712              5          5          5
128 more occurrences...
214607              5          5          5
214712              5          5          5
214803              5          5          5
214908              5          5          5
215005              5          5          5

138 rows selected
I'm not the only dork who does this either; here's one in Perl. I'm sure there are others, but again, I'm lazy.
Categories: BI & Warehousing

Oracle 12c - New SYS-level Administration Privileges

alt.oracle - Wed, 2014-06-04 08:47
For a while now, Oracle has been moving its security model toward a more differentiated set of privileges than just SYSDBA and SYSOPER.  We saw this in 11g with the SYSASM privilege.  This is in response to the growing number of DBA shops that delineate permissions at a more granular level, even among DBAs.  Rather than saying, “Here’s my DBA team, everyone has SYSDBA,” more and more IT shops are defining DBA job roles at a lower level.  If an application DBA never has the job responsibility to shut down or back up a database, then maybe they don’t need SYSDBA.  However, the lines aren’t always that clear, so Oracle has been adding new levels of admin privileges.  In 12c, they’ve added several.

The SYSBACKUP privilege allows a user to connect to the target database and run RMAN commands, without requiring SYSDBA.  Here’s what it looks like.


Recovery Manager: Release - Production on Tue Aug 13 14:57:41 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target "altdotoracle@test1 as sysbackup"

target database Password:
connected to target database: TEST1 (DBID=1787845942)

RMAN> select user from dual;

using target database control file instead of recovery catalog

RMAN> backup tablespace users;

Starting backup at 13-AUG-13
allocated channel: ORA_DISK_1
Finished backup at 13-AUG-13


In truth, SYSBACKUP has a lot more rights than just those needed to do something like a hot backup, including startup and shutdown, creating tables and tablespaces and executing a number of supplied packages.  So from that perspective, I’m not exactly sure they hit the mark on this one.  Nevertheless, it does differentiate SYSBACKUP from the god-like SYSDBA admin privilege to some degree.  There are also the new admin privileges SYSDG, for administering Data Guard, and SYSKM to do Transparent Data Encryption (TDE) administration.  Consequently, there are new columns in v$pwfile_users to reflect these new privs.

SQL> desc v$pwfile_users
 Name                        Null?        Type
 --------------------------- --------     ---------------
 USERNAME                    VARCHAR2(30)
 SYSDBA                      VARCHAR2(5)
 SYSOPER                     VARCHAR2(5)
 SYSASM                      VARCHAR2(5)
 SYSBACKUP                   VARCHAR2(5)
 SYSDG                       VARCHAR2(5)
 SYSKM                       VARCHAR2(5)
 CON_ID                      NUMBER

If we examine the view itself, we see this in action.

SQL> select * from v$pwfile_users;

USERNAME          SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
----------------- ----- ----- ----- ----- ----- ----- ----------
SYS               TRUE  TRUE  FALSE FALSE FALSE FALSE          0

Categories: DBA Blogs

Monthly article publication on Toad World portal

Syed Jaffar - Wed, 2014-06-04 06:50
Let me quickly give you an update if you are wondering why I kept quiet, not making much buzz on my blog. Well, over the past year or so, I have been contributing monthly articles to the Toad World newsletter, and all my articles are published on their portal. I encourage you all to visit the Toad World website, where many Oracle experts/gurus are contributing and sharing knowledge.

Beware April 2014 PSU and Golden Gate Integrated Capture Users

Don Seiler - Tue, 2014-06-03 14:55
When the latest quarterly Patch Set Update (PSU) came out from Oracle, we planned to get it in place as soon as reasonable due to our need to stay current with security patches, and my need to apply what I had assumed were well-tested bug fixes for our installations. However we were in for an unpleasant surprise.

We first applied the April 2014 PSU on our staging & development database hosts and things ran fine. After two weeks, we put it into production. The next morning we noticed that our Golden Gate extracts were abending with a message like this:

2014-05-13 01:41:44 ERROR OGG-02077 Extract encountered a read error in the asynchronous reader thread and is abending: Error code 600, error message:
ORA-00600: internal error code, arguments: [knlogcPackColList:101], [1], [], [], [],[], [], [], [], [], [], []. 

Obviously the PSU was the only change from the day before when these GoldenGate extracts ran fine. The error itself seemed to match Bug 16306373 (OGG Lightweight capture fails with ORA-600 [knlogcPackColList:101]), which affects integrated capture extracts, which was what we were trying to use.

So we had two questions to answer:

  1. Was it really the PSU, and if so do we need to rollback?
  2. Why didn't we see this in development/staging?

The answer to #2 came pretty quickly: the extracts had been stopped prior to applying the PSU in development and were never restarted. Our use of GoldenGate is for a new project that is still not quite solid in form, and so the extracts might be shut down for weeks at a time. However the DBA team was not aware of this (something that will be added to the checklist in the future) and so that part was never tested. We decided to turn the extracts on and, sure enough, we saw the problem within 30 minutes.

As far as #1, we opened an SR with Oracle Support. At first they assured us it wasn't the PSU, but within a few hours had started backing away from that confidence and by the end of the day were suggesting we roll back. However we decided to hold off on that due to the luxury of not needing those extracts to run quite yet (we had just been capturing data in production to make sure we could handle the workload). That gave us and Oracle Support some breathing room to get it fixed.

Eventually Oracle Support did confirm that it was Bug 16306373. The patch for this bug, however, conflicted with another patch we had installed. So they created a new merge patch MLR 18689166 for us. This was applied that evening to just our development database home, and immediately we saw more problems.

First, regularly-scheduled materialized view refreshes were failing with:

ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SNAPSHOT" 

Then we saw DataPump exports failing with:

ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOGREP_UTIL" 

On top of that, GoldenGate gave us these errors trying to unregister the extract:

GGSCI (stagingdb) 14> unregister extract ext1ol database
ERROR: Cannot register or unregister EXTRACT EXTFOO because of the following SQL error: OCI Error ORA-01775: looping chain of synonyms (status = 1775). 

It definitely looked like something in that patch had hosed the catalog. Oracle quickly found the problem:

The problem with the invalid dictionary objects after the installation of patch 18689166 is due to the execution of the script e1102000.sql, which is called from postinstall.sql. e1102000.sql is a downgrade script and removes some objects from the data dictionary, resulting in other dependent objects becoming invalid. 
It is here that I should point out that Oracle had this merge patch in their QA for over 4 days. I could have seriously hurt myself from simultaneously rolling my eyes and face-palming as violently as I did.

The fix was to restart the instance in UPGRADE mode and run the catupgrd.sql script from $ORACLE_HOME/rdbms/admin, which rebuilds the catalog. We confirmed that this fixed the problem in DEV, and proceeded to patch the other databases, this time commenting out the call to e1102000.sql in the postinstall.sql script, per Oracle's workaround.

So, not only did the April 2014 PSU introduce a bug, but then the patch for that bug made things incredibly worse. We were fortunate in that we did not yet need the GoldenGate extracts in production. However many others will not have that luxury and will want to double-check that bug and the PSU if they're using GoldenGate and integrated capture.
Categories: DBA Blogs

Hot off the press : Latest Release of Oracle Enterprise Manager 12c (R4)

Pankaj Chandiramani - Tue, 2014-06-03 07:53

Read more here about the PRESS RELEASE:  Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

Service Catalog for Database and Middleware as a Service; Enhanced
Database and Middleware Management Help Drive Enterprise-Scale Private
Cloud Adoption

In the coming weeks, I will be covering topics like:

  1. DbaaS Service Catalog incorporating High Availability and Disaster Recovery

  2. New Rapid Start kit

  3. Other new Features 

Stay Tuned !

Categories: DBA Blogs

The Hitchhiker’s Guide to the EXPLAIN PLAN (Act I)

Iggy Fernandez - Mon, 2014-06-02 20:17
On the Toad World site, I’m publishing a whole series of blog posts and articles on the subject of EXPLAIN PLAN. I’m using EXPLAIN PLAN as a central motif to teach not just SQL tuning but relational theory, logical database design, and physical database design. In a year’s time, I hope to have enough material for […]
Categories: DBA Blogs

Guest Post: Custom Balancing by 2 segments in EBS R12

David Haimes - Mon, 2014-06-02 08:15

Sangeeta Sameer, IT Leader – Finance, Renewables ERP at GE Power & Water

This is a guest post from Sangeeta Sameer, who worked for Oracle for 6 years before moving on to work for GE, where she has held a number of senior positions.  She has also joined the board of the OAUG Multinational Special Interest Group.

At the Collaborate 2014 Conference, David Haimes and I presented a custom solution to enable balancing by 2 segments of the Chart of Accounts in Release 12.1.3 of Oracle e-Business Suite. The attached PowerPoint and Paper that we presented describes in detail the requirements and the custom solution.

So when David asked me to write a guest post about this topic, I thought about what else I would include in the paper if we were writing it again. While the paper talks about the requirement to create a balanced Trial Balance by 2 segments, it does not expand on why we need a balanced Trial Balance by 2 segments. Global corporations like General Electric (GE) need better granularity and accountability for the financial performance of their geographical divisions and lines of businesses by being able to create a Balance Sheet at both a Legal Entity/Statutory level and Business Unit/Management Entity level. GE is a Global conglomerate that has diverse businesses operating in a large number of countries. GE needs the ability to create a balanced Trial Balance by the Legal Entity (LE) and the Management Entity (ME) segments in the Chart of Accounts.

The ability to automatically balance by more than one segment is not a new requirement for GE, and I think it would be a requirement for other multinational companies as well. This requirement is explained quite well in Oracle Fusion Applications Enterprise Structures Concepts Guide 11g Release 5 (11.1.5) – Part # E22899-05. See the section titled “Legal Entity and Its Relationship to Balancing Segments”. While Fusion Applications are able to balance by up to 3 segments in the Chart of Accounts, Oracle e-Business Suite (EBS) Applications are able to balance by one segment only.

In prior EBS Release 11i implementations at GE, the requirement to balance by 2 segments was satisfied by developing custom solutions that needed to be re-visited for Release 12. For example, one of the solutions in 11i was for a custom program to bring balancing entries into the Interface Table. With the introduction of SLA in Release 12, this 11i solution was no longer a good one for R12. The solution described in the attached Paper and PowerPoint is a much more robust solution, and is live in Production for 2 different ERP Projects at GE.

The OAUG Multinational Special Interest Group (SIG) is organizing a webinar on June 11th at 10.30 am Pacific (1.30 pm Eastern) on this topic. David and I will repeat the presentation from Collaborate 2014, and look forward to sharing this solution again. The webinar will be one hour long, including 20-25 minutes for Q&A.

The link to register for this webinar is:

Categories: APPS Blogs

Memories of the way we were...

Greg Pavlik - Sat, 2014-05-31 16:13
The fascinating thing about Hadoop is the obviousness of its evolutionary needs. For example, MapReduce coupled with reliable scale-out storage was a powerful - even revolutionary - development for organizations with both large volumes of data and multi-structured data. Out of the gate, Hadoop unlocked data "applications" that were for all intents and purposes unimplementable. At the same time, it didn't take much imagination to see that separating the compute model from resource management would be essential for future applications that did not fit well with MapReduce itself. It took a lot of work and care to get YARN defined, implemented and hardened, but the need for YARN itself was fairly obvious. Now it is here and Hadoop is no longer about "batch" data processing.

Note, however, it takes a lot of work to make the evolutionary changes available. In some cases, bolt on solutions have emerged to fill the gap. For key value data management, HBase is a perfect example. Several years ago, Eric Baldeschwieler was pointing out that HDFS could have filled that role. I think he was right, but the time it would take to get "HBase-type" functionality implemented via HDFS would have been a very long path indeed. In that case, the community filled the gap with HBase and it is being "back integrated" into Hadoop via YARN in a way that will make for a happier co-existence.

Right now we are seeing multiple new bolt on attempts to add functionality to Hadoop. For example, there are projects to add MPP databases on top of Hadoop itself. It's pretty obvious that this is at best a stop gap again - and one that comes at a pretty high price - I don't know of anyone that seriously thinks that a bolt on MPP is ultimately the right model for the Hadoop ecosystem. Since the open source alternatives look to be several years away from being "production ready", that raises an interesting question: is Hadoop evolution moving ahead at a similar or even more rapid rate to provide a native solution - a solution that will be more scalable, more adaptive and more open to a wider range of use cases and applications - including alternative declarative languages and compute models?

I think the answer is yes: while SQL on Hadoop via Hive is really the only open source game in town for production use cases - and it's gotten some amazing performance gains in the first major iteration on Tez that we'll talk more about in the coming days - it's clear that the Apache communities are beginning to deliver a new series of building blocks for data management at scale and speed: Optiq's Cost Based Optimizer; Tez for structuring multi-node operator execution; ORC and vectorization for optimal storage and compute; HCat for DDL. But what's missing? Memory management. And man has it ever been missing - that should have been obvious as well (and it was - one reason that so many people are interested in Spark for efficient algorithm development).

What we've seen so far has been two extremes available when it comes to supporting memory management (especially for SQL) - all disk and all memory. An obvious point here is that neither is ultimately right for Hadoop. This is a long-winded intro to point to two interrelated pieces by Julian Hyde and Sanjay Radia unveiling a model that is being introduced across multiple components called Discardable In-memory Materialized Query (DIMMQ). Once you see this model, it becomes obvious that the future of Hadoop for SQL - and not just SQL - is being implemented in real time. Check out both blog posts:



It's All About CPU But There Is NO CPU Bottleneck! What?

This page has been permanently moved. Please CLICK HERE to be redirected.

Thanks, Craig.

It's All About CPU But There Is NO CPU Bottleneck...

Diagnosing Oracle Database performance is like solving a puzzle. But what I really enjoy is coming up with performance solutions that are anti-intuitive for most people. You know, the ones where you can see people stop talking and actually think, "Why would Craig say that!?" In this posting, I delve into one of these situations.

The Situation
Just over a month ago I received an email from a former Australian Oracle Performance Firefighting student about their "CPU bottleneck." Based on a one-hour AWR report's Time Model Statistics section (which is based on the v$sys_time_model view), DB CPU was 95% of DB Time, leaving 5% for Oracle wait time. The top CPU-consuming SQL statements were easily identified, and work had begun on tuning them. By "tuning" I mean a small group of DBAs were working to reduce their CPU consumption. But something didn't feel right to him, so he emailed me about it.

My first question was, "Is there an OS CPU bottleneck?" That is, is the host (or virtual machine if you like) out of CPU resources? I was wondering if the average CPU utilization was 80% or higher. Why was I asking this? Read on...

DB Time Math
Over an interval of time, DB Time is all the Oracle server process CPU time (i.e., DB CPU) plus all the non-idle wait time.

Note: Oracle does not include background process CPU in DB Time. Why they don't include background process CPU time is an entirely different subject. And of course, we are trusting Oracle that what they consider "non-idle" wait time should actually be "non-idle" wait time. Again, that's an entirely different subject.

My former student knew the DB Time math and therefore understood that since DB CPU was pretty close to DB Time, there was little non-idle wait time. It was "all about CPU." Still, I wanted to know the wait event situation, because it provides clues about what is hindering performance and forces the DBA to review their work and think a little more.

OS CPU Utilization Math
In my Utilization On Steroids online video seminar, I demonstrate utilization math by shrinking myself down to the size of a water glass and also the size of a printer cartridge. (Like the motto says, "What Online Training Should Be!") In a nutshell, utilization can be viewed/understood as requirements divided by capacity. If a 500ml glass contains 250ml of water, the glass is 50% utilized, that is, 50% full.

As I present in my online seminar, there are two simple methods to figure OS CPU utilization using only the v$osstat view. One of these methods is based on the BUSY_TIME and IDLE_TIME statistics from the Operating System Statistics section of any AWR or Statspack report. The BUSY_TIME is the instance CPU consumption, that is, the CPU requirements. As I carefully explain in my seminar and in my classes, the capacity is actually equal to the BUSY_TIME plus the IDLE_TIME. That is:

Utilization = Requirements / Capacity

Utilization =  BUSY_TIME / ( BUSY_TIME + IDLE_TIME )

This is true unless the OS is updating OS structures incorrectly (e.g., /proc) or Oracle is not gathering the data from the OS correctly or frequently enough. And yes, I talk about how to check this in my seminar.
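Here is that utilization formula as a quick Python sketch, using made-up BUSY_TIME and IDLE_TIME deltas (the actual values would come from v$osstat, which reports them in centiseconds):

```python
# Hypothetical v$osstat deltas over the AWR interval, in centiseconds.
busy_time = 234_000   # BUSY_TIME delta: the CPU requirements
idle_time = 126_000   # IDLE_TIME delta

# Capacity is busy plus idle; utilization is requirements / capacity.
utilization = busy_time / (busy_time + idle_time)
print(f"OS CPU utilization: {utilization:.0%}")
```

With these sample values the result is 65%, matching what my student computed and what vmstat showed him.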

My former student did the utilization math. The average utilization was around 65%. Since this is the only Oracle instance doing any real work on the box, it was not a surprise to him that vmstat also indicated the OS CPU utilization was 65%.

Why Did I Ask About A CPU Bottleneck?
Suppose there are plenty of available CPU resources AND the SQL statements users are complaining about are doing nothing but buffer gets (i.e., logical reads). In other words, from a resource consumption perspective, it's all about CPU consumption. Is this possible? The answer is, YES!

If there were a single server process running this SQL statement, parallel query operations were not occurring, and there were four CPU cores, what would the CPU utilization be? Probably around 25% (plus overhead of 5% to 15%). That is because, on average, only one CPU core was processing the SQL statement!
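A quick back-of-the-envelope check of that 25% figure, with the overhead range treated as an assumption:

```python
cores = 4
busy_cores = 1.0   # one serial server process, no parallel query

base_util = busy_cores / cores            # one of four cores busy
overhead_low, overhead_high = 0.05, 0.15  # assumed background/OS overhead

print(f"base {base_util:.0%}, "
      f"likely {base_util + overhead_low:.0%}"
      f"-{base_util + overhead_high:.0%} with overhead")
```

One busy core out of four is 25%, so even a flat-out serial SQL statement leaves most of the box idle.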

If this SQL statement took "too long to run" adding more CPU power (specifically more cores) would not help.

As strange as it may seem, one performance-improving solution is to use the available CPU! Read on.

Use More CPU! Now!
Suppose a 10-core box (or VM if you prefer) was running at an average CPU utilization of 50%. This means that on average 5 of the 10 cores are busy. Now suppose a SQL statement takes 4 minutes to process without parallel query or very cool (but perhaps costly) application design. How long would the SQL statement take to run if it ran in four parallel streams?

It depends. We need to ensure there is available CPU power. Assuming each parallel stream consumes one CPU core, and the box has 10 cores with 5 available (remember, the average utilization is 50%), that leaves us with 5 "extra" cores. For our SQL statement to run in 4 parallel streams, we need three more CPU cores... and we have them!

If there is no parallelism overhead (ha! don't count on that!), the SQL statement would run in around 1 minute, 8 CPU cores would be active (5+3), and the average CPU utilization would be around 80% (8 cores required / 10 cores of capacity).
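The whole scenario reduces to simple arithmetic. Here it is as a Python sketch, assuming one core per parallel stream and zero parallelism overhead (an ideal-case assumption, as noted above):

```python
total_cores = 10
baseline_busy = 5      # 50% average utilization before our change
serial_minutes = 4.0   # serial run time of the SQL statement
streams = 4            # parallel streams, one core each

run_minutes = serial_minutes / streams    # ideal speedup
# The serial process's one core is replaced by four stream cores.
busy_cores = baseline_busy - 1 + streams
utilization = busy_cores / total_cores

print(f"run time: {run_minutes:.0f} min, "
      f"busy cores: {busy_cores}, utilization: {utilization:.0%}")
```

Four minutes of CPU consumed in one minute of wall time, at 80% utilization instead of 50%.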

So by creatively using available CPU resources, we were able to consume 4 minutes of CPU in a single minute! ...a beautiful example of parallelism. And of course, the users will be delighted and I may get a free pizza.

Be Anti-Intuitive
When a SQL statement, batch process or an Oracle process is constrained by CPU, yet there is plenty of CPU power available, the problem is probably a serialization issue. (Another possibility is some type of locking or blocking problem.) The solution can be found by parallelizing, thereby using the available CPU resources which will reduce the wall/clock/run time of the process.

If the above paragraph makes sense, then you understand what I'm talking about. If it does not make sense, stop, breathe, and re-read the previous paragraph or paragraphs. It will be worth your time.

This next paragraph is really important to understand:

It is important to understand we have NOT reduced the total SQL statement's CPU requirements. We have NOT increased the total SQL statement's CPU requirements. We have NOT tuned the SQL statement in the traditional sense by reducing the buffer gets (logical reads, logical IOs). What we have done is simply altered the situation by consuming the necessary CPU requirements within a shorter duration. (In our example, the four minutes of CPU time was consumed within a single minute.) We did this by taking a serial process and parallelizing it to use more of the available CPU resources.

Expand Our View Into The OS
Without understanding the OS situation we can easily misdiagnose the problem. And we can miss creative and powerful solutions.

Most DBAs immediately jump to tuning the SQL and tuning Oracle. While those are both valid technical solutions, there are others... like parallelizing to reduce run time while consuming more resources (in a shorter period of time).

Some DBAs, and more likely OS administrators, but for sure hardware vendors, may insist on getting more CPU power. While faster CPUs will help somewhat, adding more of the same CPU cores will likely do absolutely nothing... unless we can increase parallelism to use them.

Being able to determine the OS bottleneck is so important, I devote an entire chapter in my Oracle Performance Firefighting book on the subject. When I was learning to optimize Oracle systems, learning how to find the OS bottleneck significantly increased my value in the Oracle DBA market. Later when I ruminated on serialization and parallelization, I turned situations upside down by placing a larger (yet shorter-term) load on the system as a valid performance improving solution.

If performance is not what it needs to be AND there are available OS resources, consider using them to improve performance. It's a creative and rarely used solution.

Thanks for reading!

Categories: DBA Blogs

Do animals have souls?

FeuerThoughts - Thu, 2014-05-29 09:04
OK, first of all, don't tell me your answer to this question. That would make the rest of this post seem a bit rude.

Here is one of the dumbest questions I can ever imagine a person asking, much less answering:

Do animals have souls?

How utterly ridiculous.

No one knows what a soul is. No one knows what it looks like, what it means, whether or not it really exists.

Furthermore, we certainly have no idea - please allow me to repeat that because I think it is so fundamental to accept this as fact: we have no idea at all - of what is going on inside an animal’s head. Clearly, a whole lot is going on, if you take the time to pay attention to animals and think about what it takes to do what they do. But many of the things humans blithely state as fact regarding animals, such as “They don’t know the difference between right and wrong,” are fundamentally meaningless because we simply cannot know what is going on inside another creature’s mind. We just make the assumption that they are really super different from us in all the ways that matter - to us.

We are intelligent, moral, sentient. We are smart and they are dumb, brute animals. We are conscious, we have history, philosophy, nuclear power. What do animals have? Nothing!

Oh really? How do we know what animals have? Or even what “have” means to a butterfly or a snake or a black bear? Again, we really have no idea whatsoever what animals have, what they want, or how they would feel about killing others just to make themselves comfortable (something that we humans do every second of every day).

So we make the most self-serving assumption imaginable. We simply outright declare that other creatures have no souls, are not sentient. They are food or threat or benign, but they are not like us.

We will continue to reject the evidence of our senses, the clear demonstrations of sentience, of complex social structures, in other animals. That way we don’t have to feel bad about enslaving them and killing them. Think for just a moment about how smart pigs are, and then think about pig farms in which tens of thousands of these poor creatures live short miserable lives - brought into this world for the very purpose of slaughtering them for bacon. And then later a dam bursts and an entire town is swamped with pig feces from the refuse lake at the farm. Go, humans, go!

I sure am glad there wasn’t and isn’t a species of creature on this planet that's three times our size, extremely powerful and licking its lips at the prospect of a nicely smoked human torso. 

We do not know what goes on inside a pig’s head, but it sure seems like they can feel and express terror. 

So, yes, humans will keep on keeping on, keep on consuming, reproducing, and assuming. But that doesn't mean we can’t try to recover a shred, a mere shred, of our individual dignity by at least acknowledging what we are doing, and taking at least one step, no matter how small, to help heal our planet and our co-inhabitants.

We can start by acknowledging, accepting, that the thing that we believe makes us unique and special among all living things is simply an unknowable assumption we make. It is an arbitrary, self-serving action - and brings into question the very idea that humans can be considered moral creatures. 
Categories: Development

Does my GoPro Qualify as a Wearable Device?

David Haimes - Thu, 2014-05-29 08:44
Is this small camera a wearable?

Last week I attended a Wearables Design Jam organized by Ultan (@ultan) from the Oracle UX team (a future post is coming on the event) and since then I have been thinking about what makes a successful wearable device.  The most successful devices in the marketplace seem to be the fitness trackers and I have had great benefit from my fitbit, not least of all the inspiration for walking meetings.  A fitness tracker is not a new idea (I joke that a fitbit is just a $100 pedometer), but the wearable devices today make tracking your fitness so much easier.  No need to keep diaries of your activities, everything is there for you in a dashboard with all sorts of interesting analytics, goal setting, alerts etc.

The GoPro has similarities: it is a small personal camcorder with only memory, no tape, and no screen for viewing, similar to the Flip camera (which Cisco bought and then killed) and many other cameras. However, it takes action videos really well with very little effort and is pretty much indestructible, so you can take it anywhere. It produces professional-quality video and photos, including time-lapse videos, fast frame rates for smooth slow motion, and an ultra-wide field of view, so you rarely miss any action. You can also attach it to your body, bike and other helmets, cars, and of course surfboards. My model (Hero 3+ Black edition) has WiFi so I can control it from my iPhone or iPad and grab content from it easily, although that does drain the battery fast.

If you haven’t already seen it, you must check out the time-lapse video I took with the GoPro of the Team USA America's Cup yacht being installed at Oracle HQ. This was around 9,000 photos taken over 50+ hours compressed into a 2-minute video. I have also strapped it to my car to capture a journey down Highway 1, strapped it to the outside of my car, strapped it to a remote control car, strapped it to myself and my kids in a swimming pool, to my son’s bike, and to my daughter’s wheeled shoes. Now I am trying to think of some enterprise applications of this amazing device, so watch this space.

Finally, I’m embedding a shot from my car, especially for Jake (@jkuramot) who always likes to know how my car is doing, plus a few other interesting (IMHO) shots.

Good for 100 mph plus


UPDATE (5/28/14):

I was reading a post from my colleague John Cartan this morning about his experience with the Narrative Clip, which is a smaller device intended to be used for life logging. It is worn all the time and captures a picture every 30 seconds. The experience did not seem to be great, but I think a GoPro, although a little bigger, would be much better for this purpose. I could imagine wearing it all day at a conference, say, or in a series of meetings, brainstorming sessions, etc.; it would give a really good replay of events. I will probably try some of this in the coming weeks, just to see how interesting it is.

Categories: APPS Blogs

Maker Faire, Java and the Internet of Things

David Haimes - Wed, 2014-05-28 11:00

Maker Faire is a popular event here in the San Francisco Bay Area. I’ve been going with my kids for several years now, and all of us look forward to it every year. I describe it as everything from embroidery to fire-breathing robots; anything you can make is there, and everyone is passionate about making stuff and enjoys sharing that passion.

However, this year I was there as staff with the Apps Lab, who were providing a DIY activity for the Java team, a big supporter of Maker Faire; Jake explains in detail in his blog. My summary is that we showed the use of embedded Java on Raspberry Pis to control a variety of devices with input from other devices; for example, I could use a motion sensor to turn on a fan, send a tweet, or send an SMS message. It was an interesting experience showing this off to people, explaining what we mean by the Internet of Things, and seeing people’s reactions. There is a huge range of ages and types of people at Maker Faires, so it was cool to see young and old alike enjoying making things happen. The most interesting thing to see first hand is how young children (as young as 4 or 5) can just figure out how to do things with minimal guidance, and in fact seem to prefer to explore on their own; older people are much quicker to get help or listen to me explain rather than do it. I have no idea what this means, if anything, but it was interesting to me at least.

Categories: APPS Blogs

SQLDev: How unshared worksheets should work!

Barry McGillin - Wed, 2014-05-28 08:45
Unshared worksheets are created to have a private connection to the database.  When that unshared worksheet is closed, the connection and session for it should disappear as well.

This graphic shows what should happen!

