Feed aggregator

HGV Levy

Tony Andrews - Wed, 2014-04-16 05:45
The UK government has introduced a new service for foreign lorry drivers to pay a levy to use UK roads: https://www.hgvlevy.service.gov.uk/. It was built by my current employer, Northgate Information Solutions. Guess what technology it runs on? We had a lot of interesting challenges when building this: compliance with UK Government styling and standards; responsive design to work on ... (full post: http://tonyandrews.blogspot.com/2014/04/hgv-levy.html)

April 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-04-15 15:04

Hello, this is Eric Maurice again.

Oracle today released the April 2014 Critical Patch Update.  This Critical Patch Update provides fixes for 104 vulnerabilities across a number of product lines including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Supply Chain Product Suite, Oracle iLearning, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Java SE, Oracle and Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.  A number of the vulnerabilities fixed in this Critical Patch Update have high CVSS Base Score and are being highlighted in this blog entry.  Oracle recommends this Critical Patch Update be applied as soon as possible.

Out of the 104 vulnerabilities fixed in the April 2014 Critical Patch Update, 2 were for the Oracle Database.  The most severe of these database vulnerabilities received a CVSS Base Score of 8.5 for the Windows platform to denote a full compromise of the targeted system, although a successful exploitation of this bug requires authentication by the malicious attacker.  On other platforms (e.g., Linux, Solaris), the CVSS Base Score is 6.0, because a successful compromise would be limited to the Database and not extend to the underlying Operating System.  Note that Oracle reports this kind of vulnerability with the ‘Partial+’ value for Confidentiality, Integrity, and Availability impact (Partial+ is used when the exploit affects a wide range of resources, e.g. all database tables).  Oracle makes a strict application of the CVSS 2.0 standard, and as a result, the Partial+ does not result in an inflated CVSS Base Score (CVSS only provides for ‘None,’ ‘Partial,’ or ‘Complete’ to report the impact of a bug).  This custom value is intended to call customers’ attention to the potential impact of the specific vulnerability and enable them to manually increase this severity rating if they wish.  For more information about Oracle’s use of CVSS, see http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html.

This Critical Patch Update also provides fixes for 20 Fusion Middleware vulnerabilities.  The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5.  This score applies to a vulnerability in Oracle WebLogic Server (CVE-2014-2470) that is remotely exploitable without authentication.  If successfully exploited, this vulnerability can result in a wide compromise of the targeted WebLogic Server (Partial+ rating for Confidentiality, Integrity, and Availability.  See previous discussion about the meaning of the ‘Partial+’ value reported by Oracle). 

Also included in this Critical Patch Update were fixes for 37 Java SE vulnerabilities.  4 of these Java SE vulnerabilities received a CVSS Base Score of 10.0.  29 of these 37 vulnerabilities affected client-only deployments, while 6 affected client and server deployments of Java SE.  Rounding up this count were one vulnerability affecting the Javadoc tool and one affecting unpack200.  As a reminder, desktop users, including home users, can leverage the Java Autoupdate or visit Java.com to ensure that they are running the most recent version of Java.  Java SE security fixes delivered through the Critical Patch Update program are cumulative.  In other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes.   Oracle strongly recommends that Java users, particularly home users, keep up with Java releases and remove obsolete versions of Java SE, so as to protect themselves against malicious exploitation of Java vulnerabilities. 

This Critical Patch Update also included fixes for 5 vulnerabilities affecting Oracle Linux and Virtualization products suite.  The most severe of these vulnerabilities received a CVSS Base Score of 9.3, and this vulnerability (CVE-2013-6462) affects certain versions of Oracle Global Secure Desktop. 

Due to the relative severity of a number of the vulnerabilities fixed in this Critical Patch Update, Oracle strongly recommends that customers apply this Critical Patch Update as soon as possible.  In addition, as previously discussed, Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update.  However, it is often the case that earlier versions of affected releases are affected by vulnerabilities fixed in recent Critical Patch Updates.  As a result, it is highly desirable that organizations running unsupported versions, for which security fixes are no longer available under Oracle Premier Support, update their systems to a currently-supported release so as to fully benefit from Oracle’s ongoing security assurance effort.

For more information:

The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html

More information about Oracle’s application of the CVSS scoring system is located at http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html

An Ovum white paper “Avoiding security risks with regular patching and support services” is located at http://www.oracle.com/us/corporate/analystreports/ovum-avoiding-security-risks-1949314.pdf

More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html

The details of the Common Vulnerability Scoring System (CVSS) are located at http://www.first.org/cvss/cvss-guide.html. 

Java desktop users can verify that they are running the most recent version of Java and remove older versions of Java by visiting http://java.com/en/download/installed.jsp.

 

 

Frequently Misused Metrics in Oracle

Steve Karam - Tue, 2014-04-15 14:43

Back in March of last year I wrote an article on the five frequently misused metrics in Oracle: These Aren’t the Metrics You’re Looking For.

To sum up, my five picks for the most misused metrics were:

  1. db file scattered read – Scattered reads aren’t always full table scans, and they’re certainly not always bad.
  2. Parse to Execute Ratio – This is not a metric that shows how often you’re hard parsing, no matter how many times you may have read otherwise.
  3. Buffer Hit Ratio – I want to love this metric, I really do. But it’s an advisory one at best, horribly misleading at worst.
  4. CPU % – You license Oracle by CPU. You should probably make sure you’re making the most of your processing power, not trying to reduce it.
  5. Cost – No, not money. Optimizer cost. Oracle’s optimizer might be cost based, but you are not. Tune for time and resources, not Oracle’s own internal numbers.

Version after version, day after day, these don’t change much.

Anyways, I wanted to report to those who aren’t aware that I created a slideshow based on that blog post for RMOUG 2014 (which, sadly, I was unable to attend at the last moment). Have a look and let me know what you think!

Metric Abuse: Frequently Misused Metrics in Oracle

Have you ever committed metric abuse? Gone on a performance tuning snipe hunt? Spent time tuning something that, in the end, didn’t even really have an impact? I’d love to hear your horror stories.

Also while you’re at it, have a look at the Sin of Band-Aids, and what temporary tuning fixes can do to a once stable environment.

And lastly, keep watching #datachat on Twitter and keep an eye out for an update from Confio on today’s #datachat on Performance Tuning with host Kyle Hailey!

The post Frequently Misused Metrics in Oracle appeared first on Oracle Alchemist.

RMAN Redundancy is not a Viable Retention Policy

Don Seiler - Tue, 2014-04-15 14:07

Originally posted by me on the Pythian blog. This is an older post that I somehow forgot to post on my own blog, but another recent redundancy foul-up reminded me of it.

The story you are about to read is based on actual events. Names and paths have been changed to protect the innocent. I call this scenario “The Perfect Storm” because it took just the right combination of events and configurations. Sadly, this doesn’t make it an unlikely occurrence, so I’m posting it here in hopes that you’ll be able to save yourselves before it’s too late.

I have always had a preternatural dislike for using REDUNDANCY as a retention policy for Oracle RMAN, greatly preferring RECOVERY WINDOW instead, simply because REDUNDANCY doesn’t really guarantee anything valuable to me, whereas RECOVERY WINDOW guarantees that I’ll be able to do a point-in-time recovery to anytime within the past x days. Plus, I had already been burned once by a different client using REDUNDANCY. With the story I’m about to tell, this dislike has turned into violent hatred. I’m going to be light on the technical details, but I hope you’ll still feel the full pain.
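For reference, this is roughly what the two retention policy configurations look like in RMAN (the seven-day window below is just an illustrative value, not a recommendation from this post):

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

With a recovery window, a backup only becomes obsolete once it is no longer needed to recover to any point inside the window, which is exactly the guarantee REDUNDANCY does not give you.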



First some table setting:
  • Standalone 10.2.0.2 instance (no RAC, no DataGuard/Standby)
  • RMAN retention policy set to REDUNDANCY 2
  • Backups stored in the Flash Recovery Area (FRA)
A few months ago, we had a datafile corruption on this relatively new instance (data had been migrated from an old server about a week prior). The on-call DBA followed up the page by checking for corruptions in the datafile with this command:

RMAN> backup check logical datafile '/path/to/foobar_data.dbf';

This, my friends, led to the major fall, though we did not know it for many hours. You see, the FRA was already almost full. This causes the FRA to automatically delete obsolete files to free up space. That last backup command, while only intended to check for logical corruption, did actually perform a backup of the file, and rendered the earliest backup of the file obsolete since there were two newer copies. That earliest file happened to be from the level 0 backup from which we would later want to restore.
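As an aside, if the intent is only to check a datafile for corruption without producing a new backup piece (and therefore without aging out older backups), RMAN has a validate-only form of the same command; the path below is the same placeholder used above:

RMAN> backup validate check logical datafile '/path/to/foobar_data.dbf';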

Of course, at first we didn’t know why the file was missing. Logs showed that it was on disk no less than two hours before the problem started. Later, scanning the alert log for the missing backup filename yielded this:

Deleted Oracle managed file /path/to/flash_recovery_area/FOO_DB/backupset/2008_12_01/o1_xxxx.bkp

Oracle deleted the one backup file that we needed!

Even worse, it wasn’t until this time on a Monday night that we realized that the level 0 taken the previous weekend had failed to push the backup files to tape because of a failure on the NetBackup server. The problem was reported as part of Monday morning’s routine log checks, but the missing files had not yet been pushed to tape.

In the end, we were able to drop and restore the tablespace to a previous point in time on a test instance from another backup file and exp/imp data back over. It was ugly, but it got things back online. Many DBAs better than myself gave their all on this mission.

To summarize, the ingredients:
  • Oracle RMAN
  • CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
  • Flash Recovery Area near full, obediently deleting obsolete files.
  • Tape backup failure
Add in an innocent backup command and . . . BOOM! Failure Surprise.

The two biggest points to take away are:

  • Tape backup failures are still serious backup failures and should be treated as such, even if you back up to disk first.
  • REDUNDANCY is not a viable retention policy. In my house, it is configuration non grata.
Categories: DBA Blogs

JasperReportsIntegration - Full path requirement and workaround

Dietmar Aust - Mon, 2014-04-14 17:38
I have just posted an answer to a question about what seems like a bug in the JasperReportsIntegration toolkit: you have to use absolute paths for referencing images or subreports, which is typically a bad thing.

Don't know exactly why it doesn't work, but there is a workaround for that: http://www.opal-consulting.de/site/jasperreportsintegration-full-path-requirement-and-workaround/

Hope that helps,
~Dietmar.

First blogpost at my own hosted wordpress instance

Dietmar Aust - Mon, 2014-04-14 16:50
I have been blogging at daust.blogspot.com for quite some years now ... and many people have preferred WordPress to Blogspot. I can now understand why :).

It is quite flexible and easy to use and there are tons of themes available ... really cool ones.

The main decision to host my own WordPress instance was in the end motivated by something else: I have created two products and they needed a platform to be presented on.
First I wanted to buy a new theme from themeforest and build an APEX theme for that ... but this is a lot of work.

I then decided to host my content using WordPress since I had already bought a new theme: http://www.kriesi.at/themedemo/?theme=enfold

And this one has a really easy setup procedure for wordpress and comes with a ton of effects and wizards, cool page designer, etc.

Hopefully this will get me motivated to post more frequently ... we will see ;).

Cheers,
~Dietmar.

Unique identifiers - but what do they identify

Gary Myers - Fri, 2014-04-11 23:39
Most of the readers of this blog will be developers, or DBAs, who got the rules of Normalisation drummed into them during some phase of their education or training. But often we get to work with people who don't have that grounding. This post is for them. Feel free to point them at it.

Through normalisation, the tendency is to start with a data set, and by a methodical process extract candidate keys and their dependent attributes. In many cases there isn't a genuine or usable candidate key and artificial / surrogate keys need to be generated. While your bank can generally work out who you are based on your name and address, either of those could change and so they assign you a more permanent customer or account number.

The difficulty comes when those identifiers take on a life of their own. 

Consider the phone number. When I dial my wife's phone number, out of all the phones in Australia (or the world), it is hers alone that will ring. Why that one?

In the dark ages, the phone number would indicate a particular exchange and a copper wire leading out of that exchange hard wired to a receiver (or a set of receivers in the case of Party Lines).  Now all the routing is electronic, telephones can be mobile and the routing for calls to a particular number can be changed in an instant. A phone number no longer identifies a device, but a service, and a new collection of other identifiers have risen up to support the implementation of that service. An IMEI can identify a mobile handset and the IMSI indicates a SIM card from a network provider, and we can change the SIM card / IMSI that corresponds to a phone number, or swap SIM cards between handsets. Outside the cellular world, VOIP can shunt 'phone number' calls around innumerable devices using IP addresses. 

Time is another factor. While I may 'own' a given phone number at a particular time, I may give that up and someone else might take it over. That may get represented by adding dates, or date ranges to the key, or it can be looked at as a sequence. For example, Elizabeth Taylor's husband may indicate one of seven men depending on context. The "fourth husband" or "her husband on 1st Jan 1960" would be Eddie Fisher.
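A minimal sketch of how that time-bounded ownership might be modelled (the table and column names here are invented purely for illustration):

CREATE TABLE phone_service_assignment
(
  phone_number   VARCHAR2(20) NOT NULL
, customer_id    NUMBER       NOT NULL
, effective_from DATE         NOT NULL
, effective_to   DATE                      -- NULL means the assignment is still current
, CONSTRAINT phone_service_assignment_pk PRIMARY KEY (phone_number, effective_from)
);

The natural key is no longer the phone number on its own, but the phone number plus the period for which it was assigned.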

Those without a data modelling background that includes normalisation may flinch at the proliferation of entities and tables in a relational environment. As developers and architects look at newer technologies some of the discipline of the relational model will be passed over. Ephemeral transactions can cluster the attributes together in XML or JSON formats with no need for consistency of data definitions beyond the period of processing. Data warehousing quickly discarded relational formats in favour of 'facts' and 'dimensions'. 

The burden of managing a continuous and connected set of data extending over a long period of time, during which the identifiers and attributes morph, is an ongoing challenge in database design.

HeartBleed and Oracle

Fuad Arshad - Fri, 2014-04-11 08:23
There are a lot of people asking about Heartbleed and how it has impacted the web.
Oracle has published  MOS Note 1645479.1 that talks about all the products impacted and if and when fixes will be available.
The following blog post is also a good reference about the vulnerability.  https://blogs.oracle.com/security/entry/heartbleed_cve_2014_0160_vulnerability



The Riley Family, Part III

Chet Justice - Thu, 2014-04-10 21:44


That's Mike and Lisa, hanging out at the hospital. Mike's in his awesome cookie monster pajamas and robe...must be nice, right? Oh wait, it's not. You probably remember why he's there, Stage 3 cancer. The joys.

In October, we helped to send the entire family to Game 5 of the World Series (Cards lost, thanks Red Sox for ruining their night).

In November I started a GoFundMe campaign, to date, with your help, we've raised $10,999. We've paid over 9 thousand dollars to the Riley family (another check to be cut shortly).

In December, Mike had surgery. Details can be found here. Shorter: things went fairly well, then they didn't. Mike spent 22 days in the hospital and lost 40 lbs. He missed Christmas and New Years at home with his family. But, as I've learned over the last 6 months, the Riley family really knows how to take things in stride.

About 6 weeks ago Mike started round 2 of chemo, he's halfway through that one now. He complains (daily, ugh) about numbness, dizziness, feeling cold (he lives in St. Louis, are you sure it's not the weather?), and priapism (that's a lie...I hope).

Mike being Mike though, barely a complaint (I'll let you figure out where I'm telling a lie).

Four weeks ago, a chilly (65) Saturday night, Mike and Lisa call. "Hey, I've got some news for you."

"Sweet," I think to myself. Gotta be good news.

"Lisa was just diagnosed with breast cancer."

WTF?

ARE YOU KIDDING ME? (Given Mike's gallows humor, it's possible).

"Nope. Stage 1. Surgery on April 2nd."

FFS

(Surgery was last week. It went well. No news on that front yet.)

Talking to the two of them that evening you would have no idea they BOTH have cancer. Actually, one of my favorite stories of the year...the hashtag for the Riley Family campaign was #fmcuta. Fuck Mike's Cancer (up the ass). I thought that was hilarious, but I didn't think the Rileys would appreciate it. They did. They loved it. I still remember Lisa's laugh when I first suggested it. They've dropped the latest bad news and Lisa is like, "Oh, wait until you hear this. I have a hashtag for you."

"What is it?" (I'm thinking something very...conservative. Not sure why, I should know better by now).

#tna

I think about that for about .06 seconds. Holy shit! Did you just say tna? Like "tits and ass?"

(sounds of Lisa howling in the background).

Awesome. See what I mean? Handling it in stride.

"We're going to need a bigger boat." All I can think about now is, "what can we do now?"

First, I raised the campaign goal to 50k. This might be ambitious, that's OK, cancer treatments are expensive enough for one person, and 10K (the original amount) was on the low side. So...50K.

Second, Scott Spendolini created a very cool APEX app, ostensibly called the Riley Support Group (website? gah). It's a calendar/scheduling app that allows friends and family to coordinate things like meals, young human (children) care and other things that most of us probably take for granted. Pretty cool stuff. For instance, Tim Gorman provides pizza on Monday nights (Dinner from pizza hut...1 - large hand-tossed cheese lovers, 1 - large thin-crispy pepperoni, 1 - 4xpepperoni rolls, 1 - cheesesticks).

Third. There is no third.

So many of you have donated your hard earned cash to the Riley family, they are incredibly humbled by, and grateful for, everyone's generosity. They aren't out of the woods yet. Donate more. Please. If you can't donate, see if there's something you can help out with (hit me up for details, Tim lives in CO, he's not really close). If you can't do either of those things, send them your prayers or your good thoughts. Any and all help will be greatly appreciated.
Categories: BI & Warehousing

‘Heartbleed’ (CVE-2014-0160) Vulnerability in OpenSSL

Oracle Security Team - Thu, 2014-04-10 13:44

Hi, this is Eric Maurice.

A vulnerability affecting certain versions of the OpenSSL libraries was recently publicly disclosed.  This vulnerability has received the nickname ‘Heartbleed’ and the CVE identifier CVE-2014-0160. 

Oracle is investigating the use of the affected OpenSSL libraries in Oracle products and solutions, and will provide mitigation instructions when available for these affected Oracle products. 

Oracle recommends that customers refer to the 'OpenSSL Security Bug - Heartbleed CVE-2014-0160' page on the Oracle Technology Network (OTN) for information about affected products, availability of fixes and other mitigation instructions.  This page will be periodically updated as Oracle continues its assessment of the situation.   Oracle customers can also open a support ticket with My Oracle Support if they have additional questions or concerns.

 

For More Information:

The CVE-2014-0160 page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html

The Heartbleed web site is located at http://heartbleed.com/.  This site is not affiliated with Oracle and provides a list of affected OpenSSL versions.

The My Oracle Support portal can be accessed by visiting https://support.oracle.com

 

_direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables in 11.2.0.3 (Part 2)

Mihajlo Tekic - Thu, 2014-04-10 01:30
This is continuation of my last post regarding direct path reads on partitioned tables in Oracle 11.2.0.3.

To recap, the behavior I observed is that direct path reads will be performed if the number of blocks across all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions, each having 100 blocks, and a query goes after two of the partitions, direct path reads will be performed if _small_table_threshold is lower than 200.

Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be directly scanned. So, it is an all-or-nothing situation.

I also indicated that the _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.

What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables the direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE, Oracle will use segment headers to determine how many blocks the segment has (read Tanel Poder's blogpost for more information).

Let’s see how queries that access table partitions (full scan) behave if the _direct_read_decision_statistics_driven parameter is set to FALSE in 11.2.0.3. My expectation was that it should be the same as if it was set to TRUE. I thought that once Oracle gets information about the number of blocks in each of the partitions it would use the same calculation as if the parameter was set to TRUE. Let’s see.

But, before moving forward, a small disclaimer: do not perform these tests in production or any other important environment. Changing undocumented parameters should be done under the guidance of Oracle Support. The information presented here is for demonstration purposes only.

I will use the same table, TEST_PART, that I used in my earlier post.

I started with flushing the buffer cache (to make sure none of the partitions has blocks in the cache).

I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from the PART_1 partition only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.

_small_table_threshold in my sandbox environment was set to 117.


SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> SELECT count(1) FROM test_part WHERE col1 in (1);

COUNT(1)
----------
4000


As expected, no direct path reads were performed (I used my sese.sql script, which scans v$sesstat for statistics that match a given keyword).


SQL> @sese direct

no rows selected


Now let’s see what happens with a query that accesses the first two partitions. Remember, if the _direct_read_decision_statistics_driven parameter is set to TRUE, this query would perform direct path reads because the number of blocks in both partitions, 130 (2x65), exceeds the _small_table_threshold value (117).


SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

no rows selected


No direct reads. Definitely different compared to when _direct_read_decision_statistics_driven was set to TRUE.

How about for a query that accesses three partitions:


SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

no rows selected


Still no direct path reads.

How about if we access all 7 partitions:


SQL> select count(1) from test_part where col1 in (1,2,3,4,5,6,7);

COUNT(1)
----------
28000

SQL> @sese direct

no rows selected


No direct path reads.

So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis. If the number of blocks in the partition is less than or equal to _small_table_threshold, the buffer cache will be used; otherwise direct path reads will be performed.

What if some of the partitions were already cached in the buffer cache?

In the next test I’ll:
  • Flush the buffer cache again
  • Set _direct_read_decision_statistics_driven to FALSE
  • Run a query that accesses the first two partitions
  • Decrease the value for _small_table_threshold to 60
  • Run a query that accesses the first three partitions.
  • Check if direct path reads were performed and how many
With this test I’d like to see if Oracle will utilize the buffer cache if the segment data is cached and the number of blocks in the partition is greater than _small_table_threshold.


SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

no rows selected


At this point, PART_1 and PART_2 partitions should be entirely in the buffer cache. If you want, you could query X$KCBOQH to confirm this (from a different session logged in as SYS).


SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66


As expected, both partitions are in the buffer cache.

Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:


SQL> alter session set "_small_table_threshold"=60;

Session altered.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

alter session set events '10046 trace name context off';

SQL> @sese direct

SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
9 76 STAT.consistent gets direct 65
9 81 STAT.physical reads direct 65
9 380 STAT.table scans (direct read) 1


Here they are, 65 direct path reads, one table scan (direct read) which means one of the partitions was scanned using direct path reads. Which one? Yes, you are right, the one that is not in the buffer cache (PART_3 in this example).

If you query X$KCBOQH again you can see that only one block of PART_3 is in the cache. That is the segment header block.


SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 1 <===


This means that when _direct_read_decision_statistics_driven is set to FALSE, in 11.2.0.3, Oracle uses a totally different calculation compared to the one used when the parameter is set to TRUE (see my earlier post).

Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules described in Alex Fatkulin's blogpost. That is, if any of the following is true, Oracle will scan the data in the buffer cache; otherwise direct path reads will be performed (a pseudocode restatement follows the list): 
  •  the number of blocks in the segment is lower than or equal to _small_table_threshold 
  •  at least 50% of the segment data blocks are in the buffer cache
  •  at least 25% of the data blocks are dirty 
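Put another way, here is my own restatement of those rules as applied per partition when the parameter is FALSE (a summary of the observations above, not Oracle's actual code):

FOR each partition segment accessed by the full scan
    IF (number of blocks in the partition segment <= _small_table_threshold)
       OR (at least 50% of the segment's blocks are already in the buffer cache)
       OR (at least 25% of the segment's blocks are dirty) THEN
         scan that partition through the buffer cache
    ELSE
         perform direct path reads for that partition
    END IF
END LOOP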
The conclusion so far is that in 11.2.0.3, you may observe different behavior for queries that access table partitions using full table scans if you decide to change the _direct_read_decision_statistics_driven parameter.

I will stop here. I ran the same tests against 11.2.0.4 and 12.1.0.1 and noticed some differences in the behavior compared to the one I just wrote about (11.2.0.3). I will post these results in the next few days.

Stay tuned...



SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 1

Galo Balda's Blog - Wed, 2014-04-09 23:45

In my previous post, I showed how to clone a GitHub repository using SQL Developer. In this post I'm going to show how to synchronize the remote and local repositories after the remote gets modified.

Here I use GitHub to commit a file called sp_test_git.pls.  You can create files by clicking on the icon the red arrow is pointing to.

new_file

The content of the file is a PL/SQL procedure that prints a message.

file_content

At this point, the remote repository and the local repository are out of sync. The first thing that you may want to do before modifying any repository is to make sure that you have the most current version of it, so that it includes the changes made by other developers. Let's synchronize remote and local.

Make sure you open the Versions window. Go to the main menu and click on Team -> Versions.

versions

Open the Local branch and click on master, then go to the main menu and click on Team -> Git -> Fetch to open the “Fetch from Git” wizard. Fetching a repository copies changes from the remote repository into your local system, without modifying any of your current branches. Once you have fetched the changes, you can merge them into your branches or simply view them. We can see the changes on the Branch Compare window by going to the main menu and clicking on Team -> Git -> Branch Compare.

branch_compare

 Branch Compare is showing that sp_test_git.pls has been fetched from the remote master branch. We can right click on this entry and select compare to see the differences.

compare

The window on the left displays the content of the fetched file and the window on the right displays the content of the same file in the local repository. In this case the right window is empty because this is a brand new file that doesn't exist locally. Let's accept the changes and merge them into the local repository. We go to the Branch Compare window, right click on the entry, select merge and click on the “Ok” button.

merge

Now the changes should have been applied to the local repository.

local_update

We can go to the path where the local repository is located and confirm that sp_test_git.pls is there.
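For those who prefer the command line, the equivalent workflow with plain git would look roughly like this (assuming the remote is called origin and the branch is master, as in the screenshots, and using the same file name as above):

git fetch origin
git diff master origin/master -- sp_test_git.pls
git merge origin/master

SQL Developer's Fetch, Branch Compare and Merge actions map onto these three commands.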

 

 


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control
Categories: DBA Blogs

Oracle Application Express 4.2.5 now available

Joel Kallman - Wed, 2014-04-09 14:50
Oracle Application Express 4.2.5 is now released and available for download.  If you wish to download the full release of Oracle Application Express 4.2.5, you can get it from the Downloads page on OTN.  If you have Oracle Application Express 4.2, 4.2.1, 4.2.2, 4.2.3 or 4.2.4 already installed, then you need to download the APEX 4.2.5 patch set from My Oracle Support.  Look up patch number 17966818.

As is stated in the patch set note that accompanies the Oracle Application Express 4.2.5 patch set:
  • If you have Oracle Application Express release 4.2, 4.2.1, 4.2.2, 4.2.3 or 4.2.4 installed, download the Oracle Application Express 4.2.5 patch set from My Oracle Support and apply it.  Remember - patch number 17966818.
  • If you have Oracle Application Express release 4.1.1 or earlier installed (including Oracle HTML DB release 1.5), download and install the entire Oracle Application Express 4.2.5 release from the Oracle Technology Network (OTN).
  • If you do not have Oracle Application Express installed, download and install the entire Oracle Application Express 4.2.5 release from the Oracle Technology Network (OTN).
As usual, there are a large number of issues corrected in the Application Express 4.2.5 patch set.  You can see the full list in the patch set note.

Some changes in the Oracle Application Express 4.2.5 patch set:
  1. A number of bug fixes and functionality additions to many of the Packaged Applications.
  2. One new packaged application - Live Poll.  This was the creation of Mike Hichwa.  Live Poll is intended for real-time, very brief polling (in contrast to a formal survey, which can be created and administered via Survey Builder).
  3. One new sample application - the Sample Geolocation Showcase, created by Oracle's Carsten Czarski, who did a masterful job in demonstrating how Oracle's spatial capabilities (via Oracle Locator) can be easily exploited in an Oracle Application Express application.  Try it for yourself today on apex.oracle.com!
  4. A handful of bug fixes in the underlying Application Express engine and APIs.

APEX 4.2.5 should be the end of the line for Oracle Application Express 4.2.x.

On Error Messages

Chen Shapira - Wed, 2014-04-09 14:01

Here’s a pet peeve of mine: Customers who don’t read the error messages. The usual symptom is a belief that there is just one error: “Doesn’t work”, and that all forms of “doesn’t work” are the same. So if you tried something, got an error, you changed something and you are still getting an error, nothing changed.

I hope everyone who reads this blog understands why this behavior makes any troubleshooting nearly impossible. So I won’t bother to explain why I find this so annoying and so self-defeating. Instead, I’ll explain what we, as developers, can do to improve the situation a bit. (OMG, did I just refer to myself as a developer? I do write code that is then used by customers, so I may as well take responsibility for it)

Here’s what I see as main reasons people don’t read error messages:

  1. The error message is so long that they don’t know where to start reading. Errors with multiple Java stack dumps are especially fun. Stack traces are useful only to people who look at the code, so while it’s important to get them (for support), in most cases your users don’t need to see all that very specific information.
  2. Many different errors lead to the same message. The error message simply doesn’t indicate what the error may be, because it can be one of many different things. I think Kerberos is the worst offender here, so many failures look identical. If this happens very often, you tune out the error message.
  3. The error is so technical and cryptic that it gives you no clue on where to start troubleshooting.  “Table not Found” is clear. “Call to localhost failed on local exception” is not.

I spend a lot of time explaining to my customers “When <app X> says <this> it means that <misconfiguration> happened and you should <solution>”.

To get users to read error messages, I think error messages should be:

  1. Short. Single line or less.
  2. Clear. As much as possible, explain what went wrong in terms your users should understand.
  3. Actionable. There should be one or two actions that the user should take to either resolve the issue or gather enough information to deduce what happened.

I think Oracle are doing a pretty good job of it. Every one of their errors has an ID number, a short description, an explanation and a proposed solution. See here for example: http://docs.oracle.com/cd/B28359_01/server.111/b28278/e2100.htm#ORA-02140
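As a small, hypothetical illustration of "short, clear, actionable" in an Oracle context, here is what raising such an error from PL/SQL might look like (the error number, names and wording below are all invented for this example):

DECLARE
  c_max_batch_size CONSTANT PLS_INTEGER := 1000;
  v_batch_size     PLS_INTEGER := 5000;  -- pretend this came from user input
BEGIN
  IF v_batch_size > c_max_batch_size THEN
    -- One line that states what went wrong and what the user can do about it.
    RAISE_APPLICATION_ERROR(-20501,
      'Batch size ' || v_batch_size || ' exceeds the limit of ' || c_max_batch_size ||
      '. Reduce the batch size or split the input file.');
  END IF;
END;
/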

If we don’t make our errors short, clear and actionable – we shouldn’t be surprised when our users simply ignore them and then complain that our app is impossible to use (or worse – don’t complain, but also don’t use our app).

 

 

 


Categories: DBA Blogs

Deep Dive: Oracle WebCenter Tips and Traps!

Bex Huff - Tue, 2014-04-08 18:26

I'm currently at IOUG Collaborate 2014 in Las Vegas, and I recently finished my 2-hour deep dive into WebCenter. I collected a bunch of tips & tricks in 5 different areas: metadata, contribution, consumption, security, and integrations:


As usual, a lot of good presentations this year, but the Collaborate Mobile App makes it a bit tough to find them...

Bezzotech will be at booth 1350, right by Oracle. Be sure to swing by and register for a free iPad, or even a free consulting engagement!

read more

Categories: Fusion Middleware

Impala docs now included in CDH 5 library

Tahiti Views - Tue, 2014-04-08 17:12
With the release of CDH 5.0.0 and Impala 1.3.0, now for the first time the Impala docs are embedded alongside the CDH Installation Guide, Security Guide, and other CDH docs. This integration makes it easier to link back and forth both ways, and also will help readers find Impala-related content when they search within the CDH 5 library. Here's the full layout of the CDH 5.0.0 library. Notice ...

Easy access to Java SE 7 on Oracle Linux

Wim Coekaerts - Tue, 2014-04-08 13:10
In order to make it very easy to install Java SE 7 on Oracle Linux, we added a Java channel on ULN (http://linux.oracle.com). Here is a brief description of how to enable the channel and install Java on your system.

Enable the Java SE 7 ULN channel for Oracle Linux 6

- Start with a server or desktop installed with Oracle Linux 6 and registered with ULN (http://linux.oracle.com) for updates

This is typically done by running uln_register on your system.

- Log into ULN, go to the Systems tab for your server/desktop and click on Manage Subscriptions

-> Ensure your system is registered to the "Oracle Linux 6 Add ons (x86_64)" channel (it should appear in the 'Subscribed channels' list)

if your system is not registered with the above channel, add it :

-> Click on "Oracle Linux 6 Add ons (x86_64)" in the Available Channels tab and click on the right arrow to move it to Subscribed channels. -> Click on Save Subscriptions

- In order to register with the 'Java SE 7' channel, you first have to install a yum plugin to enable access to channels with licenses

# yum install yum-plugin-ulninfo
Loaded plugins: rhnplugin
This system is receiving updates from ULN.
ol6_x86_64_addons                                        | 1.2 kB     00:00     
ol6_x86_64_addons/primary                                |  44 kB     00:00     
ol6_x86_64_addons                                                       177/177
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package yum-plugin-ulninfo.noarch 0:0.2-9.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package                          Arch                 Version                    Repository                       Size
========================================================================================================================
Installing:
 yum-plugin-ulninfo               noarch               0.2-9.el6                  ol6_x86_64_addons                13 k

Transaction Summary
========================================================================================================================
Install       1 Package(s)

Total download size: 13 k
Installed size: 23 k
Is this ok [y/N]: y
Downloading Packages:
yum-plugin-ulninfo-0.2-9.el6.noarch.rpm                                                          |  13 kB     00:00     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : yum-plugin-ulninfo-0.2-9.el6.noarch                                                                  1/1 
  Verifying  : yum-plugin-ulninfo-0.2-9.el6.noarch                                                                  1/1 

Installed:
  yum-plugin-ulninfo.noarch 0:0.2-9.el6                                                                                 

Complete!

- In future versions of Oracle Linux 6, this RPM will become part of the base channel and at that point you will no longer need to register with the Add ons channel to install yum-plugin-ulninfo

- Add the Java SE 7 channel subscription to your system in ULN

-> Click on "Java SE 7 for Oracle Linux 6 (x86_64) (Public)" in the Available Channels tab and click on the right arrow to move it to Subscribed channels

-> Click on Save Subscriptions

-> A popup will appear with the EULA for Java SE 7, click on Accept or Decline

- Now your system has access to the Java SE 7 channel. You can verify this by executing:

# yum repolist
Loaded plugins: rhnplugin, ulninfo
This system is receiving updates from ULN.
ol6_x86_64_JavaSE7_public:
By downloading the Java software, you acknowledge that your use of the Java software is 
subject to the Oracle Binary Code License Agreement for the Java SE Platform Products and 
JavaFX (which you acknowledge you have read and agree to) available 
at http://www.java.com/license.

ol6_x86_64_JavaSE7_public                                                                        | 1.2 kB     00:00     
ol6_x86_64_JavaSE7_public/primary                                                                | 1.9 kB     00:00     
ol6_x86_64_JavaSE7_public                                                                                           2/2
repo id                        repo name                                                                          status
ol6_x86_64_JavaSE7_public      Java SE 7 for Oracle Linux 6 (x86_64) (Public)                                          2
ol6_x86_64_UEKR3_latest        Unbreakable Enterprise Kernel Release 3 for Oracle Linux 6 (x86_64) - Latest          122
ol6_x86_64_addons              Oracle Linux 6 Add ons (x86_64)                                                       177
ol6_x86_64_ksplice             Ksplice for Oracle Linux 6 (x86_64)                                                 1,497
ol6_x86_64_latest              Oracle Linux 6 Latest (x86_64)                                                     25,093
repolist: 26,891

- To install Java SE 7 on your system, simply use yum install:

# yum install jdk
Loaded plugins: rhnplugin, ulninfo
This system is receiving updates from ULN.
ol6_x86_64_JavaSE7_public:
By downloading the Java software, you acknowledge that your use of the Java software is 
subject to the Oracle Binary Code License Agreement for the Java SE Platform Products
 and JavaFX (which you acknowledge you have read and agree to) 
available at http://www.java.com/license.

Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package jdk.x86_64 2000:1.7.0_51-fcs will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package           Arch                 Version                           Repository                               Size
========================================================================================================================
Installing:
 jdk               x86_64               2000:1.7.0_51-fcs                 ol6_x86_64_JavaSE7_public               117 M

Transaction Summary
========================================================================================================================
Install       1 Package(s)

Total download size: 117 M
Installed size: 193 M
Is this ok [y/N]: y
Downloading Packages:
jdk-1.7.0_51-fcs.x86_64.rpm                                                                                                         | 117 MB     02:27     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 2000:jdk-1.7.0_51-fcs.x86_64                                                                                                            1/1 
Unpacking JAR files...
	rt.jar...
	jsse.jar...
	charsets.jar...
	tools.jar...
	localedata.jar...
	jfxrt.jar...
  Verifying  : 2000:jdk-1.7.0_51-fcs.x86_64                                                                                                            1/1 

Installed:
  jdk.x86_64 2000:1.7.0_51-fcs                                                                                                                             

Complete!

- You now have a complete install of Java SE 7 on your Oracle Linux environment.

# ls /usr/java/jdk1.7.0_51/
bin  COPYRIGHT  db  include  jre  lib  LICENSE  man  README.html  release  src.zip  
THIRDPARTYLICENSEREADME-JAVAFX.txt  THIRDPARTYLICENSEREADME.txt
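
To double-check which Java version you are now running, you can invoke the java binary from the newly installed JDK directly (the exact version string will vary with the JDK build):

# /usr/java/jdk1.7.0_51/bin/java -version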

_small_table_threshold and direct path reads on partitioned tables in 11.2.0.3

Mihajlo Tekic - Tue, 2014-04-08 00:33

I was troubleshooting a performance problem a few days ago. The database the problem was experienced on was recently migrated from Oracle 10.2.0.4 to Oracle 11.2.0.3.

Long story short, the problem was described as follows: the performance of a query that scans two or more partitions in a table is much worse compared to the combined performance of queries accessing each of the partitions separately.

After a short investigation I narrowed down the problem to “direct path reads” being the culprit of the problem.

As you know, due to the adaptive direct read feature introduced in 11g, full table scans may utilize the PGA instead of the buffer cache, as was the case in earlier versions.

There are a few good articles on this change in behavior, among which I personally favor Tanel's blogpost and hacking session and the post by Alex Fatkulin. You could also check MOS Note 793845.1.

What I observed in 11.2.0.3.0 was quite surprising and a bit different from what I've read so far. I know that there are different parameters/variables that influence the decision whether or not direct path reads should be used. I tried to be careful and not to fall into any of these traps.

Please note all the tests were done in a sandbox environment. I advise against trying these tests in any production environment.

The database version was 11.2.0.3.0.

_serial_direct_read = auto

_direct_read_decision_statistics_driven = TRUE

_small_table_threshold = 117


I used ASSM and a tablespace with uniform extent size (64K).

As you may know, the _small_table_threshold parameter is set to about 2% of the size of the buffer cache. On my test machine I have a pretty small buffer cache, 5892 buffers (117 is 1.98% of 5892).


SQL> SELECT name,block_size,buffers FROM v$buffer_pool;

NAME BLOCK_SIZE BUFFERS
-------------------------------------------------- ---------- ----------
DEFAULT 8192 5892


I will try to simplify the problem by using a partitioned table, TEST_PART containing 7 partitions.


CREATE TABLE test_part
(
col1 NUMBER NOT NULL
, col2 VARCHAR2(100)
)
PARTITION BY LIST (col1)
(
PARTITION PART_1 VALUES (1)
, PARTITION PART_2 VALUES (2)
, PARTITION PART_3 VALUES (3)
, PARTITION PART_4 VALUES (4)
, PARTITION PART_5 VALUES (5)
, PARTITION PART_6 VALUES (6)
, PARTITION PART_7 VALUES (7)
) ;


Each of the 7 partitions will be populated with 4000 rows (for a total of 65 blocks allocated per partition) using the following SQL statement:


INSERT INTO test_part (SELECT mod(rownum,7)+1, rpad('A',100,'A') FROM dual CONNECT BY rownum<=28000);


I will collect stats using the statement below:


exec dbms_stats.gather_table_stats(user,'TEST_PART');


As you can see from the output below, each of the partitions has 65 blocks below the HWM:


SQL> select table_name, partition_name, num_rows, blocks, avg_row_len
from user_tab_partitions
where table_name='TEST_PART'; 2 3

TABLE_NAME PARTITION_NAME NUM_ROWS BLOCKS AVG_ROW_LEN
------------------------------ ------------------------------ ---------- ---------- -----------
TEST_PART PART_1 4000 65 104
TEST_PART PART_2 4000 65 104
TEST_PART PART_3 4000 65 104
TEST_PART PART_4 4000 65 104
TEST_PART PART_5 4000 65 104
TEST_PART PART_6 4000 65 104
TEST_PART PART_7 4000 65 104

7 rows selected.



Observation #1 -
_small_table_threshold is applied to the total number of blocks expected to be scanned by the query (considering all partition segments that will be accessed)

As you can see, the number of blocks in each of the partitions (65) is lower than the _small_table_threshold value (117). Therefore a query that accesses only one of the partitions uses the buffer cache to store the segment blocks.


SQL> select count(1) from test_part where col1 in (1);

COUNT(1)
----------
4000


I will use my sese.sql script to check the values of specific session statistics. It simply scans v$sesstat for the current session and a given keyword. If there are statistics that contain the specified keyword and their value is greater than 0, they will be reported. As you can see, no direct path reads were performed.


SQL> @sese direct

no rows selected
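
For reference, a script like sese.sql can be approximated with something along these lines; this is my own sketch based on the description above, not necessarily the author's exact script (it takes the keyword as the first SQL*Plus argument):

select s.sid, s.statistic# id, 'STAT.' || n.name name, s.value
  from v$sesstat s, v$statname n
 where s.statistic# = n.statistic#
   and s.sid = sys_context('userenv', 'sid')
   and lower(n.name) like '%' || lower('&1') || '%'
   and s.value > 0
 order by n.name;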


I expected to see the next query utilizing the buffer cache as well. It scans two partitions. As you know, each of the partitions has 65 blocks, which is less than the _small_table_threshold value (117), hence I thought I wouldn't see any direct path reads.


SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

However, direct path reads were performed. Moreover, even though one of the partitions I previously scanned was already in the buffer cache, both partitions were scanned using direct path reads. As shown in the output below, two segments were fully scanned using direct reads, for a total of 130 direct reads (2x65).


SQL> @sese direct

SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 130
7 81 STAT.physical reads direct 130
7 380 STAT.table scans (direct read) 2

Let’s see what happens when I increase _small_table_threshold to 130 and run the last query.


SQL> alter session set "_small_table_threshold"=130;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 130
7 81 STAT.physical reads direct 130
7 380 STAT.table scans (direct read) 2


The number of direct path reads stayed the same, which means no new direct path reads were performed.

How about if we add one more partition to the equation now (_small_table_threshold=130):


SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 325
7 81 STAT.physical reads direct 325
7 380 STAT.table scans (direct read) 5


Now, since we scan 3 partitions, that is 195 blocks, Oracle went back to direct path reads and the statistic numbers went up by 195 (3x65): 130+195=325, or three new table/segment scans.

Therefore it seems the logic behind the decision whether or not to perform direct path reads is:

IF SUM(blocks of all partitions that are accessed)>_small_table_threshold THEN
     perform direct path reads for all partitions that are accessed
ELSE
     utilize buffer cache

Again, just to remind you this behavior is specific to 11.2.0.3.


Observation #2 -
The percentage of cached blocks per partition is not relevant


This brings me to the second observation. If you query X$KCBOQH.NUM_BUF for the partition segments (read Tanel’s blogpost or watch his hacking session ) you can see that even though partitions PART_1 and PART_2 were in the cache, Oracle still performed direct path reads for all three partitions:


SQL> conn /as sysdba
Connected.

SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 1


I ran the output above after the last test. As you can see PART_1 and PART_2 segments are completely in the buffer cache, 66 blocks each (65 blocks for the data and 1 block for the segment header). PART_3 however has only one block in the cache and that is most likely the segment header block.

But, even when all 3 partitions were fully loaded in the buffer cache, Oracle still performed direct path reads:


SQL> conn *****/*****
Connected.
SQL> select count(1) from test_part where col1 in (3);

COUNT(1)
----------
4000

SQL> conn /as sysdba
Connected.

SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 66

SQL> conn *****/*****
Connected.
SQL> @sese direct

no rows selected

SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 195
7 81 STAT.physical reads direct 195
7 380 STAT.table scans (direct read) 3

SQL>

I will stop with this post here. Tomorrow I will publish another post where I'll show what difference _direct_read_decision_statistics_driven could make for partitioned tables in 11.2.0.3.
(Update: Part2 - what difference does _direct_read_decision_statistics_driven=FALSE make)

I will repeat the same tests in 11.2.0.4 and 12.1.0.1 and see if the behavior is any different.

Stay tuned.
 

HTTP-404 on /oamconsole

Frank van Bortel - Mon, 2014-04-07 08:35
WeblogicHost versus WeblogicCluster. Despite the fact that the oamconsole cannot be clustered, it has to be "clustered". If you ever find yourself in a scenario where you configure a webgate in front of your OAM Console, make sure you configure it like: ############################################## ## Entries Required by Oracle Access Manager ############################################## # OAM ...
