
Feed aggregator

The worst database developers in the world?

DBMS2 - 1 hour 31 min ago

If the makers of MMO RPGs (Massive Multi-Player Online Role-Playing Games) aren’t quite the worst database application developers in the world, they’re at least on the short list for consideration. The makers of Guild Wars didn’t even try to have decent database functionality. A decade later, when they introduced Guild Wars 2, the database-oriented functionality (auction house, real-money store, etc.) would crash for days at a time. Lord of the Rings Online evidently had multiple issues with database functionality. Now I’m playing Elder Scrolls Online, which on the whole is a great game, but which may have the most database screw-ups of all.

ESO has been live for less than 3 weeks, and in that time:

1. There’s been a major bug in which players’ “banks” shrank, losing items and so on. Days later, the data still hasn’t been recovered; after a patch, the problem, if anything, worsened.

2. Guild functionality has at times been taken down while the rest of the game functioned.

3. Those problems aside, bank and guild bank functionality are broken, via what might be considered performance bugs. Problems I repeatedly encounter include:

  • If you deposit a few items, the bank soon goes into a wait state where you can’t use it for a minute or more.
  • Similarly, when you try to access a guild — i.e. group — bank, you often find it in an unresponsive state.
  • If you make a series of updates a second apart, the game tells you you’re doing things too quickly, and insists that you slow down a lot.
  • Items that are supposed to “stack” appear in 2 or more stacks; i.e., a very simple kind of aggregation is failing. There are also several other related recurring errors, which I conjecture have the same underlying cause.

In general, it seems as if what should be a collection of database records is really just a list, parsed each time an update occurs and periodically flushed in its entirety to disk, with all the performance problems you’d expect from that kind of choice.
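For illustration only (none of this is based on ZeniMax’s actual code): keying stacks by item id, as any record-oriented store would, makes the aggregation trivial and each deposit O(1), whereas a flat, re-parsed list invites exactly the duplicate-stack and latency symptoms described above. A minimal Python sketch:

```python
from collections import defaultdict

def stack_items(events):
    """Aggregate deposit events into one stack per item id.

    Keying by item id makes duplicate stacks impossible and each
    deposit O(1); the in-game behaviour suggests something closer to
    a flat list that is re-parsed in full on every update.
    """
    stacks = defaultdict(int)
    for item_id, qty in events:
        stacks[item_id] += qty
    return dict(stacks)

deposits = [("iron_ore", 10), ("flax", 3), ("iron_ore", 5)]
print(stack_items(deposits))  # {'iron_ore': 15, 'flax': 3}
```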

4. Even stupider are the in-game stores, where fictional items are sold for fictional money. They have an e-commerce interface that is literally 15+ years out of date — items are listed with VERY few filtering options, and there is no way to change the sort. But even that super-primitive interface doesn’t work; in particular, filter queries frequently return incorrect empty-set responses.

5. Much as in other games, over 10 minutes of state changes can be lost.

Except perhaps for #5, these are all functions that are surely only loosely coupled to the rest of the game, so the general difficulties of game scaling and performance should have no bearing on them. Hence there’s no excuse for doing such a terrible job of development on large portions of gameplay functionality.

Based on job listings, ESO developer Zenimax doesn’t see database functionality as a major area to fix. This makes me sad.

Categories: Other

April 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-04-15 14:04

Hello, this is Eric Maurice again.

Oracle today released the April 2014 Critical Patch Update. This Critical Patch Update provides fixes for 104 vulnerabilities across a number of product lines including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Supply Chain Product Suite, Oracle iLearning, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Java SE, Oracle and Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL. A number of the vulnerabilities fixed in this Critical Patch Update have high CVSS Base Scores and are highlighted in this blog entry. Oracle recommends this Critical Patch Update be applied as soon as possible.

Out of the 104 vulnerabilities fixed in the April 2014 Critical Patch Update, 2 were for the Oracle Database. The most severe of these database vulnerabilities received a CVSS Base Score of 8.5 for the Windows platform to denote a full compromise of the targeted system, although successful exploitation of this bug requires the attacker to authenticate. On other platforms (e.g., Linux, Solaris), the CVSS Base Score is 6.0, because a successful compromise would be limited to the Database and not extend to the underlying Operating System. Note that Oracle reports this kind of vulnerability with the ‘Partial+’ value for Confidentiality, Integrity, and Availability impact (Partial+ is used when the exploit affects a wide range of resources, e.g. all database tables). Oracle applies the CVSS 2.0 standard strictly, and as a result the Partial+ value does not inflate the CVSS Base Score (CVSS only provides for ‘None,’ ‘Partial,’ or ‘Complete’ to report the impact of a bug). This custom value is intended to call customers’ attention to the potential impact of the specific vulnerability and enable them to manually increase this severity rating if they wish. For more information about Oracle’s use of CVSS, see

This Critical Patch Update also provides fixes for 20 Fusion Middleware vulnerabilities. The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5, for a vulnerability in Oracle WebLogic Server (CVE-2014-2470) that is remotely exploitable without authentication. If successfully exploited, this vulnerability can result in a wide compromise of the targeted WebLogic Server (Partial+ rating for Confidentiality, Integrity, and Availability; see the previous discussion of the meaning of the ‘Partial+’ value reported by Oracle).
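As a rough illustration of how such scores arise (a sketch of the published CVSS 2.0 base equation; the metric weights below come from the CVSS v2 specification, not from Oracle’s advisory), a network-accessible, low-complexity, no-authentication vulnerability with Partial impact on confidentiality, integrity, and availability works out to exactly 7.5:

```python
def cvss2_base(av, ac, au, c, i, a):
    """CVSS 2.0 base score, per the CVSS v2 equations.

    av/ac/au: Access Vector, Access Complexity, Authentication weights.
    c/i/a:    impact weights (None=0.0, Partial=0.275, Complete=0.660);
              Oracle's 'Partial+' is still scored as Partial (0.275).
    """
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 1.176 if impact else 0.0
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Network (1.0) / Low complexity (0.71) / No auth (0.704), Partial C/I/A:
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.275, 0.275))  # 7.5
```

Swapping the three Partial impact weights (0.275) for Complete (0.660) yields 10.0, the ceiling of the scale.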

Also included in this Critical Patch Update were fixes for 37 Java SE vulnerabilities. 4 of these Java SE vulnerabilities received a CVSS Base Score of 10.0. 29 of these 37 vulnerabilities affected client-only deployments, while 6 affected client and server deployments of Java SE. Rounding out this count were one vulnerability affecting the Javadoc tool and one affecting unpack200. As a reminder, desktop users, including home users, can leverage the Java Autoupdate or visit to ensure that they are running the most recent version of Java. Java SE security fixes delivered through the Critical Patch Update program are cumulative. In other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes. Oracle strongly recommends that Java users, particularly home users, keep up with Java releases and remove obsolete versions of Java SE, so as to protect themselves against malicious exploitation of Java vulnerabilities.

This Critical Patch Update also included fixes for 5 vulnerabilities affecting Oracle Linux and Virtualization products suite.  The most severe of these vulnerabilities received a CVSS Base Score of 9.3, and this vulnerability (CVE-2013-6462) affects certain versions of Oracle Global Secure Desktop. 

Due to the relative severity of a number of the vulnerabilities fixed in this Critical Patch Update, Oracle strongly recommends that customers apply this Critical Patch Update as soon as possible.  In addition, as previously discussed, Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update.  However, it is often the case that earlier versions of affected releases are affected by vulnerabilities fixed in recent Critical Patch Updates.  As a result, it is highly desirable that organizations running unsupported versions, for which security fixes are no longer available under Oracle Premier Support, update their systems to a currently-supported release so as to fully benefit from Oracle’s ongoing security assurance effort.

For more information:

The April 2014 Critical Patch Update Advisory is located at

More information about Oracle’s application of the CVSS scoring system is located at

An Ovum white paper “Avoiding security risks with regular patching and support services” is located at

More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at

The details of the Common Vulnerability Scoring System (CVSS) are located at

Java desktop users can verify that they are running the most recent version of Java and remove older versions of Java by visiting



Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Tue, 2014-04-15 13:52
Contributions by Angela Golla, Infogram Deputy Editor

Mark Hurd’s Latest Blog Explains Why Customer-Obsessed Marketing Is Your Next Competitive Edge

Oracle President Mark Hurd has posted his latest LinkedIn Influencer blog, “Customer-Obsessed Marketing Is Your Next Competitive Edge.” 
Mark Hurd, President, Oracle

In this new blog, Mark writes, “Marketing executives are leading the charge to convince their organizations of the inherent danger in today’s highly digitized buyer-seller relationship. And they’re doing that by proving that “your customers are only one click away from your competitors” is more than just a clever phrase—it’s the difference between being a market leader and going out of business.
"The good news is that as marketing executives strive to develop new customer-engagement models, to optimize multiple channels formerly in conflict and generate new revenue streams, they now have access to world-class marketing-automation tools, which have the potential to keep more prospects from making that one-click jump to a competitor…

Frequently Misused Metrics in Oracle

Steve Karam - Tue, 2014-04-15 13:43

Back in March of last year I wrote an article on the five frequently misused metrics in Oracle: These Aren’t the Metrics You’re Looking For.

To sum up, my five picks for the most misused metrics were:


  1. db file scattered read – Scattered reads aren’t always full table scans, and they’re certainly not always bad.
  2. Parse to Execute Ratio – This is not a metric that shows how often you’re hard parsing, no matter how many times you may have read otherwise.
  3. Buffer Hit Ratio – I want to love this metric, I really do. But it’s an advisory one at best, horribly misleading at worst.
  4. CPU % – You license Oracle by CPU. You should probably make sure you’re making the most of your processing power, not trying to reduce it.
  5. Cost – No, not money. Optimizer cost. Oracle’s optimizer might be cost based, but you are not. Tune for time and resources, not Oracle’s own internal numbers.

Version after version, day after day, these don’t change much.

Anyways, I wanted to report to those who aren’t aware that I created a slideshow based on that blog for RMOUG 2014 (which I sadly was not able to attend at the last moment). Have a look and let me know what you think!

Metric Abuse: Frequently Misused Metrics in Oracle

Have you ever committed metric abuse? Gone on a performance tuning snipe hunt? Spent time tuning something that, in the end, didn’t even really have an impact? I’d love to hear your horror stories.

Also while you’re at it, have a look at the Sin of Band-Aids, and what temporary tuning fixes can do to a once stable environment.

And lastly, keep watching #datachat on Twitter and keep an eye out for an update from Confio on today’s #datachat on Performance Tuning with host Kyle Hailey!

The post Frequently Misused Metrics in Oracle appeared first on Oracle Alchemist.

Links to External Articles and Interviews

Michael Feldstein - Tue, 2014-04-15 11:41

Last week I was off the grid (not just lack of Internet but also lack of electricity), but thanks to publishing cycles I managed to stay artificially productive: two blog posts and one interview for an article.

Last week brought news of a new study on textbooks for college students, this time from a research arm of the National Association of College Stores. The report, “Student Watch: Attitudes and Behaviors toward Course Materials, Fall 2013”, seems to throw some cold water on the idea of digital textbooks based on the press release summary [snip]

While there is some useful information in this survey, I fear that the press release is missing some important context. Namely, how can students prefer something that is not really available?

March 28, 2014 may well go down as the turning point where Big Data lost its placement as a silver bullet and came down to earth in a more productive manner. Triggered by a March 14 article in Science Magazine that identified “big data hubris” as one of the sources of the well-known failures of Google Flu Trends,[1] there were five significant articles in one day on the disillusionment with Big Data. [snip]

Does this mean Big Data is over and that education will move past this over-hyped concept? Perhaps Mike Caulfield from the Hapgood Blog stated it best, including adding the education perspective . . .

This is the fun one for me, as I finally have my youngest daughter’s interest (you made Buzzfeed!). Buzzfeed has added a new education beat focusing on the business of education.

The public debut last week of education technology company 2U, which partners with nonprofit and public universities to offer online degree programs, may have looked like a harbinger of IPO riches to come for companies that, like 2U, promise to disrupt the traditional education industry. At least that’s what the investors and founders of these companies want to believe. [snip]

“We live in a post-Facebook area where startups have this idea that they can design a good product and then just grow, grow, grow,” said Phil Hill, an education technology consultant and analyst. “That’s not how it actually works in education.”


The post Links to External Articles and Interviews appeared first on e-Literate.

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-04-15 10:50

I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter, so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-04-15 10:50

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-04-15 10:50

We released version 1.3 of PFCLScan our enterprise database security scanner for Oracle a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-04-15 10:50

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new contents and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-04-15 10:50

It has been a few weeks since my last blog post but don't worry I am still interested to blog about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Oracle 12c Security - SQL Translation and Last Logins

Pete Finnigan - Tue, 2014-04-15 10:50

There have been some big new security items added to 12cR1 such as SHA2 in DBMS_CRYPTO, code based security in PL/SQL, Data Redaction, unified audit or even privilege analysis but also as I hinted in some previous blogs there are....[Read More]

Posted by Pete On 31/07/13 At 11:11 AM

Categories: Security Blogs

Hacking Oracle 12c COMMON Users

Pete Finnigan - Tue, 2014-04-15 10:50

The main new feature of Oracle 12cR1 has to be the multitenant architecture that allows tenant databases to be added or plugged into a container database. I am interested in the security of this of course and one element that....[Read More]

Posted by Pete On 23/07/13 At 02:52 PM

Categories: Security Blogs

Oracle Security Loop hole from Steve Karam

Pete Finnigan - Tue, 2014-04-15 10:50

I just saw a link to a post by Steve Karam on an ISACA list and went for a look. The post is titled " Password Verification Security Loophole ". This is an interesting post discussing the fact that ALTER....[Read More]

Posted by Pete On 22/07/13 At 08:39 PM

Categories: Security Blogs

Supercharge your Applications with Oracle WebLogic

Every enterprise uses an application server, but the question is why they need one. The answer is that they need to deliver applications and software to just about any device...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Why is Affinity Mask Negative in sp_configure?

Pythian Group - Tue, 2014-04-15 07:56

While looking at a SQL Server health report, I found the affinity mask parameter in sp_configure output showing a negative value.

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

Affinity mask is a SQL Server configuration option used to assign processors to specific threads for improved performance. To know more about affinity mask, read this. Usually, the value for affinity mask is a positive integer (in decimal format) in sp_configure. The article in the previous link shows an example of a binary bit mask and the corresponding decimal value to be set in sp_configure.


I was curious to find out why the value of affinity mask could be negative when, according to BOL:


The values for affinity mask are as follows:

  • A one-byte affinity mask covers up to 8 CPUs in a multiprocessor computer.

  • A two-byte affinity mask covers up to 16 CPUs in a multiprocessor computer.

  • A three-byte affinity mask covers up to 24 CPUs in a multiprocessor computer.

  • A four-byte affinity mask covers up to 32 CPUs in a multiprocessor computer.

  • To cover more than 32 CPUs, configure a four-byte affinity mask for the first 32 CPUs and up to a four-byte affinity64 mask for the remaining CPUs.


Time to unfold the mystery. Windows Server 2008 R2 supports more than 64 logical processors. From the ERRORLOG, I see there are 40 logical processors on the server:


2014-03-31 18:18:18.18 Server      Detected 40 CPUs. This is an informational message; no user action is required.


Further down in the ERRORLOG, I see this server has four NUMA nodes configured.


Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.


Node configuration: node 0: CPU mask: 0x00000000000ffc00:0 Active CPU mask: 0x0000000000001c00:0.

Node configuration: node 1: CPU mask: 0x00000000000003ff:0 Active CPU mask: 0x0000000000000007:0.

Node configuration: node 2: CPU mask: 0x000000003ff00000:0 Active CPU mask: 0x0000000000700000:0.

Node configuration: node 3: CPU mask: 0x000000ffc0000000:0 Active CPU mask: 0x00000001c0000000:0.


These were hard NUMA nodes; no soft NUMA nodes were configured on the server (no related registry keys exist).


An important thing to note is that the affinity mask value for sp_configure ranges from -2147483648 to 2147483647, a span of 2147483648 + 2147483647 + 1 = 4294967296 = 2^32 values, i.e. the range of the int data type. Hence the affinity mask value from sp_configure is not sufficient to hold more than 64 CPUs. To deal with this, ALTER SERVER CONFIGURATION was introduced in SQL Server 2008 R2 to support and set the processor affinity for more than 64 CPUs. However, in such cases the value of affinity mask in sp_configure is still an *adjusted* value, which we are going to work out below.


Let me paste the snippet from ERRORLOG again:


Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.


As it says, the processor mask values above are the processor affinity, i.e. the affinity mask. These values correspond to online_scheduler_mask in sys.dm_os_nodes, which makes up the ultimate value for affinity mask in sp_configure. Ideally, affinity mask should be the sum of these values. Let’s add these hexadecimal values using the Windows Calculator (choose Programmer from the View menu):



  0x0000000000001c00

+ 0x0000000000000007

+ 0x0000000000700000

+ 0x00000001c0000000


= 0x00000001C0701C07


= 7523539975 (decimal)


So, affinity mask in sp_configure should have been equal to 7523539975. Since this number is greater than the limit of 2^32, i.e. 4294967296, we see an *adjusted* value (in this case a negative one). I call it an *adjusted* value because the sum of the processor mask values (in decimal) is repeatedly reduced by 4294967296 until it falls within the range of the int data type. Here is the arithmetic:


7523539975 - 4294967296 - 4294967296 = -1066394617 = the negative value seen in sp_configure

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

That explains why affinity mask shows up as a negative number in sp_configure.
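The same wrap-around can be checked outside SQL Server. A short Python sketch (the mask values are taken from the ERRORLOG excerpts above) reinterprets the 64-bit sum as a signed 32-bit integer, which is exactly how the value surfaces in sp_configure:

```python
def sp_configure_affinity_mask(masks):
    """Sum the per-node scheduler masks, then reinterpret the result
    as a signed 32-bit int (two's complement), as sp_configure does."""
    total = sum(masks)
    adjusted = total & 0xFFFFFFFF   # keep only the low 32 bits
    if adjusted >= 2**31:           # high bit set: value goes negative
        adjusted -= 2**32
    return total, adjusted

# Node masks from the ERRORLOG above (4 NUMA nodes, 40 CPUs)
masks = [0x0000000000001C00, 0x0000000000000007,
         0x0000000000700000, 0x00000001C0000000]
print(sp_configure_affinity_mask(masks))  # (7523539975, -1066394617)
```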


To make the calculation easier, I wrote a small script to find the sp_configure-equivalent value of affinity mask in the case of NUMA nodes:


-- Find out the sp_configure equivalent value of affinity mask in the case of NUMA nodes

DECLARE @real_value bigint;               -- to hold the sum of online_scheduler_mask
DECLARE @range_value bigint = 4294967296; -- range of the int datatype, i.e. 2^32
DECLARE @config_value int = 0;            -- affinity mask value as reported by sp_configure; set below

-- Fetch the sum of online_scheduler_mask, excluding node id 64 (the hidden scheduler)
SET @real_value = (SELECT SUM(online_scheduler_mask)
                   FROM sys.dm_os_nodes
                   WHERE memory_node_id <> 64);

-- Adjust the value into the int range, as sp_configure does
WHILE (@real_value > 2147483647)
    SET @real_value = @real_value - @range_value;

-- Copy the value for affinity mask as seen in sp_configure
SET @config_value = @real_value;
PRINT 'The current config_value for affinity_mask parameter in sp_configure is: ' + CAST(@config_value AS varchar);


This script will give the current config value for SQL Server in any of these cases: NUMA nodes, more than 64 processors, SQL Server 2008 R2.


Hope this post helps if you were as puzzled as I was on seeing the negative number in sp_configure.


Stay tuned!

Categories: DBA Blogs

Delivering the Moments of Engagement Across the Enterprise

WebCenter Team - Tue, 2014-04-15 07:00


A Five-Step Roadmap for Mobilizing a Digital Business

Geoffrey Bock, Principal, Bock & Company
Michael Snow, Principal Product Marketing Director, Oracle WebCenter

Over the past few years, we have been fascinated by the impact of mobility on business. As employees, partners, and customers, we now carry powerful devices in our pockets and handbags. Our smartphones and tablets are always with us, always on, and always collecting information. We are no longer tethered to fixed work places; we can frequently find essential information with just a few taps and swipes. More and more, this content is keyed to our current context. Moreover, we often are immersed in an array of sensors that track our actions, personalize the results, and assist us in innumerable ways. Our business and social worlds are in transition. This is not the enterprise computing environment of the 1990’s or even the last decade.

Yet while tracking trends with the mobile industry, we have encountered a repeated refrain from many technology and business leaders. Sure, mobile apps are neat, they say. But how do you justify the investments required? What are the business benefits of enterprise mobility? When should companies harness the incredible opportunities of the mobile revolution?

To answer these questions, we think that it is important to recognize the steps along the mobile journey. Certainly companies have been investing in their enterprise infrastructure for many years. In fact, enterprise-wide mobility is just the latest stage in the development of digital business initiatives.

What is at stake is not simply introducing nifty mobile apps as access points to existing enterprise applications. The challenge is weaving innovative digital technologies (including mobile) into the fabric (and daily operations) of an organization. Companies become digital businesses by adapting and transforming essential enterprise activities. As they mobilize key business experiences, they drive digital capabilities deeply into their application infrastructure.

Please join us for a conversation this Thursday (04/17/14 @ 10AM PST) about how Oracle customers are making this mobile journey, our five-step roadmap for delivering the moments of engagement across the enterprise.


Creating some Pivotal Cloud Foundry (PCF) PHD services

Pas Apicella - Tue, 2014-04-15 06:52
After installing the PHD add-on for Pivotal Cloud Foundry 1.1, I quickly created some development services for PHD using the CLI, as shown below.

[Tue Apr 15 22:40:08 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hawq-cf free dev-hawq
Creating service dev-hawq in org pivotal / space development as pas...
[Tue Apr 15 22:42:31 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hbase-cf free dev-hbase
Creating service dev-hbase in org pivotal / space development as pas...
[Tue Apr 15 22:44:10 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hive-cf free dev-hive
Creating service dev-hive in org pivotal / space development as pas...
[Tue Apr 15 22:44:22 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-yarn-cf free dev-yarn
Creating service dev-yarn in org pivotal / space development as pas...

Finally, I used the web console to browse the services in the "Development" space.
Categories: Fusion Middleware

OBIEE Security: Usage Tracking, Logging and Auditing for SYSLOG or Splunk

Enabling OBIEE Usage Tracking and Logging is a key part of almost any security strategy. More information on these topics can be found in the whitepaper references below. It is very easy to set up logging such that a centralized logging solution such as SYSLOG or Splunk can receive OBIEE activity.

Usage Tracking

Knowing who ran what report, when and with what parameters is helpful not only for performance tuning but also for security. OBIEE 11g provides a sample RPD with a Usage Tracking subject area. The subject area will report on configuration and changes to the RPD as well as configuration changes to Enterprise Manager.  To start using the functionality, one of the first steps is to copy the components from the sample RPD to the production RPD.

Usage tracking can also be redirected to log files. The STORAGE_DIRECTORY setting is in the NQSConfig.INI file. This can be set if OBIEE usage logs are being sent, for example, to a centralized SYSLOG database.
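As an illustrative sketch only (the parameter names come from the OBIEE 11g NQSConfig.INI usage-tracking section, but the values and the directory path below are placeholder assumptions, not recommendations), file-based usage tracking might be configured like this:

```ini
[USAGE_TRACKING]
ENABLE = YES;
DIRECT_INSERT = NO;
STORAGE_DIRECTORY = "/u01/obiee/usage_tracking";
CHECKPOINT_INTERVAL_MINUTES = 5;
FILE_ROLLOVER_INTERVAL_MINUTES = 30;
CODE_PAGE = "ANSI";
```

Setting DIRECT_INSERT to NO selects flat-file output rather than direct database inserts, which is what a SYSLOG or Splunk collector watching STORAGE_DIRECTORY would want.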

The Usage Tracking Sample RPD can be found here:



Logging

OBIEE offers standard functionality for application-level logging. This logging should be considered one component of the overall logging approach and strategy. The operating system and database(s) supporting OBIEE should use a centralized logging solution (most likely syslog), and it is also possible to parse the OBIEE logs for syslog consolidation.

For further information on OBIEE logging refer to the Oracle Fusion Middleware System Administrator’s Guide for OBIEE 11g (part number E10541-02), chapter eight.

To configure OBIEE logging, the BI Admin client tool is used to set the overall default log level for the RPD as well as identify specific users to be logged. The log level can differ among users. No logging is possible for a role.

Logging levels are set between zero and seven:

Level 0 - No logging

Level 1 - Logs the SQL statement issued from the client application

Level 2 - All of Level 1, plus OBIEE infrastructure information and query statistics

Level 3 - All of Level 2, plus cache information

Level 4 - All of Level 3, plus query plan execution

Level 5 - All of Level 4, plus intermediate row counts

Levels 6 & 7 - Not used


OBIEE log files, by BI component:

  • BI Server

  • BI Server Query: nqquery<n>.log, where <n> is the date and timestamp (for example, nqquery-20140109-2135.log); this is the Oracle BI Server query log

  • BI Cluster Controller

  • Oracle BI Scheduler

  • Usage Tracking: the STORAGE_DIRECTORY parameter in the Usage Tracking section of the NQSConfig.INI file determines the location of usage tracking logs

  • Presentation Services: sawlog*.log (for example, sawlog0.log); the configuration of this log (e.g. the writer setting to output to syslog or the Windows event log) is set in instanceconfig.xml

  • BI JavaHost

If you have questions, please contact us at

 -Michael Miller, CISSP-ISSMP



Tags: Oracle Business Intelligence (OBIEE), Auditor, IT Security
Categories: APPS Blogs, Security Blogs

WordPress 3.8.3 – Auto Update

Tim Hall - Tue, 2014-04-15 01:53

WordPress 3.8.3 came out yesterday. It’s a small maintenance release, with the downloads and changelog in the usual places. For many people, this update will happen automatically and they’ll just receive an email to say it has been applied.

I’m still not sure what to make of the auto-update feature of WordPress. Part of me likes it and part of me is a bit irritated by it. For the lazy folks out there, I think it is a really good idea, but for those who are on their blog admin screens regularly it might seem like a source of confusion. I currently self-host 5 WordPress blogs and the auto-update feature seems a little erratic. One blog always auto-updates as soon as a new release comes out. A couple sometimes do. I don’t think this blog has ever auto-updated…

I’d be interested to hear if other self-hosting WordPress bloggers have had a similar experience…



WordPress 3.8.3 – Auto Update was first posted on April 15, 2014 at 8:53 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.