
Feed aggregator

Oracle encryption wallet password found in SGA

ContractOracle - Sun, 2014-07-13 20:51
If companies are worried about data privacy or leakage, a common recommendation is to encrypt sensitive data inside Oracle databases to stop DBAs from accessing it, and to implement "separation of duties" so that only the application or data owner holds the encryption keys or wallet password.  One way to encrypt data is Oracle Transparent Data Encryption (TDE), which stores keys in the Oracle wallet, protected by a wallet password.  Best practice dictates using a very long wallet password to defeat rainbow tables and brute-force attacks, and keeping the key and password secret.

I wrote a simple program to search for data in Oracle shared memory segments, and it was able to find the Oracle wallet password, which means anyone who can read the shared memory can get the wallet password and access the encrypted data.  The following demonstrates this :-

First open and close the wallet using the password :-

CDB$ROOT@ORCL> alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";

System altered.

CDB$ROOT@ORCL> alter system set encryption wallet close identified by "verylongverysecretwalletpassword1";

System altered.

Now search for the wallet password in SGA :-
[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring

Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1
The search found the password in SGA, so it should be possible to analyse the memory structure that currently stores the known password and write another program to extract passwords directly on unknown systems.  It may also be possible to find the password by selecting from v$ or x$ views.  I have not done that analysis, so I don't know how difficult it would be, but if the password is stored, it will be possible to extract it.  Even if it is mixed up with a lot of other SQL text and variables, it would be very simple to just try opening the wallet using every string stored in SGA.
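The sga_search program itself isn't published with the post, but the idea is simple: when automatic memory management is in use, Oracle exposes its SGA segments as files under /dev/shm, and any readable file can be scanned for a known byte string. Below is a hypothetical Python stand-in, not the author's C program; the /dev/shm file-name pattern and the chunked search are my assumptions:

```python
import glob
import sys

def find_in_file(path, needle, chunk_size=1 << 20):
    """Scan a file in chunks, yielding byte offsets where needle occurs.

    Chunks overlap by len(needle) - 1 bytes so matches spanning a chunk
    boundary are not missed.
    """
    overlap = len(needle) - 1
    offset = 0          # total bytes read so far (start of current chunk)
    tail = b""          # carry-over bytes from the previous chunk
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            data = tail + chunk
            start = 0
            while True:
                hit = data.find(needle, start)
                if hit == -1:
                    break
                # data[0] sits at absolute position offset - len(tail)
                yield offset - len(tail) + hit
                start = hit + 1
            tail = data[-overlap:] if overlap else b""
            offset += len(chunk)

if __name__ == "__main__" and len(sys.argv) > 1:
    needle = sys.argv[1].encode()
    # Assumed segment naming: Oracle's SGA appears as ora_* files
    # under /dev/shm when automatic memory management is in use.
    for seg in sorted(glob.glob("/dev/shm/ora_*")):
        for pos in find_in_file(seg, needle):
            print(f"{seg} found string at {pos}")
```

Anyone with read access to the segment files could run something like `python sga_search.py verylongverysecretwalletpassword1` to reproduce the result above.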
The password is still in SGA after flushing the buffer cache.
CDB$ROOT@ORCL> alter system flush buffer_cache;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring

Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1

After flushing the shared pool the password is no longer available.  
CDB$ROOT@ORCL> alter system flush shared_pool;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring

Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
[oracle@localhost shared_memory]$
As this password really should be secret, Oracle should not store it in SGA.   More research is needed to confirm whether the password can be hidden by using bind variables, obfuscation, or wrapping it in PL/SQL.
Categories: DBA Blogs

Master Data Services installation for SQL Server 2012

Yann Neuhaus - Sun, 2014-07-13 20:32

This post is a tutorial for installing Master Data Services on Windows Server 2012. Microsoft SQL Server Master Data Services (MDS) is a master data management product from Microsoft, code-named Bulldog. It is the rebranding of the Stratature MDM product, titled +EDM, which Microsoft acquired in June 2007. It was first integrated into Microsoft SQL Server 2008 as an additional installer, but since SQL Server 2012, Master Data Services has been integrated as a feature within the SQL Server installer.



Master Data Services is part of the Enterprise Information Management (EIM) technologies provided by Microsoft for managing information in an enterprise.

EIM technologies include:

  • Integration Services
  • Master Data Services
  • Data Quality Services



Master Data Services covers four main components:

  • MDS Configuration Manager tool: used to configure Master Data Services
  • MDS Data Manager Web Application: used essentially to perform administrative tasks
  • MDS Web Service: used to extend or develop custom solutions
  • MDS Add-in for Excel: used to manage data, create new entities or attributes …


SQL Server Editions & Versions

Master Data Services can be installed only with the following SQL Server versions and editions:

  • SQL Server 2008 R2: Datacenter or Enterprise editions
  • SQL Server 2012 or SQL Server 2014: Enterprise or Business Intelligence editions


Master Data Services prerequisites in SQL Server 2012

First, Master Data Services relies on a web application named Master Data Manager, used, for example, to perform administrative tasks. This web application is hosted by Internet Information Services (IIS), so IIS is a necessary prerequisite.

Furthermore, to be able to display the content from the web application, you need Internet Explorer 7 or later (Internet Explorer 6 is not supported) with Silverlight 5.

Moreover, if you plan to use Excel with Master Data Services, you also need to install the Visual Studio 2010 Tools for Office Runtime, plus the Master Data Services Add-in for Microsoft Excel.

Finally, and often forgotten: PowerShell 2.0 is required for Master Data Services.

Let’s summarize the requirements for Master Data Services:

  • Internet Information Services (IIS)
  • Internet Explorer 7 or later
  • Silverlight 5
  • PowerShell 2.0
  • Visual Studio 2010 Tools for Office Runtime and the Master Data Services Add-in for Microsoft Excel (only if you plan to use Excel with Master Data Services)


Configuration at the Windows Server level

In the Server Manager, you have to activate the Web Server (IIS) Server Roles to be able to host the Master Data Web Application, as well as the .Net 3.5 feature.

For the Server Roles, you have to select:

  • Web Server (IIS)

For the Server Features, you have to select:

- .NET Framework 3.5 Features
  - .NET Framework 3.5
  - HTTP Activation
- .NET Framework 4.5 features
  - .NET Framework 4.5
  - ASP.NET 4.5
  - WCF Services
    - HTTP Activation
    - TCP Port Sharing




For the IIS features selection, you have to select:

- Web Server
  - Common HTTP Features
    - Default Document
    - Directory Browsing
    - HTTP Errors
    - Static Content
  - Health and Diagnostics
    - HTTP Logging
    - Request Monitor
  - Performance
    - Static Content Compression
  - Security
    - Request Filtering
    - Windows Authentication
  - Application Development
    - .NET Extensibility
    - .NET Extensibility 4.5
    - ASP.NET 3.5
    - ASP.NET 4.5
    - ISAPI Extensions
    - ISAPI Filters
  - Management Tools
    - IIS Management Console





Installation of SQL Server 2012

Master Data Services stores its data in a SQL Server database, so you need a SQL Server Engine installed.

Of course, the SQL Server Engine can be installed on a different Windows Server; in that case, the Windows Server hosting Master Data Services acts as a front-end server.

Then, in order to administer the roles of your Master Data Services, you also need to install the Management Tools.

At the features installation step, you have to select:

  • Database Engine Services
  • Management Tools
  • Master Data Services



At this point, Master Data Services should be installed with all the needed prerequisites.



However, Master Data Services cannot be used without configuring it. Three main steps need to be performed through the MDS Configuration Manager:

  • First, you have to create a MDS database
  • Then, you have to create a MDS web application hosted in IIS
  • Finally, you have to link the MDS database with the MDS web application

Deferrable RI – 2

Jonathan Lewis - Sun, 2014-07-13 12:46

A question came up on Oracle-L recently about possible locking anomalies with deferrable referential integrity constraints.

An update by primary key is taking a long time; the update sets several columns, one of which is the child end of a referential integrity constraint. A check of v$active_session_history shows lots of waits for “enq: TX – row lock contention” in mode 4 (share), and many of these waits also identify the current object as the index that was created to avoid the “foreign key locking” problem on this constraint (though many of the waits show current_obj# as -1). A possibly key feature of the issue is that the foreign key constraint is defined as “deferrable initially deferred”. The question is: could such a constraint result in TX/4 waits?

My initial thought was that if the constraint was deferrable such waits were unlikely; there would have to be other features coming into play.

Of course, when the foreign key is NOT deferrable it’s easy to set up cases where a TX/4 appears: for example, you insert a new parent value without issuing a commit, then I insert a new matching child; at that point my session will wait for your session to commit or roll back. If you commit, my insert succeeds; if you roll back, my session raises an error (ORA-02291: integrity constraint (schema_name.constraint_name) violated – parent key not found). But if the foreign key is deferred, the non-existence (or potential existence) of the parent should not matter. If the constraint is deferrable, though, the first guess would be that you could get away with things like this so long as you fixed up the data in time for the commit.

I was wrong. Here’s a little example:

create table parent (
	id	number(4),
	name	varchar2(10),
	constraint par_pk primary key (id)
);

create table child(
	id_p	number(4)
		constraint chi_fk_par
		references parent
		deferrable initially deferred,
	id	number(4),
	name	varchar2(10),
	constraint chi_pk primary key (id_p, id)
);

insert into parent values (1,'Smith');
insert into parent values (2,'Jones');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

insert into child values(2,1,'Jack');
insert into child values(2,2,'Jill');



pause Press return

update child set id_p = 3 where id_p = 2 and id = 2;

If you don’t do anything in another session during the pause then the update will succeed – but a subsequent commit will fail unless you insert parent 3 before committing. But if you take advantage of the pause to use another session to insert parent 3 first, the update will then hang waiting for the parent insert to commit or roll back – and what happens next may surprise you. Basically, the deferrability doesn’t protect you from the side effects of conflicting transactions.

The variations on what can happen next (insert the parent elsewhere, commit or rollback) are interesting and left as an exercise.

I was slightly surprised to find that I had had a conversation about this sort of thing some time ago, triggered by a comment to an earlier post. If you want to read a more thorough investigation of the things that can happen and how deferrable RI works then there’s a good article at this URL.


RAC Commands : 1 -- Viewing Configuration

Hemant K Chitale - Sun, 2014-07-13 05:58
In 11gR2

Viewing the configuration of a RAC database

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/rdbms/11.2.0
Oracle user: oracle
Spfile: +DATA1/RACDB/spfileRACDB.ora
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACSP
Database instances:
Disk Groups: DATA1,FRA,DATA2
Mount point paths:
Services: MY_RAC_SVC
Type: RAC
Database is policy managed

So, we see that :
a) The database name is RACDB
b) It is a Policy Managed database (not Administrator Managed)
c) It is dependent on 3 ASM Disk Groups DATA1, DATA2, FRA
d) There is one service called MY_RAC_SVC configured
e) The database is in the  RACSP server pool
f) The database is configured to be Auto-started when Grid Infrastructure starts

Viewing the configuration of a RAC service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Service is enabled on nodes:
Service is disabled on nodes:

So, we see that :
a) The service name is MY_RAC_SVC
b) The UNIFORM cardinality means that it is to run on all active nodes in the server pool
c) The server-side connection load balancing goal is LONG (for long running sessions)

Viewing the configuration of Server Pools

-sh-3.2$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: RACSP
Importance: 0, Min: 0, Max: 2
Candidate server names:

So we see that :
a) The RACSP server pool is the only user-created (named) server pool; Free and Generic are built-in pools
b) This server pool has a max of 2 nodes

Categories: DBA Blogs

A response to Bloomberg article on UCLA student fees

Michael Feldstein - Sat, 2014-07-12 13:56

Megan McArdle has an article that was published in Bloomberg this week about the growth of student fees. The setup of the article was based on a new “$4 student fee to pay for better concerts”.

To solve this problem, UCLA is introducing a $4 student fee to pay for better concerts. That illuminates a budgeting issue in higher education — and indeed among human beings more generally.

That $4 is not a large fee. Even the poorest student can probably afford it. On the other hand, collectively, UCLA’s student fees are significant: more than $3,500, or about a quarter of the mandatory cost of attending UCLA for a year.

Those fees are made up of many items, each trivial individually. Only collectively do they become a major source of costs for students and their families and potentially a barrier to college access for students who don’t have an extra $3,500 lying around.

I’m sympathetic to the argument that college often costs too much and that institutions can play revenue games to avoid the appearance of raising tuition. I also think that Megan is one of the better national journalists on the topic of the higher education finances.


However, this article is somewhat sloppy in a way that harms the overall message. I would like to clarify the student fees data to help show the broader point.

Let’s look at the actual data from UCLA’s web site. I assume that Megan is basing this analysis on in-state undergraduate full-time students. The data is listed per quarter, and UCLA has three quarters in a full academic year. I have summarized it below, summing the three quarters into yearly data, and you can:

  • Hover over each measure to see the fee description from UCLA’s fee description page;
  • Click on each category that I added to see the component fees;
  • Sort either column; and
  • Choose which rows to keep or exclude.
  • NOTE: Static image above if you cannot see interactive graphics

UCLA Fees for In-State Undergrads (Total $3,749.97)

Some Clarifications Needed
  • The total of non-tuition fees is $3,750 per year, not $3,500; however, Megan is right that this represents “about a quarter of the mandatory cost of attending UCLA for a year” ($3,750 out of $14,970).
  • The largest single fee is the UC health insurance fee (UC-SHIP), which is more than half of the total non-tuition fees. This fact (noted by Michael Berman on Twitter) should have been pointed out, given the significant percentage of the total.
  • With the UC-SHIP at $1,938 and the student services fee at $972, I hardly consider these fees “trivial individually”.
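Assuming the figures above, the percentages are easy to sanity-check with a couple of lines of Python (a back-of-the-envelope sketch, not UCLA's published math):

```python
# Amounts quoted above (per academic year)
total_fees = 3750.0        # total non-tuition fees, rounded from $3,749.97
mandatory_cost = 14970.0   # total mandatory cost of attending UCLA
uc_ship = 1938.0           # UC health insurance fee (UC-SHIP)
student_services = 972.0   # student services fee

share = total_fees / mandatory_cost
print(f"Fees are {share:.1%} of the mandatory cost")           # ~25%: "about a quarter"
print(f"UC-SHIP is {uc_ship / total_fees:.0%} of total fees")  # more than half
```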
Broader Point on Budgeting

The article’s broader point is that using extraneous fees to create additional revenue leads to a flawed budgeting process.

As I’ve written before, this is a common phenomenon that you see among people who have gotten themselves into financial trouble — or, for that matter, people who are doing OK but complain that they don’t know where the money goes and can’t save for the big-ticket items they want. They consider each purchase individually, rather than in the context of a global budget, which means that they don’t make trade-offs. Instead of asking themselves “Is this what I want to spend my limited funds on, or would I rather have something else?” they ask “Can I afford this purchase on my income?” And the answer is often “Yes, I can.” The problem is that you can’t afford that purchase and the other 15 things that you can also, one by one, afford to buy on your income. This is how individual financial disasters occur, and it is also one way that college tuition is becoming a financial disaster for many families.

This point is very important. Look at the Wooden Center fee, described here (or by hovering over chart):

Covers repayment of the construction bond plus the ongoing maintenance and utilities costs for the John Wooden Recreation Center. It was approved by student referendum. The fee is increased periodically based on the Consumer Price Index.

To take Megan’s point, this fee “was approved by student referendum”, which means that UCLA has moved budgeting responsibility away from a holistic approach to saying “the students voted on it”. This makes no financial sense, nor does it make sense to shift bond repayment and maintenance and utilities cost onto student fees.

While this article had some sloppy reporting in terms of accurately describing the student fees, it does highlight an important aspect of the budget problems in higher education and how the default method is to shift the costs to students.

The post A response to Bloomberg article on UCLA student fees appeared first on e-Literate.

Downloading VirtualBox VM “Oracle Enterprise Manager 12cR4″

Marco Gralike - Sat, 2014-07-12 11:10
Strangely enough, those cool VirtualBox VM downloads are nowadays a bit scattered across different Oracle sites and others. So in all that...

Read More

ADF 12c (12.1.3) Line Chart Overview Feature

Andrejus Baranovski - Sat, 2014-07-12 10:51
ADF 12c (12.1.3) ships with completely rewritten DVT components; there are no graphs anymore - they are called charts now. But there is much more to it than a name change. The previous DVT components still run fine, but the JDeveloper wizards no longer support them. You should check the ADF 12c (12.1.3) developer guide for more details; in this post I will focus on the line chart overview feature. Keep in mind that the new DVT chart components do not work well with the Google Chrome v.35 browser (supposed to be fixed in Google Chrome v.36) - check the JDeveloper 12c (12.1.3) release notes.

The sample application is based on Employees data and displays a line chart for the employee salary, based on his job. Two additional lines are displayed for maximum and minimum job salaries:

Line chart is configured with zooming and overview support. User can change overview window and zoom into specific area:

This helps a lot to analyse charts with large number of data points on X axis. User can zoom into peaks and analyse data range:

One important hint about the new DVT charts - components should stretch automatically. Keep in mind that the parent component (surrounding the DVT chart) should be stretchable. As you can see, I have set type = 'stretch' for the panel box surrounding the line chart:

The previous DVT graphs had special binding elements in the Page Definition; the new DVT charts use regular table bindings - nothing extra:

Line chart in the sample application is configured with zooming and scrolling (there are different modes available - live, on demand with delay):

Overview feature is quite simple to enable - it is enough to add DVT overview tag to the line chart, and it works:

R12.2 :Modulus Check Validations for Bank Accounts

OracleApps Epicenter - Sat, 2014-07-12 08:45
The existing bank account number validations for domestic banks only check the length of the bank account number. These validations are performed during the entry and update of bank accounts. With R12.2, the account number validations for the United Kingdom are enhanced to include a modulus check alongside the length checks. Modulus checking is the process of [...]
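To illustrate how a modulus check differs from a pure length check, here is a simplified Python sketch. The weights and the modulus below are illustrative only; the real UK rules (the Vocalink specification that underpins such validations) select the weighting and modulus per sort-code range, and this is not Oracle's implementation:

```python
def modulus11_check(account_number, weights=(8, 7, 6, 5, 4, 3, 2, 1)):
    """Weighted-sum check: valid if the total divides evenly by 11.

    Illustrative only: real UK validation picks weights and modulus
    (10, 11, or double-alternate) per sort-code range.
    """
    digits = [int(c) for c in account_number if c.isdigit()]
    if len(digits) != len(weights):
        return False  # the length check still applies first
    total = sum(d * w for d, w in zip(digits, weights))
    return total % 11 == 0

print(modulus11_check("12345679"))  # True: weighted sum 121 = 11 * 11
print(modulus11_check("12345678"))  # False: weighted sum 120 is not divisible by 11
```

A transposed or mistyped digit changes the weighted sum, so a modulus check catches many errors that a length check alone cannot.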
Categories: APPS Blogs

ORA-09925: Unable to create audit trail file

Oracle in Action - Sat, 2014-07-12 03:33

RSS content

I received this error message when I started my virtual machine and tried to logon to my database as sysdba to startup the instance.
[oracle@node1 ~]$ sqlplus / as sysdba

ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925

- I rebooted my machine and got the following messages, which pointed to errors encountered during the filesystem check and instructed me to run fsck manually.

[root@node1 ~]# init 6

Checking filesystems

(i.e., without -a or -p options)
*** An error occurred during the filesystem check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

– I entered the password for root to initiate the filesystem check. As a result, I was prompted a number of times to allow fixing of various filesystem errors.

(Repair filesystem) 1 # fsck

- After all the errors had been fixed, the filesystem check was restarted

Restarting e2fsck from the beginning...

/: ***** REBOOT LINUX *****

- After the filesystem had finally checked out as correct, I exited so the reboot could continue.

(Repair filesystem) 2 # exit

– After the reboot, I could successfully connect to my database as sysdba .

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release Production on Sat Jul 12 09:21:52 2014

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.


I hope this post was useful.

Your comments and suggestions are always welcome.


Related Links:


Database Index





Copyright © ORACLE IN ACTION [ORA-09925: Unable to create audit trail file], All Right Reserved. 2014.

The post ORA-09925: Unable to create audit trail file appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Swimming Progress

Tim Hall - Sat, 2014-07-12 02:59

While I was at BGOUG I went for swim each morning before the conference. That got me to thinking, perhaps I should start swimming again…

It’s been 4 weeks since I got back from the conference and I’ve been swimming every morning. It was a bit of a struggle at first. I think it took me 2-3 days to work up to a mile (1600M – about 9M short of a real mile). Since then I’ve been doing a mile each day and it’s going pretty well.

I’m pretty much an upper body swimmer at the moment. I kick my legs just enough to keep them from sinking, but don’t really generate any forward thrust with them. At this point I’m concentrating on my upper body form. When I think about it, my form is pretty good. When I get distracted, like when I am having to pass people, it breaks down a little. I guess you could say I am in a state of “conscious competence“. Over the next few weeks this should set in a bit and I can start working on some other stuff. It’s pointless to care too much about speed at this point because if my form breaks down I end up having a faster arm turnover, but use more effort and actually swim slower. The mantra is form, form, form!

Breathing is surprisingly good. I spent years as a left side breather (every 4th stroke). During my last bout of swimming (2003-2008) I forced myself to switch to bilateral breathing, but still felt the left side was more natural. Having had a 6 year break, I’ve come back and both sides feel about the same. If anything, I would say my right side technique is slightly better than my left. Occasionally I will throw in a length of left-only or right-only (every 4th stroke) breathing for the hell of it, but at the moment every 3rd stroke is about the best option for me. As I get fitter I will start playing with things like every 5th stroke and lengths of no breathing just to add a bit of variety.

Turns are generally going pretty well. Most of the time I’m fine. About 1 in 20 I judge the distance wrong and end up having a really flimsy push off. I’m sure my judgement will improve over time.

At this point I’m taking about 33 minutes to complete a mile. The world record for 1500M short course (25M pool) is 14:10. My first goal is to get my 1600M time down to double the 1500M world record. Taking 5 minutes off my time seems like quite a big challenge, but I’m sure as I bring my legs into play and my technique improves my speed will increase significantly.

As I get more into the swing of things I will probably incorporate a bit of interval training, like a sprint length, followed by 2-3 at a more sedate pace. That should improve my fitness quite a lot and hopefully improve my speed.

For a bit of fun I’ve added a couple of lengths of butterfly after I finish my main swim. I used to be quite good at butterfly, but at the moment I’m guessing the life guards think I’m having a fit. It would be nice to be able to bang out a few lengths of that and not feel like I was dying. :)

I don’t do breaststroke any more, as it’s not good for my hips. Doing backstroke in a pool with other people in the lane sucks, so I can’t be bothered with that. Maybe on days when the pool is quieter I will work on it a bit, but for now the main focus is crawl.



PS. I reserve the right to get bored, give up and eat cake instead at any time… :)

Swimming Progress was first posted on July 12, 2014 at 9:59 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Finished first pass through Alapati 12c OCP upgrade book

Bobby Durrett's DBA Blog - Fri, 2014-07-11 17:29

I just finished reading Sam Alapati’s 12c OCP upgrade book for the first time, and I really like it because it covers content I hadn’t discovered through my study of the Oracle manuals.  It also does a good job explaining some things that Oracle’s manuals leave unclear.

After reading each chapter I took the end of chapter test and got between 60% and 75% of the questions right.  Next I plan to take the computer based test that was on the CD that came with the book and which covers both parts of the upgrade exam.

I did find minor errors throughout the book, but I still found it very useful, especially having already studied the same topics on my own without a study guide like this one to direct me.  The author’s insights into the test and the material it covers add value because they guide me to the areas that I need to focus on.

– Bobby

Categories: DBA Blogs

OTN Latin America Tour, 2014

Hans Forbrich - Fri, 2014-07-11 17:12
The dates and the speakers for the Latin America Tour have been announced.

Categories: DBA Blogs

EMC XtremIO – The Full-Featured All-Flash Array. Interested In Oracle Performance? See The Whitepaper.

Kevin Closson - Fri, 2014-07-11 16:32

NOTE: There’s a link to the full article at the end of this post.

I recently submitted a manuscript to the EMC XtremIO Business Unit covering some compelling lab results from testing I concluded earlier this year. I hope you’ll find the paper interesting.

There is a link to the full paper at the bottom of this blog post. I’ve pasted the executive summary here:

Executive Summary

Physical I/O patterns generated by Oracle Database workloads are well understood. The predictable nature of these I/O characteristics has historically enabled platform vendors to implement widely varying I/O acceleration technologies including prefetching, coalescing transfers, tiering, caching and even I/O elimination. However, the key presumption central to all of these acceleration technologies is that there is an identifiable active data set. While it is true that Oracle Database workloads generally settle on an active data set, the active data set for a workload is seldom static; it tends to move based on easily understood factors such as data aging or business workflow (e.g., “month-end processing”) and even the data source itself. Identifying the current active data set and keeping up with movement of the active data set is complex and time-consuming due to variability in workloads, workload types, and number of workloads. Storage administrators constantly chase the performance hotspots caused by the active data set.

All-Flash Arrays (AFAs) can completely eliminate the need to identify the active dataset because of the ability of flash to service any part of a larger data set equally. But not all AFAs are created equal.

Even though numerous AFAs have come to market, obtaining the best performance required by databases is challenging. The challenge isn’t just limited to performance. Modern storage arrays offer a wide variety of features such as deduplication, snapshots, clones, thin provisioning, and replication. These features are built on top of the underlying disk management engine, and are based on the same rules and limitations favoring sequential I/O. Simply substituting flash for hard drives won’t break these features, but neither will it enhance them.

EMC has developed a new class of enterprise data storage system, the XtremIO flash array, which is based entirely on flash media. XtremIO’s approach was not simply to substitute flash in an existing storage controller design or software stack, but rather to engineer an entirely new array from the ground up to unlock flash’s full performance potential and deliver array-based capabilities that are unprecedented in the context of current storage systems.

This paper will help the reader understand Oracle Database performance bottlenecks and how XtremIO AFAs can help address such bottlenecks with their unique capability to deal with constant variance in the I/O profile and load levels. We demonstrate that it takes a highly flash-optimized architecture to ensure the best Oracle Database user experience. Please read more:  Link to full paper from

Filed under: All Flash Array, Flash Storage for Databases, oracle, Oracle I/O Performance, Oracle performance, Oracle Performnce Monitoring, Oracle SAN Topics, Oracle Storage Related Problems

Best of OTN - Week of July 6th

OTN TechBlog - Fri, 2014-07-11 11:13

Virtual Technology Summit - Content is now OnDemand!

In this four track virtual event attendees had the opportunity to learn firsthand from Oracle ACEs, Java Champions, and Oracle product experts, as they shared their insight and expertise on Java, systems, database and middleware. A replay of the sessions is now available for your viewing.

Architect Community

In addition to interviews with tech experts and community leaders, the OTN ArchBeat YouTube Channel also features technical videos, most pulled from various OTN online technical events. The following are the three most popular of those tech videos for the past seven days.

Debugging and Logging for Oracle ADF Applications
We're only human. Regardless how much work Oracle ADF does for us, or how powerful the JDeveloper IDE is, the inescapable truth is that as developers we will still make mistakes and introduce bugs into our ADF applications. In this video Oracle ADF Product Manager Chris Muir explores the sophisticated debugging tooling JDeveloper provides.

Developer Preview: Oracle WebLogic 12.1.3
Oracle WebLogic 12.1.3 includes some exciting developer-centric enhancements. In this video Steve Button focuses on some of the more interesting updates around Java EE 7 features and examines how they will affect your development process.

Best Practices in Oracle ADF Development
In this video Frank Nimphius presents a brown-bag of ideas, hints and best practices that will help you to build better ADF applications.

Friday Funny
"I always wanted to be somebody, but now I realize I should have been more specific." - Lily Tomlin

Java Community 

Codename One & Java Code Geeks are giving away free JavaOne Tickets (worth $3,300)! Read More!

@Java RT @JDeveloper: Running Oracle ADF application High availability (HA)

Tech Article: Leap Motion and JavaFX

Database Community

OTN DBA/DEV Watercooler Blog - Database Application Development VM--Get It Now

Oracle DB Dev Facebook Posts

Systems Community

New Tech Article - Playing with ZFS Shadow Migration

New - Hangout: Which Virtualization Should I Use for What? with Brian Bream

Oracle-PeopleSoft is pleased to announce the general availability of PeopleTools 8.54

PeopleSoft Technology Blog - Fri, 2014-07-11 10:51
PeopleTools is proud to announce the release of PeopleTools 8.54. This is a landmark release for PeopleSoft, one that offers remarkable advances to our applications and our customers. We are particularly excited about the new PeopleSoft Fluid User Experience. With it, our applications will offer a UI that is simple and intuitive, yet highly productive, and that can be used on devices ranging from laptops to tablets and smartphones.
We’ve also made important improvements in reporting and analytics, life-cycle management, security, integration technology, platforms and infrastructure, and accessibility.

To get the details about everything this wonderful new release has to offer, visit these sites:

  • Release Notes
  • Release Value Proposition
  • Cumulative Feature Overview Tool
  • Installation Guides
  • Certification Table
  • Browser Compatibility Guide
  • Licensing Notes

Today, PeopleTools 8.54 is Generally Available for new installations. Customers that want to upgrade to 8.54 from earlier releases will be able to do so in the near future, when the 02 patch is available.
Many of our customers have shown interest in Fluid and have asked us the best way to get productive quickly. Our answer is to use the working examples they will find in the upcoming PeopleSoft 9.2 application images.

E-Business Suite Applications Technology Group (ATG) - WebCast

Chris Warticki - Fri, 2014-07-11 10:20

Thursday July 17, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST


  • Analyzer: E-Business Reports & Printing
  • E-Business Reports Analysis
  • Recommended Reports Patching
  • Reports Profile Options
  • E-Business Printing Analysis
  • Recommended Printing Patching
  • Printer Profile Options
  • Best Practices

Details & Registration : Note 1681612.1

If you have any questions about the schedule, or if you have a suggestion for a future Advisor Webcast, please send an e-mail to Ruediger Ziegler.

Oracle users may require remote database management

Chris Foot - Fri, 2014-07-11 10:01

A well-known security researcher recently discovered a bug in one of Oracle's key security features, which may prompt some of its customers to seek active database monitoring solutions. 

A good start, but needs work 
According to Dark Reading, David Litchfield, one of the world's best-known database security experts, recently discovered a couple of flaws in Oracle's data redaction feature for its 12c servers. The defensive measure allows database administrators to mask sensitive information from unauthorized viewers.

Although Litchfield regarded the feature as a good deployment, he asserted that a highly skilled hacker would be capable of bypassing the function. He noted that employing a type of Web-based SQL injection is a feasible way for an unauthorized party to gain access to information. Litchfield is expected to demonstrate this technique among others at Black Hat USA in Las Vegas next month. 

"To be fair, it's a good step in the right direction," said Litchfield, as quoted by the source. "Even if a patch isn't available from Oracle, it's going to protect you in 80 percent of the cases. No one really knows how to bypass it at this point."

Constant surveillance
Although Oracle is working to mitigate this problem, enterprises need to wonder what's going to protect them from the other 20 percent of instances. Having a staff of remote database support professionals actively monitor all server activity is arguably the most secure option available. 

Specifically, Oracle customers require assistance from those possessing the wherewithal to defend databases from SQL injection attacks. Network World outlined a few situations in which this invasive technique has caused harrowing experiences for retailers:

  • In the winter of 2007, malware was inserted into Heartland Payment Systems' transaction processing system, resulting in 130 million stolen card numbers. 
  • In early November 2007, Hannaford Brothers sustained a malicious software attack that led to the theft of 4.2 million card access codes.
  • Between January 2011 and March 2012, a series of SQL injection endeavors against Global Payment Systems incited $92.7 million in losses. 
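The common thread in these incidents is untrusted input being concatenated directly into SQL text. As a minimal, hypothetical sketch (not tied to any of the systems above, and using SQLite purely for illustration), here is the difference between a concatenated query and a parameterized one:

```python
import sqlite3

# A toy table standing in for sensitive payment data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-xxxx')")

# Classic injection payload: closes the string literal, then ORs in a tautology.
malicious = "alice' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the WHERE clause,
# so the tautology matches every row in the table.
rows_vuln = conn.execute(
    "SELECT card FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL,
# so the query looks for a user literally named "alice' OR '1'='1".
rows_safe = conn.execute(
    "SELECT card FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(rows_vuln), len(rows_safe))
```

The same bind-variable discipline applies in Oracle (e.g. via bind placeholders in OCI or JDBC prepared statements); the point is that the query shape is fixed before user data ever arrives.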

Take the simple steps 
Network World acknowledged the importance of treating routine processes as critical safeguards. For example, forgetting to close a database after testing the system for vulnerabilities is negligence no organization can afford. 

In addition, it's imperative that enterprises understand the mapping of their database architectures. This protocol can be realized when organizations employ consistent surveillance of all activity, allowing professionals to see which channels are the most active and what kind of data is flowing through them. 

The post Oracle users may require remote database management appeared first on Remote DBA Experts.

A Ringleader Proxy for Sporadically-Used Web Applications

Pythian Group - Fri, 2014-07-11 08:46

As you might already know, I come up with my fair share of toy web applications.

Once created, I typically throw them on my server for a few weeks but, as the resources of good ol’ Gilgamesh are limited, they eventually have to be turned off to make room for the next wave of shiny new toys. Which is a darn shame, as some of them can be useful from time to time. Sure, running all webapps all the time would be murder for the machine, but there should be a way to only fire up the application when it’s needed.

Of course there’s already a way of doing just that. You might have heard of it: it’s called CGI. And while it’s perfectly possible to run PSGI applications under CGI, it’s also… not quite perfect. The principal problem is that there is no persistence at all between requests (of course, with the help of mod_perl there could be, but that would defeat the purpose), so it’s not exactly snappy. Although, to be fair, it’d probably still be fast enough for most small applications. But still, it feels clunky. Plus, I’m just plain afraid that if I revert to using CGI, Sawyer will burst out of the wall like a vengeful Kool-Aid Man and throttle the life out of me. He probably wouldn’t, but I prefer not to take any chances.

So I don’t want single executions and I don’t want perpetual running. What I’d really want is something in between. I’d like the applications to be disabled by default but, when a request comes along, to be awakened and run for as long as there is traffic. And only once the traffic has abated for a reasonable amount of time do I want the application to be turned off once more.

The good news is that it seems that Apache’s mod_fastcgi can fire dynamic applications upon first request. If that’s the case, then the waking-up part of the job comes for free, and the shutting down is merely a question of periodically monitoring the logs and killing processes when inactivity is detected.
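That monitor-and-kill loop could be sketched like so (in Python for brevity; the log paths, pid bookkeeping, and `reap_idle` helper are hypothetical, not part of the original post):

```python
import os
import signal
import time

SHUTDOWN_DELAY = 10  # seconds of silence before an app is shut down


def is_idle(log_path, delay=SHUTDOWN_DELAY):
    """An app counts as idle when its access log hasn't been
    touched within `delay` seconds."""
    try:
        return time.time() - os.path.getmtime(log_path) > delay
    except OSError:
        return False  # no log yet: nothing running, nothing to reap


def reap_idle(apps):
    """`apps` maps an access-log path to the pid of the app's
    FastCGI process; idle processes get a polite SIGTERM."""
    for log_path, pid in apps.items():
        if is_idle(log_path):
            os.kill(pid, signal.SIGTERM)
```

A cron job (or a forever-sleeping daemon) calling `reap_idle` every few seconds would cover the shutting-down half of the problem.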

The bad news is that I only heard that after I was already halfway done shaving that yak my own way. So instead of cruelly dropping the poor creature right there and then, abandoning it with a punk-like half-shave, I decided to go all the way and see how a Perl alternative would look.

It’s all about the proxy

My first instinct was to go with Dancer (natch). But a quick survey of the tools available revealed something even more finely tuned to the task at hand: HTTP::Proxy. That module does exactly what it says on the tin: it proxies HTTP requests, and allows you to fiddle with the requests and responses as they fly back and forth.

Since I own my domain, all my applications run on their own sub-domain name. With that setting, it’s quite easy to have all my sub-domains point to the port running that proxy and have the waking-up-if-required and dispatch to the real application done as the request comes in.

use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time = 5;
my $shutdown_delay = 10;

# the real sub-domain names were lost in extraction;
# 'foo.example.com' and 'bar.example.com' are placeholders
my %services = (
    'foo.example.com' => $foo_config,
    'bar.example.com' => $bar_config,
);


$proxy->push_filter( request => 
    HTTP::Proxy::HeaderFilter::simple->new( sub {

            my( $self, $headers, $request ) = @_;

            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $service->is_running ) {
                $service->start;   # reconstructed: wake the service up
                sleep 1;
            }

            # store the latest access time
            # (how exactly, we'll see in a moment)
    })
);
With this, we already have the core of our application, and only need a few more pieces, and details to iron out.

Enter Sandman

An important one is how to detect if an application is running, and when it goes inactive. For that I went for a simple mechanism. Using CHI to provides me with a persistent and central place to keep information for my application. As soon as an application comes up, I store the time of the current request in its cache, and each time a new request comes in, I update the cache with the new time. That way, the existence of the cache tells me if the application is running, and knowing if the application should go dormant is just a question of seeing if the last access time is old enough.

use CHI;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver   => 'File',
    root_dir => 'cache',
);

# when checking if the host is running
unless ( $chi->get($host) ) {
    $service->start;   # reconstructed: a cache miss means it's down
    sleep 1;
}

# and storing the access time becomes
$chi->set( $host => time );

# to check periodically, we fork a sub-process 
# and we simply endlessly sleep, check, then sleep
# some more

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $s = shift;

    my $time = $chi->get($s);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    # idle for too long: shut it down (reconstructed)
    $services{$s}->stop;
    $chi->remove($s);
}


Minding the applications

The final remaining big piece of the puzzle is how to manage the launching and shutting down of the applications. We could do it in a variety of ways, beginning by using plain system calls. Instead, I decided to leverage the service manager Ubic. With the help of Ubic::Service::Plack, setting a PSGI application is as straightforward as one could wish for:

use Ubic::Service::Plack;

Ubic::Service::Plack->new(
    server      => "FCGI",
    server_args => { listen => "/tmp/foo_app.sock",
                     nproc  => 5 },
    app         => "/home/web/apps/foo/bin/",
    port        => 4444,
);

Once the service is defined, it can be started/stopped from the CLI. And, which is more interesting for us, straight from Perl-land:

use Ubic;

my %services = (
    # sub-domain (placeholder)   # ubic service name (placeholder)
    'foo.example.com'         => 'foo_app',
    'bar.example.com'         => 'bar_app',
);

$_ = Ubic->service($_) for values %services;

# and then to start a service
$services{'foo.example.com'}->start;

# or to stop it
$services{'foo.example.com'}->stop;

# other goodies can be gleaned too, like the port...
my $port = $services{'foo.example.com'}->port;

Now all together

And that’s all we need to get our ringleader going. Putting it all together, and tidying it up a little bit, we get:

use 5.20.0;

use experimental 'postderef';

use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

use Ubic;

use CHI;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time      = 5;
my $shutdown_delay = 10;

my $ubic_directory = '/Users/champoux/ubic';

# placeholder names; the original sub-domain and
# service names were lost in extraction
my %services = (
    'foo.example.com' => 'foo_app',
);

$_ = Ubic->service($_) for values %services;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver   => 'File',
    root_dir => 'cache',
);

start_sandman();   # reconstructed: launch the watchdog before proxying

$proxy->push_filter( request => HTTP::Proxy::HeaderFilter::simple->new(sub{
            my( $self, $headers, $request ) = @_;
            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $chi->get($host) ) {
                $service->start;   # reconstructed: wake the service up
                sleep 1;
            }

            # always store the latest access time
            $chi->set( $host => time );
}));

$proxy->start;   # reconstructed: hand control over to the proxy

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $service = shift;

    my $time = $chi->get($service);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    # idle for too long: shut it down (reconstructed)
    $services{$service}->stop;
    $chi->remove($service);
}

It’s not yet complete. The configuration should go in a YAML file, we should have some more safeguards in case the cache and the real state of the application fall out of sync, and the script itself should be managed by Ubic too to make everything Circle-of-Life-perfect. Buuuuut as it is, I’d say it’s already a decent start.
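A hypothetical sketch of that YAML file (all names invented) might look like:

```yaml
# services.yml -- hypothetical layout, not from the original post
proxy_port: 3000
shutdown_delay: 10            # seconds of inactivity before shutdown
services:
  foo.example.com: foo_app    # sub-domain -> ubic service name
  bar.example.com: bar_app
```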

Categories: DBA Blogs

Log Buffer #379, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-11 07:34

During summer in the Northern hemisphere, and winter in the Southern, bloggers are solving key problems while sitting beside the bonfire or enjoying a bbq. This Log Buffer Edition shares a few of those solutions.


Oracle:

3 Key Problems To Solve If You Want A Big Data Management System

OpenWorld Update: Content Catalog NOW LIVE!

Interested in Showcasing your Solutions around Oracle Technologies at Oracle OpenWorld?

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

What You Need to Know about OBIEE

SQL Server:

Interoperability between Microsoft and SOA Suite 12c

This article describes a way to speed up various file operations performed by SQL Server.

The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

Stairway to T-SQL: Beyond The Basics Level 8: Coding Shortcuts using += and -= Operators

Microsoft Azure Diagnostics Part 2: Basic Configuration of Azure Cloud Service Diagnostics


MySQL:

MySQL Enterprise Monitor 2.3.18 has been released

Harnessing the power of master/slave clusters to operate data-driven businesses on MySQL

NoSQL Now! Conference – coming to San Jose, CA this August!

Manually Switch Slaves to new Masters in mySQL 5.6 (XTRADB 5.6)

How to Configure ClusterControl to run on nginx

Categories: DBA Blogs

powershell goodies for Active Directory

Laurent Schneider - Fri, 2014-07-11 07:04

What are my groups?

PS> Get-ADPrincipalGroupMembership lsc |
      select -ExpandProperty "name"
Domain Users

Who is a member of that group?

PS> Get-ADGroupMember oracle |
      select -ExpandProperty "name"
Laurent Schneider
Alfred E. Newmann
Scott Tiger

What is my phone number?

PS> (get-aduser lsc -property MobilePhone).MobilePhone
+41 792134020

This works like a charm on your Windows 7 PC.
1) Download and install the Remote Server Administration Tools
2) Activate the Windows feature called “Active Directory Module for Windows PowerShell” under Control Panel > Programs
3) PS> Import-Module ActiveDirectory
Read the procedure there : how to add active directory module in powershell in windows 7