Feed aggregator

Lock Timeout Error While Creating Database

Pythian Group - Mon, 2014-07-14 08:07

Recently I worked on an issue where a third-party application was failing during installation. Below is the error returned by the application.

Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)

The application was failing while trying to create a database. It appeared to have a default timeout of about 60 seconds, after which it failed because the command had not yet returned.

I tried creating a test database directly from SQL Server Management Studio and noticed that it was taking a long time as well. When I checked sys.sysprocesses, I found that the session creating the database was encountering IO-related waits.
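
As a quick check, something like the following shows what that session is waiting on (a minimal sketch; spid 57 is a hypothetical session id, substitute the spid of the session running the CREATE DATABASE):

-- what is the CREATE DATABASE session waiting on?
SELECT spid, status, lastwaittype, waittime, cmd
FROM sys.sysprocesses
WHERE spid = 57;  -- hypothetical spid of the session creating the database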

Some of the reasons why creating a database might take a long time are:

  • IO bottleneck on the drive where we are creating the database files
  • Large size of Model database
  • Instant File Initialization is not enabled

I verified the size of the model database files and found that the data file was 5 GB and the log file was 1 GB. After reducing the data file to 1 GB and the log file to 512 MB, I was able to create the test database quickly.
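
For reference, here is a minimal sketch of checking and then shrinking the model database files (modeldev and modellog are the default logical file names; adjust them if yours differ):

USE model;
GO
-- size is reported in 8 KB pages
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;
GO
-- shrink the data file to roughly 1 GB and the log file to roughly 512 MB
DBCC SHRINKFILE (modeldev, 1024);
DBCC SHRINKFILE (modellog, 512);
GO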

We then restarted the installation of the application, and it completed successfully.

Categories: DBA Blogs

Script to Get the Create Date of an Object from Multiple Databases

Pythian Group - Mon, 2014-07-14 08:04

As a DBA, it is common to get a request to run scripts against production databases. However, there can be environments with multiple databases on the same instance against which the script needs to be run. I have seen an environment with 50+ databases that share the same schema but hold different data, each database serving a different customer.

When we get a request to run large scripts against many databases, an oversight can mean the script is missed against one or more of them. The requirement then is to verify that the script was executed against all the databases. One way to do this is to pick an object (stored procedure, table, view, or function) created as part of the script execution, get the create date of that object, and verify that it shows the date and time when the script was run. The challenge is to get the create date of a specific object from all databases at once with little work.

Below is the code which fetches the create date of the specified object (stored procedure, table, view, or function) from all user databases on an instance. Set the object name in @ObjName (the Set @ObjName line near the top of the code), run the code, and verify from the output that the object was created at the time the script was run.

-- Script Originally Written By: Keerthi Deep | http://www.SQLServerF1.com/

Set NOCOUNT ON
Declare @ObjName nvarchar(300)
declare @dbn nvarchar(200)

Set @ObjName = 'Object_Name' -- Specify the name of the Stored Procedure/ Table/View/Function

create table #DatabaseList(dbname nvarchar(2000)) 

Insert into #DatabaseList select name from sys.sysdatabases
where name not in ('master', 'msdb', 'model','tempdb')
order by name asc 

--select * from #DatabaseList
Create table #Output_table (DB nvarchar(200), crdate datetime, ObjectName nvarchar(200))
declare c1 cursor for select dbname from #DatabaseList open c1
Fetch next from c1 into @dbn
WHILE @@FETCH_STATUS = 0
BEGIN

declare @Query nvarchar(2048)
Set @Query = 'select ''' + @dbn + ''' as DBName, crdate, [name] from ' + @dbn + '.sys.sysobjects where name = ''' + @ObjName + ''''
--print @Query
Insert into #Output_table Exec sp_executesql @Query

FETCH NEXT FROM c1 into @dbn
END
CLOSE c1
DEALLOCATE c1

select * from #Output_table
Drop table #Output_table
Drop table #DatabaseList

Limitations:
This works only if the object was created with a CREATE command; it will not detect a change made with an ALTER command.
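
One possible workaround (a sketch, not part of the original script) is to read modify_date from sys.objects, which is updated by ALTER as well as CREATE; the same dynamic SQL loop above could select it per database:

-- run in each database, or plug into the cursor query above
SELECT name, create_date, modify_date
FROM sys.objects
WHERE name = 'Object_Name';  -- the same name passed in @ObjName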

Any suggestions are welcome.

Categories: DBA Blogs

PeopleTools 8.54 Sandbox

Duncan Davies - Mon, 2014-07-14 08:00

Following the announcement on Friday that Tools 8.54 was available, we dived straight in (whilst cursing under our breath that it was released at 6pm on Friday night on the weekend of the World Cup final!).

We downloaded all of the media (which doesn’t take long on AWS – this is the 2.7GB Tools 8.54.01 patch coming down in about a minute and a half):

[Image: pt85401 download]

 

At the time of GA you can’t perform an upgrade yet (that’s in the 8.54.02 patch) so our only option was a clean install.

Over the course of the weekend we managed to get a complete install running. The install was fairly painless, although there were a couple of times that we had to stop and think.

One example is during the install of WebLogic (now rebranded as Fusion Middleware):

[Image: WebLogic installer]

 

You can see it reports both the OS and the Java version as incorrect. The OS we used was Windows 2012R2, which is a certified platform. The Java version we used was Java 8, whereas it was expecting Java 7. It let us continue, but during the installation of the PIA the Java version caused an issue and we had to remove v8 and install v7.

Fluid aside, this feels like a big release. There are lots of intriguing new items that we noticed during the install that we didn’t have time to investigate and we don’t recall from the RVP or Pre-Release notes.

We like the jazzy new sign-on page:

[Image: PT 8.54 sign-on page]

We’ll post more when we’ve dug a little deeper …


Next ORCLAPEX NOVA Meetup: July 17th

Scott Spendolini - Mon, 2014-07-14 06:37
The next Meetup for the ORCLAPEX NOVA Meetup Group will be this Thursday, July 17th at 7:00PM at Oracle Reston.  (Details: http://www.meetup.com/orclapex-NOVA/events/192592582/)

We're going to try the "Open Mic" format that has been wildly successful at KScope for the past few years.  The rules are quite simple: anyone can demonstrate their APEX-based solution for up to 10 minutes.  No need to reserve a spot or spend too much time planning.  And as always, no slides will be permitted - strictly demo.

Anyone and everyone is welcome to present - even if you have never presented before.  We're a welcoming group, so please don't be shy or feel intimidated!  I've actually seen quite an amazing selection of APEX-based solutions at prior open mic nights from people who had never presented before, so I encourage everyone to give it a try.

While there is WiFi available at Oracle, it's always best to have a local copy of your demonstration, just in case you can't connect or the network is having issues.

See you Thursday night!

Oracle Social Network: Enabling Employee Engagement with Oracle Cloud Services

Usable Apps - Mon, 2014-07-14 01:00

Julien Laforêt (@julienlaforet), Procurement Sales Consultant, Oracle Social Network Business Leader, and User Experience Sales Ambassador tells us how the Oracle Social Network Cloud Service enables employee engagement and helps attract and maintain talent.

A recent Gallup poll revealed that 70% of employees surveyed disliked their jobs or were so completely disengaged that even incentives and extras were not sufficient to improve how they felt about work.

Disengaged employees mean trouble for businesses in many areas.

Can't contribute, won't contribute. Disengaged employees mean the entire business suffers.

This statistic is important because we know that employees who are disengaged:

  • Do not exceed growth goals 
  • Rarely contribute to innovation  
  • Frequently limit their productivity 
  • Often share their thoughts and feelings on social media internally and externally, which contributes to negative publicity about the company 
  • Sometimes leave the company altogether, meaning a loss of investment, the cost of replacement, and so on 

But engaged and happy employees participate in ways that often yield more opportunities for the company itself to be successful, and open the door to other talented people who are attracted to similarly high levels of work satisfaction.

Social media is one key that may help improve employee satisfaction and engagement. In this blog, we look at how to use it to enable positive employee engagement and results.

Communicating Internally

For any employee, growing their skills and knowledge, promoting their expertise and their successes, and collaborating across teams and networks in the company are important activities. These foster community and collaboration, a feeling of belonging to something larger than their current projects.

Most of the time, these kinds of growth opportunities and acknowledgements have only been possible in small social circles, for example, direct-line managers and immediate colleagues. How do you promote your employees to people in other regions or services?

Oracle Social Network allows people to connect and share in scalable, effective ways. It allows employees to create networks and:

  • Showcase their value and share their successes and expertise  
  • Participate in the life of the company: create, innovate, participate, share, improve  
  • Reduce the time spent managing emails and performing administrative tasks

[Image: Oracle Social Network Cloud Service]

Showcasing Value and Sharing Successes

With public conversations on Oracle Social Network, an employee can create knowledge communities where all employees can contribute. Participating is easier than ever: employees use the Oracle Social Network global search to quickly find people and conversations that align with their areas of interest, and then follow those people or join those conversations at any time.

Creating and Innovating

Oracle Social Network makes collaboration easy, reinforcing employees, their knowledge, and their contributions as assets to a company. With employees creating, innovating, and proposing ideas, companies are more successful, employees feel valued, and the unique talents of contributors are showcased in teams engaged in product innovation and competitor-killing ideas.

Nike, Inc. follows a similar model for its customers. Customers can propose designs for shoes, and the best “liked” designs might be rewarded and launched into production, which encourages customer engagement. Applied to employees, the model shows that creative thinking and innovation are not limited to the R&D department in your company. Developing better products and refining processes through a collaborative social process is a game changer in a competitive business world.

Reducing Time Spent Managing Administrative Tasks

Oracle Social Network allows contributors to publish information to their walls and in conversations dedicated to a topic, where only followers and conversation members interested in the topic participate. Unlike email notes distributed widely, publishing information on walls allows employees to manage information overload and flooded inboxes. Information is published in dedicated, identifiable conversations, easily located by employees who are empowered to join in. The conversation history is accessible to newcomers so that they can find everything that has been said earlier, a practice that prevents knowledge from being lost when employees leave a company and enables new employees to ramp up quickly.

Only Oracle Social Network allows private, secure conversations to be initiated directly from business applications and enables two-way tracking: from Oracle Social Network conversations to the original transaction and data in the application, and from the application to conversations related to a transaction or related data.

[Image: Initiating an OSN Conversation from an Application task flow]

Initiating an OSN Conversation from inside an Oracle Applications Cloud task flow 

This practice removes the administrative overhead of exchanging information outside of the application's transactions, which occurs in disconnected silos, such as conversations between the requester and the buyer, project or transaction summary emails, reminders, and so on. Conversations in Oracle Social Network are contextual.

Employees can participate in business transaction-related conversations securely, too. Conversations may be private and information can be exchanged securely and in confidence among members of those conversations authorized to have such access.

[Image: OSN on simplified UI]

Oracle Social Network is a great solution for tablets too. Oracle Applications Cloud Release 8 simplified UI shown. 

When you start using Oracle Social Network, you just might find yourself asking, “How did we run our business before Oracle Social Network?” Just like smartphones revolutionized our everyday lives, Oracle Social Network will revolutionize your everyday work-life for the better.

21st Century DBMS success and failure

Curt Monash - Mon, 2014-07-14 00:37

As part of my series on the keys to and likelihood of success, I outlined some examples from the DBMS industry. The list turned out too long for a single post, so I split it up by millennia. The part on 20th Century DBMS success and failure went up Friday; in this one I’ll cover more recent events, organized in line with the original overview post. Categories addressed will include analytic RDBMS (including data warehouse appliances), NoSQL/non-SQL short-request DBMS, MySQL, PostgreSQL, NewSQL and Hadoop.

DBMS rarely have trouble with the criterion “Is there an identifiable buying process?” If an enterprise is doing application development projects, a DBMS is generally chosen for each one. And so the organization will generally have a process in place for buying DBMS, or accepting them for free. Central IT, departments, and — at least in the case of free open source stuff — developers all commonly have the capacity for DBMS acquisition.

In particular, at many enterprises either departments have the ability to buy their own analytic technology, or else IT will willingly buy and administer things for a single department. This dynamic fueled much of the early rise of analytic RDBMS.

Buyer inertia is a greater concern.

  • A significant minority of enterprises are highly committed to their enterprise DBMS standards.
  • Another significant minority aren’t quite as committed, but set pretty high bars for new DBMS products to cross nonetheless.
  • FUD (Fear, Uncertainty and Doubt) about new DBMS is often justifiable, about stability and consistent performance alike.

A particularly complex version of this dynamic has played out in the market for analytic RDBMS/appliances.

  • First the newer products (from Netezza onwards) were sold to organizations who knew they wanted great performance or price/performance.
  • Then it became more about selling “business value” to organizations who needed more convincing about the benefits of great price/performance.
  • Then the behemoth vendors became more competitive, as Teradata introduced lower-price models, Oracle introduced Exadata, Sybase got more aggressive with Sybase IQ, IBM bought Netezza, EMC bought Greenplum, HP bought Vertica and so on. It is now hard for a non-behemoth analytic RDBMS vendor to make headway at large enterprise accounts.
  • Meanwhile, Hadoop has emerged as a serious competitor for at least some analytic data management, especially but not only at internet companies.

Otherwise I’d say: 

  • At large enterprises, their internet operations perhaps excepted:
    • Short-request/general-purpose SQL alternatives to the behemoths — e.g. MySQL, PostgreSQL, NewSQL — have had tremendous difficulty getting established. The last big success was the rise of Microsoft SQL Server in the 1990s. That’s why I haven’t mentioned the term mid-range DBMS in years.
    • NoSQL/non-SQL has penetrated large enterprises mainly for a few specific use cases, for example the lists I posted for MongoDB or graph databases.
  • Internet-only companies have few inertia issues when it comes to database managers. They’ll consider anything they regard as being in their price ballpark (which is however often restricted to open source). I think part of the reason is that as quickly as they rewrite their applications, DBMS are vastly less “strategic” to them than they are to most larger enterprises.
  • The internet operations of large companies — especially large retailers — in many cases behave like internet-only companies, but in many other cases behave like the rest of the enterprise.

The major reasons for DBMS categories to get established in the first place are:

  • Performance and/or scalability (many examples).
  • Developer features (for example dynamic schema).
  • License/maintenance cost (for example several open source categories).
  • Ease of installation and administration (for example open source again, and also data warehouse appliances).

Those same characteristics are major bases for competition among members of a new category, although as noted above behemoth-loyalty can also come into play.

Cool-vs.-weird tradeoffs are somewhat secondary among SQL DBMS.

  • There’s not much of a “cool” factor, because new products aren’t that different in what they do vs. older ones.
  • There’s not a terrible “weird” factor either, but of course any smaller offering faces FUD, and also …
  • … appliances are anti-strategic for many buyers, especially ones who demand a smooth path to the cloud.

They’re huge, however, in the non-SQL world. Most non-SQL data managers have a major “weird” factor. Fortunately, NoSQL and Hadoop both have huge “cool” cred to offset it. XML/XQuery unfortunately did not.

Finally, in most DBMS categories there are massive issues with product completeness, more in the area of maturity than that of whole product. The biggest whole product issues are concentrated on the matter of interoperating with other software — business intelligence tools, packaged applications (if relevant to the category), etc. Most notably, the handful of DBMS that are certified to run SAP share a huge market that other DBMS can’t touch. But BI tools are less of a differentiator — I yawn when vendors tell me they are certified for/partnered with MicroStrategy, Tableau, Pentaho and Jaspersoft, and I’m surprised at any product that isn’t.

DBMS maturity has a lot of aspects, but the toughest challenges are concentrated in two main areas:

  • Reliability, especially but not only in short-request use cases.
  • Performance across a great variety of use cases. I observe frequently that performance in best-case scenarios, performance in the lab and performance in real-world environments are much further apart than vendors like to think.

In particular:

  • Maturity demands seem to be much higher for SQL DBMS than for NoSQL.
    • I think this is one of several reasons NoSQL has been much more successful than NewSQL.
    • It’s why I think MarkLogic’s “Enterprise NoSQL” positioning is a mistake.
  • As for MySQL:
    • MySQL wasn’t close to reliable enough for enterprises to trust it until InnoDB became the default storage engine.
    • MySQL 5 point releases have added major features, or decent performance for major features. I’ll confess to having lost track of what’s been fixed and what’s still missing.
    • In saying all that I’m holding MySQL to a much higher maturity standard than I’m holding NoSQL — because that’s what I think enterprise customers do.
  • PostgreSQL “should” be doing a lot better than it is. I have an extremely low opinion of its promoters, and not just for personal reasons. (That said, the personal reasons don’t just apply to EnterpriseDB anymore. I’ve also run out of patience waiting for Josh Berkus to retract untruths he posted about me years ago.)
  • SAP HANA checks boxes for performance (In-memory rah rah rah!!) and whole product (Runs SAP!!). That puts it well ahead of most other newish SQL DBMS, purely analytic ones perhaps excepted.
  • Any other new short-request SQL DBMS that sounds like it has traction is also memory-centric.
  • Analytic RDBMS are in most respects held to lower maturity standards than DBMS used for write-intensive workloads. Even so, products in the category are still frequently tripped up by considerations of concurrent performance and mixed workload management.

Related links

There have been 1,470 previous posts in the 9-year history of this blog, many of which could serve as background material for this one. A couple that seem particularly germane and didn’t already get linked above are:


Oracle encrypted table data found unencrypted in SGA

ContractOracle - Sun, 2014-07-13 21:29
When data needs to be kept private, or companies are worried about data leakage, then they often choose to store that data in encrypted columns in the table using Oracle Transparent Data Encryption. 

I wanted to see if that data was stored in the SGA in an unencrypted format.  I ran the following test from sqlplus.

CDB$ROOT@ORCL> create table credit_card_number(card_number char(16) encrypt);

Table created.

CDB$ROOT@ORCL> insert into credit_card_number values ('4321432143214321');

1 row created.

CDB$ROOT@ORCL> update credit_card_number set card_number = '5432543254325432' where card_number = '4321432143214321';

1 row updated.

CDB$ROOT@ORCL> VARIABLE cardnumber char(16);
CDB$ROOT@ORCL> EXEC :cardnumber := '6543654365436543';

PL/SQL procedure successfully completed.

CDB$ROOT@ORCL> update credit_card_number set card_number = :cardnumber where card_number = '5432543254325432';

1 row updated.

CDB$ROOT@ORCL> commit;

Now we search SGA for the data that should be encrypted to keep it private.  


[oracle@localhost shared_memory]$ ./sga_search 4321432143214321
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 459100
4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 3244704
4321432143214321
/dev/shm/ora_orcl_38895617_29 found string at 2529984
4321432143214321
[oracle@localhost shared_memory]$ ./sga_search 5432543254325432
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 459061
5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 4106466
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2075064
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2528552
5432543254325432
/dev/shm/ora_orcl_38895617_28 found string at 1549533
5432543254325432
[oracle@localhost shared_memory]$ ./sga_search 6543654365436543
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 6543654365436543
/dev/shm/ora_orcl_38895617_29 found string at 3801400
6543654365436543

The output shows that all three of the card_number values used in the demonstration can be found in the SGA, sometimes in multiple locations. Flushing the buffer cache did not clear the data from the SGA, but flushing the shared pool did. Further analysis is needed to confirm exactly where in the shared pool the unencrypted data is stored, whether in SQL statements, SQL variables, or interim values kept by the encryption process. Further testing is also needed to see whether potential data leakage can be avoided by using bind variables or wrapping the SQL in PL/SQL. In the meantime, be aware that data you believe to be encrypted may actually be stored in memory in clear text, visible to anyone with privileges to connect to the SGA.
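
One such test (a sketch only; the procedure name and card number are made up for illustration, and this by itself does not prove the leak is avoided) would be to insert a new value through a PL/SQL call and then search the SGA for it:

-- wrap the DML in PL/SQL so the literal never appears directly in the statement text
create or replace procedure insert_card(p_card in varchar2) as
begin
  insert into credit_card_number values (p_card);
end;
/
exec insert_card('7654765476547654');
commit;
-- then, from the OS: ./sga_search 7654765476547654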

Oracle TDE FAQ  :- http://www.oracle.com/technetwork/database/security/tde-faq-093689.html
States that "With TDE column encryption, encrypted data remains encrypted inside the SGA, but with TDE tablespace encryption, data is already decrypted in the SGA, which provides 100% transparency."
Categories: DBA Blogs

Partial Join Evaluation in Oracle 12c

Yann Neuhaus - Sun, 2014-07-13 21:21

Do you think that it's better to write semi-join SQL statements with IN(), EXISTS(), or to do a JOIN? Usually, the optimizer will evaluate the cost and do the transformation for you. And in this area, one more transformation has been introduced in 12c which is the Partial Join Evaluation (PJE).

First, let's have a look at the 11g behaviour. For this example, I use the SCOTT schema, but I hire a lot more employees in department 40:

 

SQL> alter table EMP modify empno number(10);
Table altered.
SQL> insert into EMP(empno,deptno) select rownum+10000,40 from EMP,(select * from dual connect by level <=1000);

 

Why department 40? I'll explain below, but I'll let you think about it first. In the default SCOTT schema, there is a department 40 in the DEPT table, but it has no employees in EMP. And the new transformation is not useful in that case.

 

11g behaviour

Now, I'm running the following query to check all the departments that have at least one employee:

I can write it with IN:

 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  6y71msam9w32r, child number 0
-------------------------------------
select distinct deptno,dname from dept 
 where deptno in ( select deptno from emp)

Plan hash value: 1754319153

------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows | Buffers |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      4 |      15 |
|*  1 |  HASH JOIN SEMI    |      |      1 |      4 |      4 |      15 |
|   2 |   TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   3 |   TABLE ACCESS FULL| EMP  |      1 |  15068 |    388 |       8 |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPTNO"="DEPTNO")

 

or with EXISTS:

 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  cbpa3zjtzfzrn, child number 0
-------------------------------------
select distinct deptno,dname from dept 
 where exists ( select 1 from emp where emp.deptno=dept.deptno)

Plan hash value: 1754319153

------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows | Buffers |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      4 |      15 |
|*  1 |  HASH JOIN SEMI    |      |      1 |      4 |      4 |      15 |
|   2 |   TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   3 |   TABLE ACCESS FULL| EMP  |      1 |  15068 |    388 |       8 |
------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPTNO"="DEPTNO")

 

Both are good. We didn't have to read the whole EMP table. I have 15000 rows in my table, I do a full scan on it, but look at the A-Rows: only 388 rows were actually read.

The HASH JOIN first read the DEPT table in order to build the hash table. So it already knows that we cannot have more than 4 distinct departments.

Then we do the join to EMP just to check which of those departments have an employee. But we can stop as soon as we find the 4 departments. This is the reason why we have read only 388 rows here. And this is exactly what a Semi Join is: we don't need all the matching rows, we return at most one row per matching pair.

Ok. What if we write the join ourselves?

 

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID  2xjj9jybqja87, child number 1
-------------------------------------
select distinct deptno,dname from dept join emp using(deptno)

Plan hash value: 2962452962

-------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows | Buffers |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      4 |     129 |
|   1 |  HASH UNIQUE        |      |      1 |  15068 |      4 |     129 |
|*  2 |   HASH JOIN         |      |      1 |  15068 |  14014 |     129 |
|   3 |    TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   4 |    TABLE ACCESS FULL| EMP  |      1 |  15068 |  14014 |     122 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")

 

Bad luck. We have to read all the rows. More rows and more buffers.

 

12c behaviour

Let's do the same in 12.1.0.1:

 

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID  2xjj9jybqja87, child number 0
-------------------------------------
select distinct deptno,dname from dept join emp using(deptno)

Plan hash value: 1629510749

-------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows | Buffers |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      4 |      14 |
|   1 |  HASH UNIQUE        |      |      1 |      4 |      4 |      14 |
|*  2 |   HASH JOIN SEMI    |      |      1 |      4 |      4 |      14 |
|   3 |    TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   4 |    TABLE ACCESS FULL| EMP  |      1 |  15068 |    388 |       7 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")

 

The plan shape is the same, but the join is now a HASH JOIN SEMI and far fewer rows have been read. If we check the outlines, we see the new feature:

 

      PARTIAL_JOIN(@"SEL$58A6D7F6" "EMP"@"SEL$1")

 

And here is what we see in the optimizer trace:

 

OPTIMIZER STATISTICS AND COMPUTATIONS
PJE: Checking validity of partial join eval on query block SEL$58A6D7F6 (#1)
PJE: Passed validity of partial join eval by query block SEL$58A6D7F6 (#1)
PJE: Partial join eval conversion for query block SEL$58A6D7F6 (#1).
PJE: Table marked for partial join eval: EMP[EMP]#1

 

The hints that control the feature are PARTIAL_JOIN and NO_PARTIAL_JOIN, and the feature is enabled by the parameter _optimizer_partial_join_eval, which appeared in 12c.

But of course, the optimization is useful only when we have all the values at the beginning of the table. This is why I added at least one employee in department 40. If there are some rows in DEPT that have no matching row in EMP, then Oracle cannot know the result before reaching the end of the table.
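
For comparison (a sketch, assuming the simple alias form of the hint is accepted here), the transformation can be switched off for one statement with the NO_PARTIAL_JOIN hint, or for the whole session with the underscore parameter, and the plan then goes back to the 11g shape:

select /*+ no_partial_join(emp) */ distinct deptno,dname from dept join emp using(deptno);

-- or for the whole session
alter session set "_optimizer_partial_join_eval" = false;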

Oracle encryption wallet password found in SGA

ContractOracle - Sun, 2014-07-13 20:51
If companies are worried about data privacy or leakage, they are often advised to encrypt sensitive data inside Oracle databases to stop DBAs from accessing it, and to implement "separation of duties" so that only the application or data owner has the encryption keys or wallet password.  One method to encrypt data is to use Oracle Transparent Data Encryption, which stores keys in the Oracle wallet protected by a wallet password.  Best practice dictates using a very long wallet password to defeat rainbow tables and brute-force attacks, and keeping the key and password secret.

I wrote a simple program to search for data in Oracle shared memory segments, and it was able to find the Oracle wallet password, which means anyone who can connect to the shared memory can get the wallet password and access the encrypted data.  The following demonstrates this :-

First open and close the wallet using the password :-


CDB$ROOT@ORCL> alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";

System altered.

CDB$ROOT@ORCL> alter system set wallet close identified by "verylongverysecretwalletpassword1";

System altered.


Now search for the wallet password in SGA :-
[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1
The search found the password in the SGA, so it should be possible to analyse the memory structure that stores the known password and create another program to directly extract passwords on unknown systems.  It may also be possible to find the password by selecting from v$ or x$ tables.  I have not done that analysis, so I don't know how difficult it would be, but if the password is stored, it will be possible to extract it; even if it is mixed up with a lot of other SQL text and variables, it would be very simple to just try opening the wallet using every string stored in the SGA.
The password is still in SGA after flushing the buffer cache.
CDB$ROOT@ORCL> alter system flush buffer_cache;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1

After flushing the shared pool the password is no longer available.  
CDB$ROOT@ORCL> alter system flush shared_pool;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
[oracle@localhost shared_memory]$
As this password really should be secret, Oracle really should not store it. More research is needed to confirm whether the password can be hidden by using bind variables, obfuscation, or wrapping it in PL/SQL; a first test along those lines is sketched below.
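
This sketch simply moves the password literal into an anonymous PL/SQL block and re-checks; it is not a proven mitigation, just a starting point for that testing:

declare
  v_pwd varchar2(100) := 'verylongverysecretwalletpassword1';  -- the wallet password used above
begin
  execute immediate 'alter system set encryption wallet open identified by "' || v_pwd || '"';
end;
/
-- then re-run: ./sga_search verylongverysecretwalletpassword1
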
Categories: DBA Blogs

Master Data Services installation for SQL Server 2012

Yann Neuhaus - Sun, 2014-07-13 20:32

This posting is a tutorial for installing Master Data Services on your Windows Server 2012. Microsoft SQL Server Master Data Services (MDS) is a master data management product from Microsoft, code-named Bulldog. It is the rebranding of the Stratature MDM product, titled +EDM, which Microsoft acquired in June 2007. It was first integrated in Microsoft SQL Server 2008 R2 as an additional installer, but since SQL Server 2012, Master Data Services has been integrated as a feature within the SQL Server installer.

 

Introduction

Master Data Services is part of the Enterprise Information Management (EIM) technologies provided by Microsoft for managing information in an enterprise.

EIM technologies include:

  • Integration Services
  • Master Data Services
  • Data Quality Services

 

Components

Master Data Services includes the following main components:

  • MDS Configuration Manager tool: used to configure Master Data Services
  • MDS Data Manager Web Application: used essentially to perform administrative tasks
  • MDS Web Service: used to extend or develop custom solutions
  • MDS Add-in for Excel: used to manage data, create new entities or attributes …

 

SQL Server Editions & Versions

Master Data Services can be installed only with the following SQL Server versions and editions:

  • SQL Server 2008 R2: Datacenter or Enterprise editions
  • SQL Server 2012 or SQL Server 2014: Enterprise or Business Intelligence editions

 

Master Data Services prerequisites in SQL Server 2012

First, Master Data Services relies on a web application named Master Data Manager, used, for example, to perform administrative tasks. This web application is hosted by Internet Information Services (IIS), so IIS is a necessary prerequisite.

Furthermore, to be able to display the content from the web application, you need Internet Explorer 7 or later (Internet Explorer 6 is not supported) with Silverlight 5.

Moreover, if you planned to use Excel with Master Data Services, you also need to install Visual Studio 2010 Tools for Office Runtime, plus the Master Data Services Add-in for Microsoft Excel.

Finally, and often forgotten: PowerShell 2.0 is required for Master Data Services.

Let’s summarize the requirements for Master Data Services:

  • Internet Information Services (IIS)
  • Internet Explorer 7 or later
  • Silverlight 5
  •  PowerShell 2.0
  • Visual Studio 2010 Tools for Office Runtime and Excel Add-in for Microsoft Excel (only if you plan to use Excel with Master Data Services).

 

Configuration at the Windows Server level

In Server Manager, you have to activate the Web Server (IIS) server role to be able to host the Master Data Manager web application, as well as the .NET Framework 3.5 feature.

For the Server Roles, you have to select:

  • Web Server (IIS)

For the Server Features, you have to select:

- .NET Framework 3.5 Features
  - .NET Framework 3.5
  - HTTP Activation
- .NET Framework 4.5 features
  - .NET Framework 4.5
  - ASP.NET 4.5
  - WCF Services
    - HTTP Activation
    - TCP Port Sharing

 

[Image: MDS_features_selection_2.png]

 

For the IIS features selection, you have to select:

- Web Server
  - Common HTTP Features
    - Default Document
    - Directory Browsing
    - HTTP Errors
    - Static Content
  - Health and Diagnostics
    - HTTP Logging
    - Request Monitor
  - Performance
    - Static Content Compression
  - Security
    - Request Filtering
    - Windows Authentication
  - Application Development
    - .NET Extensibility
    - .NET Extensibility 4.5
    - ASP.NET 3.5
    - ASP.NET 4.5
    - ISAPI Extensions
    - ISAPI Filters
  - Management Tools
    - IIS Management Console

 

[Image: MDS_features_selection.png]

[Image: MDS_features_selection_3.png]

 

Installation of SQL Server 2012

Master Data Services stores its data in a SQL Server database, so you need the SQL Server Database Engine installed.

Of course, the Database Engine can be installed on a different Windows Server; in that case, the server hosting Master Data Services acts as a front-end server.

Then, in order to manage your Master Data Services databases and roles, you also need to install the Management Tools.

At the features installation step, you have to select:

  • Database Engine Services
  • Management Tools
  • Master Data Services

 

Conclusion

At this point, Master Data Services should be installed with all the needed prerequisites.


[Image: MDS_confirm_prerequisites.png]

 

However, Master Data Services cannot be used without configuring it. Three main steps need to be performed through the MDS Configuration Manager:

  • First, you have to create a MDS database
  • Then, you have to create a MDS web application hosted in IIS
  • Finally, you have to link the MDS database with the MDS web application

Deferrable RI – 2

Jonathan Lewis - Sun, 2014-07-13 12:46

A question came up on Oracle-L recently about possible locking anomalies with deferrable referential integrity constraints.

An update by primary key is taking a long time; the update sets several columns, one of which is the child end of a referential integrity constraint. A check on v$active_session_history shows lots of waits for “enq: TX – row lock contention” in mode 4 (share), and many of these waits also identify the current object as the index that has been created to avoid the “foreign key locking” problem on this constraint (though many of the waits show the current_obj# as -1). A possible key feature of the issue is that the foreign key constraint is defined as “deferrable initially deferred”. The question is: could such a constraint result in TX/4 waits?

My initial thought was that if the constraint was deferrable this was unlikely; there would have to be other features coming into play.

Of course, when the foreign key is NOT deferrable it’s easy to set up cases where a TX/4 appears: for example, you insert a new parent value without issuing a commit, then I insert a new matching child; at that point my session will wait for your session to commit or roll back. If you commit my insert succeeds; if you roll back my session raises an error (ORA-02291: integrity constraint (schema_name.constraint_name) violated – parent key not found). But if the foreign key is deferred, the non-existence (or potential existence) of the parent shouldn’t matter until the commit. If the constraint is deferrable, though, the first guess would be that you could get away with things like this so long as you fixed up the data in time for the commit.

I was wrong. Here’s a little example:


create table parent (
	id	number(4),
	name	varchar2(10),
	constraint par_pk primary key (id)
)
;

create table child(
	id_p	number(4)
		constraint chi_fk_par
		references parent
		deferrable initially deferred,
	id	number(4),
	name	varchar2(10),
	constraint chi_pk primary key (id_p, id)
)
;

insert into parent values (1,'Smith');
insert into parent values (2,'Jones');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

insert into child values(2,1,'Jack');
insert into child values(2,2,'Jill');

commit;

begin
	dbms_stats.gather_table_stats(user,'parent');
	dbms_stats.gather_table_stats(user,'child');
end;
/

pause Press return

update child set id_p = 3 where id_p = 2 and id = 2;

If you don’t do anything after the pause and before the update then the update will succeed – but it will fail on a subsequent commit unless you insert parent 3 before committing. But if you take advantage of the pause to use another session to insert parent 3 first, the update will then hang waiting for the parent insert to commit or roll back – and what happens next may surprise you. Basically the deferrability doesn’t protect you from the side effects of conflicting transactions.
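
For clarity, this is all the second session needs to do during the pause to produce the wait (a minimal sketch; the name 'Brown' is arbitrary):

-- session 2, while session 1 is sitting at the pause
insert into parent values (3,'Brown');
-- no commit yet: session 1's update now waits on "enq: TX - row lock contention" in mode 4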

The variations on what can happen next (insert the parent elsewhere, commit or rollback) are interesting and left as an exercise.

I was slightly surprised to find that I had had a conversation about this sort of thing some time ago, triggered by a comment to an earlier post. If you want to read a more thorough investigation of the things that can happen and how deferrable RI works then there’s a good article at this URL.

 


RAC Commands : 1 -- Viewing Configuration

Hemant K Chitale - Sun, 2014-07-13 05:58
Viewing the configuration of a RAC database

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/rdbms/11.2.0
Oracle user: oracle
Spfile: +DATA1/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACSP
Database instances:
Disk Groups: DATA1,FRA,DATA2
Mount point paths:
Services: MY_RAC_SVC
Type: RAC
Database is policy managed
-sh-3.2$

So, we see that :
a) The database name is RACDB
b) It is a Policy Managed database (not Administrator Managed)
c) It is dependent on 3 ASM Disk Groups DATA1, DATA2, FRA
d) There is one service called MY_RAC_SVC configured
e) The database is in the  RACSP server pool
f) The database is configured to be Auto-started when Grid Infrastructure starts


Viewing the configuration of a RAC service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$

So, we see that :
a) The service name is MY_RAC_SVC
b) The UNIFORM cardinality means that it is to run on all active nodes in the server pool
c) The server-side connection load balancing goal is LONG (for long-running sessions); these attributes can also be cross-checked from inside the database, as sketched below
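
A sketch of cross-checking those service attributes from SQL (assuming the service has been started at least once so that it is registered in the database):

-- run from any instance of RACDB
select name, goal, clb_goal
from dba_services
where name = 'MY_RAC_SVC';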


Viewing the configuration of Server Pools

-sh-3.2$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: RACSP
Importance: 0, Min: 0, Max: 2
Candidate server names:
-sh-3.2$

So we see that :
a) The RACSP server pool is the only created (named) server pool
b) This server pool has a max of 2 nodes

Categories: DBA Blogs

A response to Bloomberg article on UCLA student fees

Michael Feldstein - Sat, 2014-07-12 13:56

Megan McArdle has an article that was published in Bloomberg this week about the growth of student fees. The setup of the article was based on a new “$4 student fee to pay for better concerts”.

To solve this problem, UCLA is introducing a $4 student fee to pay for better concerts. That illuminates a budgeting issue in higher education — and indeed among human beings more generally.

That $4 is not a large fee. Even the poorest student can probably afford it. On the other hand, collectively, UCLA’s student fees are significant: more than $3,500, or about a quarter of the mandatory cost of attending UCLA for a year.

Those fees are made up of many items, each trivial individually. Only collectively do they become a major source of costs for students and their families and potentially a barrier to college access for students who don’t have an extra $3,500 lying around.

I’m sympathetic to the argument that college often costs too much and that institutions can play revenue games to avoid the appearance of raising tuition. I also think that Megan is one of the better national journalists on the topic of the higher education finances.

UCLA Fees

However, this article is somewhat sloppy in a way that harms the overall message. I would like to clarify the student fees data to help show the broader point.

Let’s look at the actual data from UCLA’s web site. I assume that Megan is basing this analysis on in-state undergraduate full-time students. The data is listed per quarter, and UCLA has three quarters in a full academic year. I have summarized it below, summing the three quarters into yearly data.


UCLA Fees for In-State Undergrads (Total $3,749.97)

Some Clarifications Needed
  • The total of non-tuition fees is $3,750 per year, not $3,500; however, Megan is right that this represents “about a quarter of the mandatory cost of attending UCLA for a year” ($3,750 out of $14,970).
  • The largest single fee is the UC health insurance fee (UC-SHIP), which is more than half of the total non-tuition fees. This fact (noted by Michael Berman on Twitter) should have been pointed out, given the significant percentage of the total.
  • With the UC-SHIP at $1,938 and the student services fee at $972, I hardly consider these as “trivial individually”.
Broader Point on Budgeting

The article’s broader point is that using extraneous fees to create additional revenue leads to a flawed budgeting process.

As I’ve written before, this is a common phenomenon that you see among people who have gotten themselves into financial trouble — or, for that matter, people who are doing OK but complain that they don’t know where the money goes and can’t save for the big-ticket items they want. They consider each purchase individually, rather than in the context of a global budget, which means that they don’t make trade-offs. Instead of asking themselves “Is this what I want to spend my limited funds on, or would I rather have something else?” they ask “Can I afford this purchase on my income?” And the answer is often “Yes, I can.” The problem is that you can’t afford that purchase and the other 15 things that you can also, one by one, afford to buy on your income. This is how individual financial disasters occur, and it is also one way that college tuition is becoming a financial disaster for many families.

This point is very important. Look at the Wooden Center fee, described here (or by hovering over chart):

Covers repayment of the construction bond plus the ongoing maintenance and utilities costs for the John Wooden Recreation Center. It was approved by student referendum. The fee is increased periodically based on the Consumer Price Index.

To take Megan’s point, this fee “was approved by student referendum”, which means that UCLA has moved budgeting responsibility away from a holistic approach to saying “the students voted on it”. This makes no financial sense, nor does it make sense to shift bond repayment and maintenance and utilities cost onto student fees.

While this article had some sloppy reporting in terms of accurately describing the student fees, it does highlight an important aspect of the budget problems in higher education and how the default method is to shift the costs to students.

The post A response to Bloomberg article on UCLA student fees appeared first on e-Literate.

Downloading VirtualBox VM “Oracle Enterprise Manager 12cR4”

Marco Gralike - Sat, 2014-07-12 11:10
Strangely enough, those cool VirtualBox VM downloads are nowadays a bit scattered across different Oracle places on http://otn.oracle.com and elsewhere. So in all that...

Read More

ADF 12c (12.1.3) Line Chart Overview Feature

Andrejus Baranovski - Sat, 2014-07-12 10:51
ADF 12c (12.1.3) ships with completely rewritten DVT components; there are no graphs anymore - they are called charts now. But there is much more to it than a name change. The previous DVT components still run fine, but the JDeveloper wizards no longer support them. You should check the ADF 12c (12.1.3) developer guide for more details; in this post I will focus on the line chart overview feature. Keep in mind that the new DVT chart components do not work well with the Google Chrome v.35 browser (this is supposed to be fixed in Google Chrome v.36) - check the JDeveloper 12c (12.1.3) release notes.

The sample application - ADF12DvtApp.zip - is based on Employees data and displays a line chart of the employee's salary by job. Two additional lines display the maximum and minimum salaries for the job:


The line chart is configured with zooming and overview support. The user can change the overview window and zoom into a specific area:


This helps a lot when analysing charts with a large number of data points on the X axis. The user can zoom into peaks and analyse a data range:


One important hint about the new DVT charts - the components should stretch automatically. Keep in mind that the parent component (surrounding the DVT chart) should be stretchable. As you can see, I have set type='stretch' for the panel box surrounding the line chart:


The previous DVT graphs had special binding elements in the Page Definition; the new DVT charts use regular table bindings - nothing extra:


The line chart in the sample application is configured with zooming and scrolling (different modes are available - live, or on demand with a delay):


The overview feature is quite simple to enable - it is enough to add the DVT overview tag to the line chart, and it works:
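A minimal sketch of what that page markup might look like, assuming standard ADF Faces and DVT tags; the ids, EL expressions, and exact attribute names and values below are illustrative assumptions rather than the exact code from ADF12DvtApp.zip:

<af:panelBox text="Salary Chart" type="stretch" id="pb1">
  <!-- parent panel box set to stretch so the chart can resize automatically -->
  <dvt:lineChart id="lineChart1" var="row"
                 value="#{bindings.EmployeesView1.collectionModel}"
                 zoomAndScroll="live">
    <!-- line data items for the salary / min / max series omitted here -->
    <!-- the overview tag enables the overview window under the chart -->
    <dvt:overview rendered="true" id="ov1"/>
  </dvt:lineChart>
</af:panelBox>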

R12.2 :Modulus Check Validations for Bank Accounts

OracleApps Epicenter - Sat, 2014-07-12 08:45
The existing bank account number validations for domestic banks only check the length of the bank account number. These validations are performed during the entry and update of bank accounts. With R12.2, the account number validations for the United Kingdom are enhanced to include a modulus check alongside the length checks. Modulus checking is the process of [...]
Categories: APPS Blogs

ORA-09925: Unable to create audit trail file

Oracle in Action - Sat, 2014-07-12 03:33

I received this error message when I started my virtual machine and tried to log on to my database as sysdba to start up the instance.
[oracle@node1 ~]$ sqlplus / as sysdba

ERROR:
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925

- I rebooted my machine and got the following messages, which pointed to some errors encountered during the filesystem check and instructed me to run fsck manually.

[root@node1 ~]# init 6

Checking filesystems

/: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
*** An error occurred during the filesystem check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

- I entered the root password to initiate the filesystem check. As a result, I was prompted multiple times to allow the various filesystem errors to be fixed.

(Repair filesystem) 1 # fsck
Fix(y)?

- After all the errors had been fixed, the filesystem check was restarted.

Restarting e2fsck from the beginning...

/: ***** FILE SYSTEM WAS MODIFIED *****
/: ***** REBOOT LINUX *****

- After the filesystem had finally been verified as correct, I exited the shell so the reboot could continue.

(Repair filesystem) 2 # exit

- After the reboot, I could successfully connect to my database as sysdba.

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 12 09:21:52 2014

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

I hope this post was useful.

Your comments and suggestions are always welcome.


The post ORA-09925: Unable to create audit trail file appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Swimming Progress

Tim Hall - Sat, 2014-07-12 02:59

While I was at BGOUG I went for a swim each morning before the conference. That got me to thinking, perhaps I should start swimming again…

It’s been 4 weeks since I got back from the conference and I’ve been swimming every morning. It was a bit of a struggle at first. I think it took me 2-3 days to work up to a mile (1600M - about 9M short of a real mile). Since then I’ve been doing a mile each day and it’s going pretty well.

I’m pretty much an upper body swimmer at the moment. I kick my legs just enough to keep them from sinking, but don’t really generate any forward thrust with them. At this point I’m concentrating on my upper body form. When I think about it, my form is pretty good. When I get distracted, like when I have to pass people, it breaks down a little. I guess you could say I am in a state of “conscious competence”. Over the next few weeks this should set in a bit and I can start working on some other stuff. It’s pointless to care too much about speed at this point because if my form breaks down I end up with a faster arm turnover, but use more effort and actually swim slower. The mantra is form, form, form!

Breathing is surprisingly good. I spent years as a left side breather (every 4th stroke). During my last bout of swimming (2003-2008) I forced myself to switch to bilateral breathing, but still felt the left side was more natural. Having had a 6 year break, I’ve come back and both sides feel about the same. If anything, I would say my right side technique is slightly better than my left. Occasionally I will throw in a length of left-only or right-only (every 4th stroke) breathing for the hell of it, but at the moment every 3rd stroke is about the best option for me. As I get fitter I will start playing with things like every 5th stroke and lengths of no breathing just to add a bit of variety.

Turns are generally going pretty well. Most of the time I’m fine. About 1 turn in 20 I judge the distance wrong and end up with a really flimsy push off. I’m sure my judgement will improve over time.

At this point I’m taking about 33 minutes to complete a mile. The world record for 1500M short course (25M pool) is 14:10. My first goal is to get my 1600M time down to double the 1500M world record. Taking 5 minutes off my time seems like quite a big challenge, but I’m sure as I bring my legs into play and my technique improves my speed will increase significantly.

As I get more into the swing of things I will probably incorporate a bit of interval training, like a sprint length, followed by 2-3 at a more sedate pace. That should improve my fitness quite a lot and hopefully improve my speed.

For a bit of fun I’ve added a couple of lengths of butterfly after I finish my main swim. I used to be quite good at butterfly, but at the moment I’m guessing the life guards think I’m having a fit. It would be nice to be able to bang out a few lengths of that and not feel like I was dying. :)

I don’t do breaststroke any more, as it’s not good for my hips. Doing backstroke in a pool with other people in the lane sucks, so I can’t be bothered with that. Maybe on days when the pool is quieter I will work on it a bit, but for now the main focus is crawl.

Cheers

Tim…

PS. I reserve the right to get bored, give up and eat cake instead at any time… :)

Swimming Progress was first posted on July 12, 2014 at 9:59 am.

Finished first pass through Alapati 12c OCP upgrade book

Bobby Durrett's DBA Blog - Fri, 2014-07-11 17:29

I just finished reading Sam Alapati’s 12c OCP upgrade book for the first time, and I really like it because it covers content that I hadn’t discovered through my study of the Oracle manuals. It also did a good job of explaining some things that Oracle’s manuals left unclear.

After reading each chapter I took the end-of-chapter test and got between 60% and 75% of the questions right. Next I plan to take the computer-based test on the CD that came with the book, which covers both parts of the upgrade exam.

I did find minor errors throughout the book, but I still found it very useful, especially after having already studied the same topics on my own without a study guide like this one to direct me. The author’s insights into the test and the material it covers add value because they guide me to the areas that I need to focus on.

- Bobby

Categories: DBA Blogs