Feed aggregator

ConfigTools Best Practices Whitepaper

Anthony Shorten - Tue, 2014-09-23 16:15

The ConfigTools facility allows customers to implement customizations to Oracle Utilities products. A Best Practices whitepaper has been released to provide implementers with additional advice and techniques for using this facility effectively.

The whitepaper covers such topics as:

  • An overview of each ConfigTools object, such as Business Objects, Business Services, Query Zones, UI Maps, and Data Areas, including additional advice and techniques for efficient use
  • Using Fields, Managed Content, Standard Components, Lookups, and Extended Lookups to build flexible solutions
  • Developing multi-lingual solutions
  • And more

The whitepaper is available on My Oracle Support as ConfigTools Best Practices (Doc ID: 1929040.1).

Partner Webcast – Beyond the Dashboard with Oracle BI Publisher

Reporting tools are widely used to support decision making and measure performance. Business Intelligence tools take the dashboard to the next level. It's more than simply graphically...

Categories: DBA Blogs

Startup upgrade suppresses ORA-00955 on create table WRH$_SQL_PLAN

Bobby Durrett's DBA Blog - Tue, 2014-09-23 15:13

Today I was trying to see whether upgrading from 11.2.0.2 to 11.2.0.4 would change the SYS.WRH$_SQL_PLAN table. This table is large on our production system, so I wanted to find out whether some time-consuming update to it would occur that would slow down our production upgrade but not be detected on our test systems. We recently performed this upgrade on our development database, and I was looking at the logs to see whether SYS.WRH$_SQL_PLAN was modified. I found this curious entry (edited for brevity):

create table WRH$_SQL_PLAN
2  (snap_id           number        /* last snap id, used for purging */
3  ,dbid           number       not null
4  ,sql_id           varchar2(13)    not null
...
42   using index tablespace SYSAUX
43  ) tablespace SYSAUX
44  /

Table created.

The “Table created.” message makes it sound as though the database created a new table without any errors. But looking at DBA_OBJECTS, the table was not new. So I guessed that catproc.sql, which includes the create table statement for SYS.WRH$_SQL_PLAN, must contain something that suppresses the error you would normally get when you try to create a table that already exists:

ORA-00955: name is already used by an existing object

So, I opened my 11.2.0.3 test database using STARTUP RESTRICT and ran @catproc.sql as SYSDBA, and to my surprise I got the error just as you normally would:

 42   using index tablespace SYSAUX
 43  ) tablespace SYSAUX
 44  /
create table WRH$_SQL_PLAN
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object

So, I decided to restart the database with STARTUP UPGRADE and rerun catproc.sql, and this time the error disappeared:

 40  ,constraint WRH$_SQL_PLAN_PK primary key
 41      (dbid, sql_id, plan_hash_value, id)
 42   using index tablespace SYSAUX
 43  ) tablespace SYSAUX
 44  /

Table created.

Cue the mysterious Twilight Zone music…

I guess this is a “feature” of the STARTUP UPGRADE command, but the “Table created.” message is kind of confusing: the table isn’t really created if it already exists. The good thing, I suppose, is that it doesn’t report an error.
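A quick way to convince yourself that the existing table was left untouched is to compare its timestamps in DBA_OBJECTS before and after running catproc.sql; a minimal sketch, assuming SELECT access to DBA_OBJECTS:

-- if CREATED still predates the catproc.sql run, the "Table created."
-- message did not actually re-create the table
select object_name, created, last_ddl_time
from   dba_objects
where  owner = 'SYS'
and    object_name = 'WRH$_SQL_PLAN'
and    object_type = 'TABLE';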

– Bobby

Categories: DBA Blogs

Adapting to the new PeopleSoft Delivery Model

PeopleSoft Technology Blog - Tue, 2014-09-23 13:06

By now, most of you have heard that PeopleSoft changed the way maintenance and enhancements are delivered with the 9.2 applications. If you would like to find out more about the process and the life-cycle tools, lots of information can be found on the PeopleSoft Update Manager Home Page on My Oracle Support (Doc ID 1464619.1). You can find details on how to set up the environment, what has changed, hardware and software prerequisites, and much, much more. What is missing is how customers can get the most out of the process.

PeopleSoft has published a new white paper called Adapting to PeopleSoft Continuous Delivery. The paper describes the benefits of the new life-cycle process and what organizations can do to benefit the most from the change. You can find the document on the PeopleSoft Information Portal, www.peoplesoftinfo.com, or just click on the link above. Considering how big a change this is, it's worth a read.

Look for this to be a hot topic at this year's Oracle Open World 2014 and beyond.

ADF Region Data Synchronisation with Change Event Policy

Andrejus Baranovski - Tue, 2014-09-23 11:30
This post applies to multiple ADF regions based on the same Data Control. I will show how you can avoid using ADF Contextual Events to synchronise two ADF regions when both are based on the same Data Control and that Data Control is shared between the two.

The sample application contains two ADF Task Flows, both using the same VO instance from the shared Data Control:


Two ADF regions are implemented based on these task flows: one implements a table component and the other a form:


Data from both regions is synchronised automatically. Based on row selection in the table, form data in the other region stays in sync:


Backwards sync works as well - change the Salary attribute value in the form:


The Salary attribute in the corresponding row of the table will be updated:


In order for this to work, make sure to set ChangeEventPolicy=ppr for the Employees iterator in the first fragment's page definition (this ensures the table is refreshed when data is changed in the form):


Set the same property for the iterator in the second fragment's page definition (this ensures the form data stays in sync when row selection changes in the table):


Download sample application - RegionCommunicationApp.zip.

#Oracle Certification: Always go for the most recent one!

The Oracle Instructor - Tue, 2014-09-23 11:14

Quite often I encounter attendees in my Oracle University courses who strive to become OCP, or sometimes even OCM, and who ask me whether they should go for an older version's certificate before taking on the most recent one. The reasoning behind those questions is mostly that it may be easier with the older version. My advice is always: go for the most recent version! No Oracle Certification exam is easy, but the older version's certificate is already outdated, and the most recent one will itself become outdated sooner than you may think :-)

OCP 12c upgrade

For that reason, I really appreciate the option to upgrade from 9i/10g/11g OCA directly to 12c OCP, as discussed in this posting. In my opinion, there is just no point in becoming a new 11g OCP now that 12c is there. What do you think?


Tagged: Oracle Certification
Categories: DBA Blogs

Oracle WebCenter & Oracle BPM @ OpenWorld 2014: Don’t-Miss Sessions, Demos, Hands-on Labs, and More

WebCenter Team - Tue, 2014-09-23 07:58

Blog by: Kellsey Ruppel, Principal Product Marketing Director, Oracle WebCenter

With more than 35 scheduled sessions, plus user group sessions, 10 live product demos, and 7 hands-on labs devoted to Oracle WebCenter and Oracle Business Process Management (Oracle BPM) solutions, Oracle OpenWorld 2014 provides broad and deep insight into next-generation solutions that increase business agility, improve performance, and drive personal, contextual, and multichannel interactions.

Key Themes

"This year, we shine the spotlight on the many ways our customers are harnessing the power of cloud and mobile technology to power the digital business," says Oracle WebCenter Director of Product Management Jon Huang. "We also focus on how Oracle WebCenter and Oracle BPM add value for our applications customers, including both on-premises and software-as-a-service solutions."

Don't-Miss Strategy and Vision Sessions

To explore the latest advances and planned innovations in Oracle WebCenter and Oracle BPM solutions, Huang particularly recommends the following strategy and vision sessions.

Product Demos and Hands-on Labs

From more than a dozen combined product demos and hands-on labs, event organizers expect special interest in two demos and two hands-on labs devoted to the cloud.

Networking and Awards

The customer advisory boards for Oracle WebCenter and Oracle Business Process Management will meet on Sunday, September 28, followed by a cocktail event sponsored by Oracle partners TekStream and AVIO Consulting. 

Attendees should not miss the Oracle WebCenter and Oracle BPM Customer Appreciation Reception, sponsored by Oracle partners Aurionpro, AVIO Consulting, Bezzotech, Fishbowl Solutions, Keste, Redstone Content Solutions, TekStream, and VASSIT. RSVP required.

And on Tuesday, September 30, the 2014 Oracle Excellence Award Ceremony for Oracle Fusion Middleware Innovation (CON7029) honors organizations from around the globe that are using Oracle products to achieve significant business value.

Learn more and register now for Oracle OpenWorld 2014.

Consult the Oracle OpenWorld Focus On Documents for Oracle WebCenter and Oracle Business Process Management for a complete list of related activities.

This content is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Focus on User Groups at Oracle OpenWorld

David Haimes - Tue, 2014-09-23 07:22

Anyone who reads my blog regularly might be tired of me praising user groups, but I believe it is worth repeating myself. The type of information sharing you get from user groups is unique and very valuable, and it is important for Oracle to support it and be aware of it, but not interfere. I have been involved with the Oracle Apps User Group via the GL SIG for several years now, but I will never push my agenda: I listen, provide information or presentations that are requested, and I learn. Many of my colleagues will tell you the same. So Sunday is User Group day at OpenWorld, and I look forward to seeing a lot of familiar faces and learning something new. I have a bit of a schedule crunch this year: 12 months ago I committed to run the 5k “Dolphin Dash” school fundraiser with my 8-year-old son, and I didn’t think for a minute it would fall on the same date as OpenWorld. So I have to run a 5k at 9am and then dash into San Francisco to present at the GL SIG at 11am; if I arrive in my running gear, still sweating, please accept my apologies. I will stay around for more of the sessions, and as always I will be active on Twitter, so you can find me that way too.

At the GL SIG I’ll be talking about and briefly showing the new Accounting Hub Reporting Cloud Service, which I am very excited about. You will also hear from Lakshmi Sampath from Dell about SLA on their upgrade to R12. The full agenda is below; I hope to see a lot of you there.

—————————————————————————————–

Agenda GL-SIG @ OpenWorld 2014 on Sunday, September 28th, 11.00am
Location – Moscone W – 3005

Agenda

  1. Introduction to the SIG
  2. Proactive support presentation
  3. Sponsor message (Excel4Apps)
  4. Lakshmi (SLA on Upgrade @ Dell)
  5. David Haimes new Accounting Hub Cloud Reporting Service
  6. Open questions

We will be joined by our Oracle colleagues to tell us about Proactive Support and their leading practices for delivering content and help to the Oracle user community.
Excel4Apps will be helping us by sponsoring the meeting at this conference.

In the past, many Oracle Apps customers have looked elsewhere for their reporting needs – until now: see a preview of Accounting Hub Reporting Cloud Service. See how finance users can use their favorite tools for reporting – Smartview, cubes, and related functionality – to get their financials. Fusion Accounting Hub Reporting Cloud Service is a new subscription service that provides out-of-the-box integration with EBS R12 General Ledger for reporting. This session will provide an introduction to the new service: how it connects to and works with EBS data, the reporting capabilities available, and what it does and does not support. Come listen to David Haimes, Senior Director, Financials Product Development, talk about the new service.

Hear what Lakshmi Sampath from Dell has to say about their upgrade to R12. The presentation covers the R12 upgrade case study at Dell, providing details on what happens during the upgrade to R12 in the various subledgers (PO, AP, AR, PA, FA) with respect to SLA. It will also cover strategies for converting data to the SLA model during the upgrade, and critical lessons learned during the upgrade at Dell.

—————————————————————————————–

Categories: APPS Blogs

New LMS Market Data: Edutechnica provides one-year update

Michael Feldstein - Tue, 2014-09-23 04:58

In Fall 2013 we saw a rich source of LMS market data emerge.

George Kroner, a former engineer at Blackboard who now works for University of Maryland University College (UMUC), has developed what may be the most thorough measurement of LMS adoption in higher education at Edutechnica (OK, he’s better at coding and analysis than site naming). This side project (not affiliated with UMUC) started two months ago based on George’s ambition to unite various learning communities with better data. He said that he was inspired by the Campus Computing Project (CCP) and that Edutechnica should be seen as complementary to the CCP.

The project is based on a web crawler that checks against national databases as a starting point to identify the higher education institution, then goes out to the official school web site to find the official LMS (or multiple LMSs officially used). The initial data is all based on the Anglosphere (US, UK, Canada, Australia), but there is no reason this data could not expand.

There is new data available in Edutechnica’s one-year update, with year-over-year comparisons available as well as improvements to the methodology. Note that the methodology has improved both in terms of setting the denominator and in terms of how many schools are included in the data collection.

The Fall 2014 data, which now includes all schools with more than 800 enrollments:

There’s more data available on the site, including measures of the Anglosphere (combining US, UK, Canada and Australia data) as well as comparison tables for 2013 to 2014. Go read the whole post.

In the meantime, here are some initial notes on this data. Given the change in methodology, I will focus on major changes.

  • Blackboard’s BbLearn and ANGEL continue to lose market share in the US[1] - Using the 2013 to 2014 tables (> 2000 enrollments), BbLearn has dropped from 848 to 817 institutions and ANGEL has dropped from 162 to 123. Using the revised methodology, Blackboard market share for > 800 enrollments now stands at 33.5% of institutions and 43.5% of total enrollments.
  • Moodle, D2L, and Sakai show essentially no change in the US - Using the 2013 to 2014 tables (> 2000 enrollments), D2L has added only 2 schools, Moodle none, and Sakai 2.
  • Canvas is the fastest-growing LMS and has overtaken D2L - Using the 2013 to 2014 tables (> 2000 enrollments), Canvas grew ~40% in one year (from 166 to 232 institutions). For the first time, Canvas appears to have larger US market share than D2L (13.7% vs. 12.2% of total enrollments, using the table above).
  • BbLearn is popular in the UK, while Moodle is the largest provider in Canada and Australia - The non-US numbers are worth reviewing, even without the same amount of detail as we have for the US numbers.

While this data is very useful, I will again point out that no one to my knowledge has independently verified its accuracy. I have done sanity checks against Campus Computing and ITC data, but I do not have access to Edutechnica's specific mechanism for counting systems. For these data sets to gain longer-term acceptance, we will need some method of verification.

In the meantime, enjoy the new market data.

Update: Allan Christie has a post up questioning the source data for Australia. I hope this information is used to improve the Edutechnica data set or at least leads to clarifications.

Put simply, it is generally accepted that there are 39 universities (38 public, 1 private) in Australia. Given the small number of universities and my knowledge of the sector I know that there are 20 (51%) universities which use Blackboard as their enterprise LMS, 16 (41%) use Moodle, and 3 (8%) use D2L. It is acknowledged that there are some departments within universities that use another LMS but according to Edutechnica’s methodology these were excluded from their analysis.

  1. Disclosure: Blackboard is a client of MindWires Consulting.

The post New LMS Market Data: Edutechnica provides one-year update appeared first on e-Literate.

Introduction to Oracle BI Cloud Service : Provisioning Data

Rittman Mead Consulting - Tue, 2014-09-23 04:00

In the first post in this series I looked at the new Oracle BI Cloud Service, which went GA over the weekend and which Rittman Mead have been using these past few weeks as part of a beta release. In that first post I looked at what BICS is and who it's aimed at in this initial release, and went through the features at a high level; over the rest of the week I'll be looking at the features in detail, starting today with the data upload and provisioning process. Here are the links to the rest of the series, with the items getting updated over the week as I post each entry:

As I mentioned in that first post, “Introduction to Oracle BI Cloud Service : Product Overview”, BICS in this initial release is, to my mind, aimed at departmental use-cases where someone wants to quickly upload and analyse an offline dataset and share the results with other members of their team. BICS comes bundled with Oracle Database Schema Service and 50GB of storage, and OBIEE in this setup reports just against this data source, with no ability to reach out dynamically to other data sources or blend those sources with the main one in Oracle's cloud database. It's aimed really at users with a single source of data to work with, who've probably obtained it as an export from some other system and just want to be able to report against it, though as we'll see later in this post it is possible to link to other SaaS sources with a bit of PL/SQL wizardry.

So the first task you're likely to perform when working with BICS is to upload some data to report on. There are three main options for uploading data to BICS: two are browser-based and aimed at end-users, and one uses SQL Developer and is aimed more at developers. BICS itself comes with a menu item on the home page for uploading data, and this is what we think users will use most, as it's built into the tool and fairly prominent.


Clicking on this menu item launches an ApEx application hosted in the Database Schema Service that comes with BICS, which allows you to upload and parse XLS and delimited file types to the database cloud instance and then store the contents in database tables.


Oracle Database Schema Service also comes with Application Express (ApEx) as a front-end, and ApEx has similar tools for uploading datasets into the service, with additional features for creating views and PL/SQL packages to process and manipulate the data – something we used in our beta program example to connect to Salesforce.com and download data using their REST API. In theory you shouldn't need to use these features much, but SIs and partners such as ourselves will no doubt use ApEx a lot to build out the loading infrastructure, data cleansing and other features that you might want for a packaged cloud app – so get your PL/SQL books out and brush up on ApEx development.
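To give an idea of the kind of PL/SQL involved, here is a minimal, hypothetical sketch of calling an external REST API with the APEX_WEB_SERVICE package; the endpoint URL and the stg_rest_payload staging table are invented for illustration, and a real Salesforce.com integration would also need authentication and parsing of the response:

-- Hypothetical sketch: fetch a REST payload and stage it for later parsing.
-- The endpoint and the stg_rest_payload table are illustrative only.
declare
  l_response clob;
begin
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://example.com/api/accounts',
                  p_http_method => 'GET');

  -- store the raw payload so it can be parsed into relational tables
  insert into stg_rest_payload (loaded_at, payload)
  values (systimestamp, l_response);

  commit;
end;
/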


The other way to get data into BICS is to use Oracle SQL Developer, which has a special Oracle Cloud connector type that allows you to view and work with database objects as if they were regular ones, and to upload data to the cloud in the form of “carts”. I'd imagine these options will get extended over time, either by tools or utilities Oracle releases for this v1.0 BICS release, or by BICS eventually supporting the full Oracle Database Instance Service that will support regular SQL*Net connections from ETL tools.


So once you've got some data uploaded into Database Schema Service, you'll end up with a set of source tables from which you can create your BI Repository. Check back tomorrow for more details on how BICS's new thin-client data modeller works and how you create your business model against this cloud data source, including how the repository editing and checkout process works in this new, potentially multi-user development environment.

 

Categories: BI & Warehousing

How to measure Exadata SmartScan efficiency

Yann Neuhaus - Tue, 2014-09-23 03:09

A thread on the OTN Forum about Exadata came down to the following question: "But how can I monitor if it is effectively used or not?". This is a common question. There are 3 exclusive features coming with Exadata, and instance statistics can show their usage. Even better: two of them can be checked on your current (non-Exadata) system, which is good for foreseeing how Exadata could improve your workload.

Let's see how to measure the efficiency of the following features:

  • Have reads eligible to SmartScan
  • Avoid I/O with Storage Index
  • Avoid transfer with offloading
Have reads eligible to SmartScan

First of all, SmartScan occurs only on direct-path reads. If you don't see 'cell smart table scan' and 'cell smart index scans' in your top timed events, then SmartScan can do nothing for you. On a non-Exadata system, the same reads show up as the 'direct path read' wait event.

If those direct-path reads are not a significant part of your DB time, then you have something else to do before going to Exadata: you should leverage direct-path reads with full table scans, parallel query, etc.

Then, when you are on Exadata and 'cell smart table scan' and 'cell smart index scans' are used, you can check the proportion of reads that actually use SmartScan.

The SmartScan input is 'cell physical IO bytes eligible for predicate offload'. This is the amount of reads (in bytes) that goes into the SmartScan code. The total amount of reads is 'physical read total bytes', so you can compare the two to see which part of your reads is subject to SmartScan.

If 'cell physical IO bytes eligible for predicate offload' / 'physical read total bytes' is small, then you have something to tune here: you want direct-path reads, and you want to see 'TABLE ACCESS STORAGE' in the execution plan.

Not yet on Exadata? The Performance Analyzer can simulate it. The statistic is 'cell simulated physical IO bytes eligible for predicate offload'.

Avoid I/O with Storage Index

When you know that SmartScan is used, or can be used, on a significant part of your reads, the first thing you want to do is avoid physical I/O. Among the 'cell physical IO bytes eligible for predicate offload', some reads will not require disk I/O at all, thanks to Storage Indexes. You have that volume in 'cell physical IO bytes saved by storage index'. Just compare it with the eligible volume and you know the amount of disk reads saved by Storage Indexes. That is the most efficient optimization in SmartScan: you don't have to read those blocks, you don't have to uncompress them, you don't have to filter them, you don't have to transfer them...

Avoid transfer with offloading

Then there is offloading proper. The previous feature (Storage Indexes) addressed I/O elimination; that is the key feature for performance. Offloading addresses the transfer from the storage cells to the database servers; that is the key feature for scalability.

In the last decade, we replaced a lot of direct-attached disks with SANs. That was not for performance reasons; it was for maintainability and scalability. A shared storage system helps to allocate disk space when needed, get good performance by striping, and get high availability by mirroring. The only drawback is a transfer time that is higher than with direct-attached disks.

Exadata still has the scalable architecture of the SAN, but it releases the transfer bottleneck with offloading (in addition to the fast interconnect, which is very efficient). What can be filtered early on the storage cells does not have to be transferred: columns that are not in the select clause, and rows outside of the where (or join) clause predicates.

And you can measure it as well. When you measure it on non-Exadata with the Performance Analyzer, you compare the SmartScan output, which is 'cell simulated physical IO bytes returned by predicate offload', to the SmartScan input, 'cell simulated physical IO bytes eligible for predicate offload'. This is a good estimation of the efficiency you can expect when going to Exadata.

When you are on Exadata, it may be different: compressed data has to be uncompressed in order to apply the predicates and projections at the storage cells. The predicate/projection offloading input is therefore 'cell IO uncompressed bytes', and you compare that to 'cell physical IO interconnect bytes returned by smart scan'.

Summary

If you want to see Exadata SmartScan efficiency, just check an AWR report and compare the following:

  • 'cell physical IO bytes eligible for predicate offload' / 'physical read total bytes' – goal: high %
  • 'cell physical IO bytes saved by storage index' / 'cell physical IO bytes eligible for predicate offload' – goal: high %
  • 'cell physical IO interconnect bytes returned by smart scan' / 'cell IO uncompressed bytes' – goal: small %
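If you would rather pull the numbers straight from the instance than from an AWR report, a minimal sketch along these lines computes the three ratios from V$SYSSTAT (the values are cumulative since instance startup, so interpret them accordingly):

-- the three SmartScan efficiency ratios, instance-wide since startup
select round(100 * eligible / nullif(total_read, 0), 1)   pct_eligible,
       round(100 * si_saved / nullif(eligible, 0), 1)     pct_saved_by_si,
       round(100 * returned / nullif(uncompressed, 0), 1) pct_returned
from (
  select max(decode(name, 'cell physical IO bytes eligible for predicate offload', value)) eligible,
         max(decode(name, 'physical read total bytes', value))                             total_read,
         max(decode(name, 'cell physical IO bytes saved by storage index', value))         si_saved,
         max(decode(name, 'cell physical IO interconnect bytes returned by smart scan', value)) returned,
         max(decode(name, 'cell IO uncompressed bytes', value))                            uncompressed
  from v$sysstat
);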
You probably wonder why I don't use the 'smart scan efficiency ratio' that you find in various places? Those ratios are often wrong, for two reasons:

  • They compare 'cell physical IO interconnect bytes returned by smart scan' to 'cell physical IO interconnect bytes'. But the latter includes writes as well, and because of ASM mirroring, writes are multiplied when measured at the interconnect level.

  • The 'cell physical IO interconnect bytes returned by smart scan' value can't be compared with 'physical read total bytes', because the former includes some uncompressed data.

For these reasons, we cannot use a single ratio that covers all the SmartScan features.

This is why I always check the 3 pairs above in order to get a relevant picture. And two of them are available in simulation mode (I'll blog about it soon).


OED 12c

Nuno Souto - Tue, 2014-09-23 02:51
Yeah... Aka: Oracle Enterprise Damager... For those who might not know - yes indeed, a few of the so-called "cognoscenti" are not all-knowing! - we've been engaged in using Grid Control to monitor all our db servers for quite a while. That's both MSSQL and Oracle RDBMS servers. For 4 years now! In a nutshell and to cut a very long story short: we started with 10g. The lesser said...

SQL Server 2014: classic commit vs commit with delayed durability & I/Os

Yann Neuhaus - Mon, 2014-09-22 23:43

When you learn about SQL Server, you will often hear that a commit is a synchronous operation you can trust. In this blog post, I will provide some details about what we mean by synchronous behavior, because there is sometimes confusion when I talk about the new delayed durability feature provided by SQL Server 2014. If you want more details on this new feature, please read the blog post of my colleague Stéphane Haby here. A common shortcut is the following: writing to the transaction log is synchronous, while writing with the new delayed durability feature is asynchronous.

First of all, you probably know that the buffer manager guarantees that the transaction log is written before the changes to the database are written. This is the famous protocol called write-ahead logging (WAL). Log records are not written directly to disk, but first into the log buffer, and are then flushed to disk in a purely asynchronous manner. However, at commit time the related thread must wait for the writes to complete up to the point of the commit log record in the transaction log. This is the synchronous part of the commit operation, required to meet the WAL protocol.

On the other hand, the new delayed durability feature makes the commit operation asynchronous (like the writing to the transaction log itself), with one big difference: the related thread doesn't have to wait until the commit log record is written to the transaction log. This new feature brings performance improvements, but with the caveat of potential data loss.

We can prove that both commit operations write asynchronously by using either the Process Monitor tool or a debugger, trying to catch the part of the code responsible for writing to the transaction log file.

I will use the following T-SQL script for this demonstration:

--> Commit transaction (without delayed durability option)

USE AdventureWorks2012;
GO

-- Ensure DELAYED_DURABILITY is OFF for this test
ALTER DATABASE AdventureWorks2012 SET DELAYED_DURABILITY = DISABLED;
GO

-- Create table t_tran_delayed_durability
IF OBJECT_ID(N't_tran_delayed_durability', 'U') IS NOT NULL
    DROP TABLE t_tran_delayed_durability;
GO

create table t_tran_delayed_durability
(
    id int identity
);
GO

-- insert 1000 small transactions
declare @i int = 1000

while @i > 0
begin
    insert t_tran_delayed_durability default values

    set @i = @i - 1;
end;

 

--> Commit transaction (with delayed durability enabled)

-- Ensure DELAYED_DURABILITY is ON for this test
ALTER DATABASE AdventureWorks2012 SET DELAYED_DURABILITY = ALLOWED;
GO

-- Create table t_tran_delayed_durability
IF OBJECT_ID(N't_tran_delayed_durability', 'U') IS NOT NULL
    DROP TABLE t_tran_delayed_durability;
GO

create table t_tran_delayed_durability
(
    id int identity
);
GO

-- insert 1000 small transactions
declare @i int = 1000

while @i > 0
begin
    begin tran tran_1
    insert t_tran_delayed_durability default values
    commit tran tran_1 with (DELAYED_DURABILITY = on)

    set @i = @i - 1;
end;
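As a quick sanity check before running either script, you can confirm the database-level setting; SQL Server 2014 exposes it on sys.databases:

-- check the current delayed durability setting of the test database
SELECT name, delayed_durability_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';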

 

Below, you will find an interesting picture of the Process Monitor trace output that shows the SQL Server file system activity when writing to the transaction log file in both cases.

 

--> Commit transaction (without delayed durability option)

 


 

You will notice that SQL Server uses the WriteFile() function to write to the transaction log for each commit operation (4096 bytes each). I will only show you a sample of the output, but you can imagine the final number of records. If we take a look at the Process Monitor stack, you will notice that SQL Server uses the WriteFile() Windows function, located in the Kernel32.lib library, to write to the transaction log with an OVERLAPPED structure (in other words, asynchronous I/O).

 


 

This test confirms what Bob Dorr explains in the Microsoft article about SQL Server I/O and transaction log I/O.

 

--> Commit transaction (with delayed durability enabled)

 


 

In this case, the same function is used by SQL Server, with one big difference: SQL Server groups the I/O into chunks (in my case 16K, 48K, and 60K) before writing to disk. Clearly, there is less activity here (in my case 18 lines against approximately 1000 lines for the first test).

We can also attach a debugger (for instance WinDbg) to the SQL Server process and set a breakpoint on the kernel32!WriteFile() function for the calling thread, in order to get more details about the execution stack. Note that the Process Monitor stack showed the module KERNELBASE.dll for the WriteFile() function, but as mentioned in this Microsoft article, kernelbase.dll gets functionality from kernel32.dll and advapi32.dll.

 


 

Both commit operations show the same stack, except of course for the number of executions.

To summarize, I wanted to show you that both commit operations (with and without delayed durability) use asynchronous I/O to write to the transaction log file. The big difference is that with the delayed durability option, SQL Server improves log-write performance by deferring and grouping the I/O into chunks of up to 60K before writing them to disk. I hope this helps you understand more about SQL Server commit operations.

Oracle OEM Cloud Control 12.1.0.4 - agent upgrade & patch

Yann Neuhaus - Mon, 2014-09-22 20:09

Migrating to the new Oracle OEM Cloud Control 12.1.0.4 release makes it necessary for the DBA to upgrade the old agents to version 12.1.0.4. If your infrastructure has a huge number of agents and you want to apply the agent patches to the upgraded agents, this can be a very time-consuming job. However, there is a way to perform the whole operation in one shot.

In my example, we have an agent in version 12.1.0.3:

 

oracle@vmtestfusion01:/u00/app/oracle/agent12c/core/12.1.0.3.0/OPatch/ [agent12c] ./opatch lsinventory

Oracle Interim Patch Installer version 11.1.0.10.0

Copyright (c) 2013, Oracle Corporation.

All rights reserved.

Oracle Home       : /u00/app/oracle/agent12c/core/12.1.0.3.0

Central Inventory : /u00/app/oraInventory  

from           : /u00/app/oracle/agent12c/core/12.1.0.3.0/oraInst.loc

OPatch version   : 11.1.0.10.0

OUI version       : 11.1.0.11.0

Log file location : /u00/app/oracle/agent12c/core/12.1.0.3.0/cfgtoollogs/opatch/opatch2014-09-02_08-00-36AM_1.log

OPatch detects the Middleware Home as "/u00/app/oracle/Middleware/11g"

Lsinventory Output file location : /u00/app/oracle/agent12c/core/12.1.0.3.0/cfgtoollogs/opatch/lsinv/lsinventory2014-09-02_08-00-36AM.txt

Installed Top-level Products (1):

EM Platform (Agent)                                                 12.1.0.3.0

There are 1 products installed in this Oracle Home.

Interim patches (2):

Patch 10203435     : applied on Sat Jun 22 08:51:24 CEST 2013

Unique Patch ID: 15915936.1
Created on 7 Feb 2013, 18:06:13 hrs PST8PDT

Bugs fixed:     10203435

Patch 16087066     : applied on Sat Jun 22 08:51:22 CEST 2013

Unique Patch ID: 15928288  

Created on 4 Feb 2013, 04:52:18 hrs PST8PDT  

Bugs fixed:     13583799, 6895422

OPatch succeeded.

 

In the OMS environment, we have to download the agent-side patches and copy them to $OMS_HOME/install/oneoffs/12.1.0.4.0/Generic.

In my example, I downloaded patch 19002534, the EM DB plugin bundle patch 12.1.0.6.1 (agent side):

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12cR4/oms/install/oneoffs/12.1.0.4.0/Generic/ [oms12c] ls

p19002534_121060_Generic.zip

 

The agent upgrade procedure will use this directory to apply the patch.

Let's upgrade the agent from 12.1.0.3 to 12.1.0.4 by using the Cloud Control console:

 


 

Select the agent to be upgraded:

 


 

The new job screen lists the different steps:

 


 

In the log file, we can see the patch being picked up:

 

Tue Sep 2 08:07:26 2014 -

Found following valid patch files from the patch location which will be considered in this patching session :

Tue Sep 2 08:07:26 2014 - p19002534_121060_Generic.zip

Tue Sep 2 08:07:26 2014 - /u00/app/oracle/agent12c/core/12.1.0.4.0/bin/unzip -o p19002534_121060_Generic.zip -d /u00/app/oracle/agent12c/oneoffs >> /u00/app/oracle/agent12c/core/12.1.0.4.0/cfgtoollogs/agentDeploy/applypatchesonapplicablehome2014-09-02_08-07-26.log 2>&1

Archive: p19002534_121060_Generic.zip  

creating: /u00/app/oracle/agent12c/oneoffs/19002534/  

creating: /u00/app/oracle/agent12c/oneoffs/19002534/etc/  

creating: /u00/app/oracle/agent12c/oneoffs/19002534/etc/config/

inflating: /u00/app/oracle/agent12c/oneoffs/19002534/etc/config/actions.xml

…………

 

By checking the agent inventory, we can verify that the newly upgraded agent has received the EM DB PLUGIN BUNDLE PATCH 12.1.0.6.1:

 

[agent12c] opatch lsinventory -oh /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/

Oracle Interim Patch Installer version 11.1.0.10.4

Copyright (c) 2014, Oracle Corporation. All rights reserved.

Oracle Home       : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0

Central Inventory : /u00/app/oraInventory

   from          : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0//oraInst.loc

OPatch version   : 11.1.0.10.4

OUI version       : 11.1.0.12.0

Log file location : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/cfgtoollogs/opatch/opatch2014-09-02_10-09-32AM_1.log

OPatch detects the Middleware Home as "/u00/app/oracle/Middleware/11g"

Lsinventory Output file location : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/cfgtoollogs/opatch/lsinv/lsinventory2014-09-02_10-09-32AM.txt

Installed Top-level Products (1):

Enterprise Manager plug-in for Oracle Database                       12.1.0.6.0

There are 1 products installed in this Oracle Home.

Interim patches (1) :

Patch 19002534     : applied on Tue Sep 02 10:05:37 CEST 2014

Unique Patch ID: 17759438

Patch description: "EM DB PLUGIN BUNDLE PATCH 12.1.0.6.1 (AGENT SIDE)"

   Created on 17 Jun 2014, 09:10:22 hrs PST8PDT

   Bugs fixed:

     19002534, 18308719

 

This feature is very useful for massive agent upgrades, because the agent is upgraded and the bundle patch applied in the same operation. You can also use a patch plan to apply bundle patches to multiple agents in one operation.

My presentations at OOW 2014 (See you there!)

Tanel Poder - Mon, 2014-09-22 17:16

Here’s where I will hang out (and in some cases speak) during the OOW:

Sunday, Sep 28 3:30pm – Moscone South – 310

Monday, Sep 29 8:30am – 4:00pm - Creativity Museum

  • I will mostly hang out at the OakTableWorld satellite event and listen to the awesome talks there.

Tuesday, Sep 30 10:00am – Creativity Museum

  • I will speak about Hacking Oracle 12c for an hour at OakTableWorld (random stuff about the first things I researched when Oracle 12c was released)
  • I also plan to hang out there for most of the day, so see you there!

Wednesday, Oct 1 – 3:00pm – Jillian’s

  • I’ll be at Enkitec’s “office” (read: we’ll have beer) in Jillian’s (on 4th St between Mission/Howard) from 3pm onwards on Wednesday, so, come by for a chat.
  • Right after Enkitec’s office hours I’ll head to the adjacent room for the OTN Bloggers meetup and this probably means more beer & chat.

Thursday, Oct 2 – 10:45am – Moscone South – 104

  • Oracle In-Memory Database In Action
  • In this presentation Kerry and I will walk you through the performance differences when switching from an old DW/reporting system (on a crappy I/O subsystem) all the way to having your data cached in Oracle's In-Memory Column Store – with all of Oracle 12.1.0.2's performance bells and whistles enabled. It will be awesome – see you there! ;-)

 


big thanks to Jim Czuprynski for NEOOUG meeting presentations

Grumpy old DBA - Mon, 2014-09-22 16:56
Jim, the smooth-talking, always-motivated Oracle Ace Director, did two great presentations for us here at NEOOUG on Friday, September 19th.

His presentations can be found here: 12c SQL that almost tunes itself and 12c How hot is your data?

Thanks Jim!
Categories: DBA Blogs

Oracle OpenWorld and JavaOne 2014 Cometh

Oracle AppsLab - Mon, 2014-09-22 11:28

This time next week, we’ll be in the thick of the Oracle super-conference, the combination of Oracle OpenWorld and JavaOne.

This year, our team and our larger organization, Oracle Applications User Experience, will have precisely a metric ton of activities during the week.

For the first time, our team will be doing stuff at JavaOne too. On Monday, Anthony (@anthonyslai) will be talking about the IFTTPi workshop we built for the Java team for Maker Faire back in May, and Tony will be showing those workshop demos in the JavaOne OTN Lounge at the Hilton all week.

If you’re attending either show or both, stop by, say hello and ask about our custom wearable.

Speaking of wearables, Ultan (@ultan) will be hosting a Wearables Meetup, a.k.a. Dress Code 2.0, in the OTN Lounge at OpenWorld on Tuesday, September 30, from 4-6 PM. We'll be there, and here's what to expect:

  • Live demos of wearables proof-of-concepts integrated with the Oracle Java Cloud.
  • A wide selection of wearable gadgets available to try on for size.
  • OAUX team chatting about use cases, APIs, integrations, UX design, fashion and how you can use OTN resources to build your own solutions.

Update: Here are Bob (@OTNArchBeat) and Ultan talking about the meetup.

Here’s the list of all the OAUX sessions:

Oracle Applications Cloud User Experiences: Trends, Tailoring, and Strategy

Presenter: Jeremy Ashley, Vice President, Applications User Experience; Jatin Thaker, Senior Director, User Experience; and Jake Kuramoto, Director, User Experience

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand what we mean by extensibility after hearing a high-level overview of the tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future, so we know you will be inspired about the future of the cloud.

Session ID: CON7198
Date: Monday, Sept. 29, 2014
Time: 2:45 p.m. – 3:30 p.m.
Location: Moscone West – 3007

Learn How to Create Your Own Java and Internet of Things Workshop

Presenter: Anthony Lai, User Experience Architect, Oracle

This session shows how the Applications User Experience team created an interactive workshop for the Oracle Java Zone at Maker Faire 2014. Come learn how the combination of the Raspberry Pi and Embedded Java creates a perfect platform for the Internet of Things. Then see how Java SE, Raspi, and a sprinkling of user experience expertise engaged Maker Faire visitors of all ages, enabling them to interact with the physical world by using Java SE and the Internet of Things. Expect to play with robots, lights, and other Internet-connected devices, and come prepared to have some fun.

Session ID: JavaOne 2014, CON7056
Date: Monday, Sept. 29, 2014
Time: 4 p.m. – 5 p.m.
Location: Parc 55 – Powell I/II

Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy

Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Aylin Uysal, Director, Human Capital Management User Experience, Oracle

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand how you can extend with the Oracle tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future. Come and get inspired about the future of the Oracle HCM Cloud.

Session ID: CON8156
Date: Tuesday, Sept. 30, 2014
Time: 12:00 p.m. – 12:45 p.m.
Location: Palace – Presidio

Oracle Sales Cloud: How to Tailor a Simple and Efficient Mobile User Experience

Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Killian Evers, Senior Director, Applications User Experience, Oracle

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. In this session, learn how Oracle is addressing mobility by delivering the best user experience for each device as you access your enterprise data in the cloud. Hear about the future of enterprise experiences and the latest trends Oracle sees emerging in the consumer market. You’ll understand what Oracle means by extensibility after getting a high-level overview of the tools designed for tailoring the cloud user experience, and you’ll also get a glimpse into the future of Oracle Sales Cloud.

Session ID: CON7172
Date: Wednesday, Oct. 1, 2014
Time: 4:30 p.m. – 5:15 p.m.
Location: Moscone West – 2003

Oracle Applications Cloud: First-Time User Experience

Presenters: Laurie Pattison, Senior Director, User Experience; and Mindi Cummins, Principal Product Manager, both of Oracle

So you’ve bought and implemented Oracle Applications Cloud software. Now you want to get your users excited about using it. Studies show that one of the biggest obstacles to meeting ROI objectives is user acceptance. Based on working directly with thousands of real users, this presentation discusses how Oracle Applications Cloud is designed to get your users excited to try out new software and be productive on a new release ASAP. Users say they want to be productive on a new application without spending hours and hours of training, experiencing death by PowerPoint, or reading lengthy manuals. The session demos the onboarding experience and even shows you how a business user, not a developer, can customize it.

Session ID: CON7972
Date: Thursday, Oct. 2, 2014
Time: 12 p.m. – 12:45 p.m.
Location: Moscone West – 3002

Using Apple iBeacons to Deliver Context-Aware Social Data

Presenters: Anthony Lai, User Experience Architect, Oracle; and Chris Bales, Director, Oracle Social Network Client Development

Apple’s iBeacon technology enables companies to deliver tailored content to customers, based on their location, via mobile applications. It will enable social applications such as Oracle Social Network to provide more relevant information, no matter where you are. Attend this session to see a demonstration of how the Oracle Social Network team has augmented the mobile application with iBeacons to deliver more-context-aware data. You’ll get firsthand insights into the design and development process in this iBeacon demonstration, as well as information about how developers can extend the Oracle Social Network mobile applications.

Session ID: Oracle OpenWorld 2014, CON8918
Date: Thursday, Oct. 2, 2014
Time: 3:15 p.m. – 4 p.m.
Location: Moscone West – 2005

Hope to see you next week.

2014 Annual Bloggers Meetup

OTN TechBlog - Mon, 2014-09-22 11:22

The Annual Oracle Bloggers Meetup, one of our favorite events of OpenWorld, is happening at the usual place and time, thanks to the Oracle Technology Network and Pythian.

What: Oracle Bloggers Meetup 2014

When: Wed, 1-Oct-2014, 5:30pm

Where: Main Dining Room, Jillian's Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). Please comment with “COUNT ME IN” if you are coming – we need to know the attendance numbers.

Read more at Alex Gorbachev's latest blog post.


PeopleSoft and Web Browsers – The Guide

Duncan Davies - Mon, 2014-09-22 08:00

The topic of PeopleSoft/PeopleTools versions and web browsers is often a complicated one, yet it's an issue that every client will face when they either upgrade PeopleTools or move to a new application version that contains a Tools increase.

Cedar have recently been asked by a client for some assistance in getting definitive answers to the important questions, and we thought it would be useful to share this information. We've put together a white paper that shows you the relevant browser versions for PeopleTools 8.54 and PeopleTools 8.53 (i.e. the versions that customers are likely to be upgrading to over the next year or so):

Cedar Consulting White Paper – PeopleSoft and Web Browsers

We hope that it saves you some time during your next upgrade.


OOW - Focus On Support and Services for Siebel CRM

Chris Warticki - Mon, 2014-09-22 08:00
Focus On Support and Services for Siebel CRM

Conference Sessions – Thursday, Oct 02, 2014

Customer Success Story: State of Michigan, MAGI Case Study
Beth Long, Project Manager, CGI Technologies and Solutions Inc.
Sue Doby, Senior Director, Public Sector Consulting, Oracle
9:30 AM - 10:15 AM, Marriott Marquis - Salon 1/2/3*, CON2747

Best Practices for Maintaining Siebel CRM
Prem Lakshmanan, Senior Director, Customer Support, Oracle
Iain McGonigle, Senior Director, Customer Support, Oracle
12:45 PM - 1:30 PM, Moscone West - 3009, CON8314

My Oracle Support Monday Mix – Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year's gathering is Monday, September 29, from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3-minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix

Oracle Support Stars Bar & Mini Briefing Center – Moscone West Exhibition Hall, 3461 and 3908

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings, where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.

Monday, Sep 29: 9:45 AM - 6:00 PM
Tuesday, Sep 30: 9:45 AM - 6:00 PM
Wednesday, Oct 01: 9:45 AM - 3:45 PM

To secure a seat in a session, please use Schedule Builder to add it to your schedule.