Feed aggregator

Audit logon and logoff of specific users, e.g. SYS and SYSTEM

Tom Kyte - Mon, 2018-01-29 18:06
Hi, I need to write a trigger to audit logon and logoff activities of specific users 'SYS' and 'SYSTEM'. I have one trigger but it's not inserting records into the table mentioned in the trigger. Please help me to fix the issue. Your help will be...
Categories: DBA Blogs

How to unpivot table data with out displaying column names

Tom Kyte - Mon, 2018-01-29 18:06
Hi Tom, Am working on Oracle DB and below query <code> select t1.id as dbid, t2.mid as askid, t3.m2idd as amid from table1 t1, table2 t2, table3 t3 where t1.actid = t2.senid and t2.denid = t2.mkid ...
Categories: DBA Blogs

SQL query that returns the difference between the latest

Tom Kyte - Mon, 2018-01-29 18:06
Problem Statement [question redacted] Comment from Connor, Jan 29 2018 ================================ You can see from our initial answer that we were thrilled that you provided us a complete test case. However, when things seem too good to b...
Categories: DBA Blogs

SQL Query Optimization

Tom Kyte - Mon, 2018-01-29 18:06
We have a table called PROTOKOLL, with the following definition: <code> CREATE TABLE "PROTOKOLL" ( "UUID" CHAR(36 BYTE) NOT NULL ENABLE, "VERSIONSNUMMER" VARCHAR2(20 BYTE), "TYP" CHAR(1 BYTE) NOT NULL ENABLE, "BENUTZER_ID" VARCHAR2(25...
Categories: DBA Blogs

Spectre and Meltdown Vulnerability and Oracle Utilities Products

Anthony Shorten - Mon, 2018-01-29 16:18

As you may or may not be aware, a set of hardware based security vulnerabilities known as Spectre and Meltdown have been identified. Vendors are quickly issuing software patches to address these hardware based vulnerabilities. Oracle has issued a number of patches to address this issue in its January 2018 patchsets.

Customers should refer to Addendum to the January 2018 CPU Advisory for Spectre and Meltdown (Doc ID: 2347948.1) for details of the patches available to address this issue and the state of patches for other products.

At this time, no patches are expected for Oracle Utilities products, as the vulnerabilities are addressed by applying the patches outlined in the above article. It is highly recommended that Oracle Utilities customers apply the patches outlined in that article to protect their systems. For customers on non-Oracle platforms, it is recommended to refer to the relevant vendor site for any operating system or related patches for those platforms.

Edge Conference 2018 is coming - Technical Sessions

Anthony Shorten - Mon, 2018-01-29 16:16

It is that time of year again, Customer Edge conference time. This year we will be once again holding a Technical stream which focuses on the Oracle Utilities Application Framework and related products. Once again, I will be holding the majority of the sessions at the various conferences.

The sessions this year are focused on giving valuable advice as well as a window into our future plans for the various technologies we are focusing on. As usual, there will be a general technical session covering our road map, as well as a specific set of sessions targeting important topics. The technical sessions planned for this year include:

  • Reducing Your Storage Costs Using Information Life-cycle Management – Maintaining storage while satisfying business data retention rules can be increasingly costly and challenging. Using the Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
  • Integration using Inbound Web Services and REST with Oracle Utilities – Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.
  • Optimizing Your Implementation – Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost of Ownership.
  • Testing Your On-Premise and Cloud Implementations – Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.
  • Securing Your Implementations – With the increase in cybersecurity concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
  • Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option – The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
  • Mobile Application Framework Overview – The Oracle Utilities Application Framework has introduced a new Mobile Framework for use in the Oracle Utilities products. This session gives an overview of the mobile framework capabilities for future releases.
  • Developing Extensions using Groovy – Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: this session will be very technical in nature.
  • Ask Us Anything Session – Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

This year we have decided not only to discuss capabilities but also to give an idea of how we use those facilities in our own cloud implementations to reduce operating costs, which you can use as a template for on-premise and hybrid implementations.

For customers and partners interested in attending the USA Edge Conference, registration is available.

 

Avoid Blind SQL Call from ADF Task Flow Method

Andrejus Baranovski - Mon, 2018-01-29 15:42
Keep an eye open on ADF Task Flow Method Call activities where methods from ADF Bindings are called. JDEV 12c sets deferred refresh for ADF binding iterators related to TF Method Call activities, and this causes blind SQL to be executed. Blind SQL is a query executed without bind variables.

Let me explain the use case, so that it's clearer what I'm talking about.

Common example - TF initialization method call where data is prepared. Typically this involves VO execution with bind variables:


Such a method call could invoke a binding operation either directly (pay attention - the bind variable value is set):


Or through Java bean method using API:


My example renders a basic UI form in the fragment after the TF method call is invoked:


If you log SQL queries executed during form rendering, you will see two queries instead of the expected one. The first query is executed without bind variables, while the second gets the correct bind variable assigned:


What is the cause of the first query without bind variables? It turns out the iterator (with Refresh = deferred) from the page definition mapped to the TF method call is causing it. The iterator is initialized at the wrong time, before the bind variable is assigned, and this causes the blind SQL call:


Workaround is to set Refresh = never:


With Refresh = never, only one query is executed as expected, with bind variable assigned:


This may look minor but, trust me, with complex queries such a fix can be a great help for performance tuning. Avoid executing SQL queries without bind variables.

Download sample application - ADFTFCallBindingApp.zip.

Result Cache: when *not* to use it

Yann Neuhaus - Mon, 2018-01-29 15:39

I recently encountered a case where the result cache was incorrectly used, leading to high contention when the application hit a peak of load. It was no surprise when I saw that the function was called with an ‘ID’ as argument, which may have thousands of values in this system. I mentioned to the software vendor that the result cache must be used only when the function is called frequently with the same arguments, not with random values, even if each value has 2 or 3 identical calls. And, to detail this, I looked at the Oracle Documentation to link the part which explains when the result cache can be used and when it should be avoided.

But I’ve found nothing relevant. This is another(*) case where the Oracle Documentation is completely useless. Without explaining how a feature works, you completely fail to get this feature used correctly. Most people will not take the risk of using it, and a few will use it in the wrong place, before definitively blacklisting this feature.

(*) By another case, I’m thinking about Kamil Stawiarski presentation about Pragma UDF and the lack of useful documentation about it.

Oracle documentation

So this is what I’ve found in the Database Performance Tuning Guide about the Benefits of Using the Server Result Cache:

  1. The benefits of using the server result cache depend on the application
  2. OLAP applications can benefit significantly from its use.
  3. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.

So, this is vague (‘depends’, ‘can benefit’, ‘good candidates’) and doesn’t help to decide when it can be used.
The ‘access a high number of rows but return a small number’ is an indication of why cache hits can be beneficial. However, there is no mention of the most important things, which are:

  • The cached result is invalidated by any DML on the tables the result relies on.
  • A cache miss, when the result has been invalidated, is expensive.
  • A cache miss, when the result is not yet in the result cache, is expensive.
  • The ‘expensive’ here is a scalability issue: not detected in unit tests, but big contention when the load increases.
Real things to know

The first thing to know is that the Result Cache memory is protected by a latch:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 2 0 0 0 0
0000000060047870 Result Cache: SO Latch 0 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

This latch has no children:

SQL> select * from v$latch_children where name like '%Result Cache%';
 
no rows selected

Only one latch to protect the whole result cache: concurrent sessions – even for different functions – have to serialize their access on the same latch.

This latch is acquired in exclusive mode when the session has to write to the result cache (cache miss, invalidation,…) or in shared mode – since 11gR2 – when only reading. This has been explained by Alex Fatkulin http://afatkulin.blogspot.ch/2012/05/result-cache-latch-in-11gr2-shared-mode.html.

This means that, whatever the Oracle Documentation says, the benefit of result cache comes only at cache hit: when the result of the function is already there, and has not been invalidated. If you call the same function with always the same parameter, frequently, and with no changes in the related tables, then we are in the good case.

But if there was a modification of one of the tables, even of rows that have nothing to do with the result, then you will have an overhead: an exclusive latch get. And if you call the function with new values for the arguments, that’s also a cache miss which has to get this exclusive latch. And if you have multiple sessions experiencing cache misses, then they will spin on CPU to get the exclusive latch. This can be disastrous with a large number of sessions. I have seen this kind of contention for hours with connection pools set to 100 sessions when the function is called frequently with different values.
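If you suspect this pattern on your own system, the result cache statistics give a quick overview. This is a minimal sketch (V$RESULT_CACHE_STATISTICS and DBMS_RESULT_CACHE.MEMORY_REPORT are standard, but the exact statistic names may vary slightly between versions):

SQL> select name, value from v$result_cache_statistics;
SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report(detailed=>true)

A ‘Create Count Success’ that is high relative to ‘Find Count’ means you pay the exclusive latch often and hit the cache rarely – exactly the bad pattern described above.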

To show it, I create a demo table (just to have a dependency) and a result_cache function:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table created.
 
SQL> create or replace function F(n number) return number result_cache as begin for i in (select * from DEMO where DEMO.n=F.n) loop return i.n; end loop; end;
2 /
Function created.

I have just restarted the instance and my latch statistics are reset:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 2 0 0 0 0
0000000060047870 Result Cache: SO Latch 0 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

Result Cache Hit

This will call the function always with the same argument, and no change in the table it relies on:
SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(1); end loop; end;
2 /
PL/SQL procedure successfully completed.

So, the first call is a cache miss and the next 999 calls are cache hits. This is the perfect case for the Result Cache.

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 1009 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

So, that’s about 1000 latch gets. With cache hits you get the latch once per execution, and this is a shared latch, so no contention here.
Do you want to check that it is a shared latch? Just set a breakpoint with gdb on the ksl_get_shared_latch function (up to 12.1, because 12.2 uses ksl_get_shared_latch_int) and print the arguments (as explained by Stefan Koehler and Frits Hoogland):

As my RC latch is at address 00000000600477D0, I set a breakpoint on ksl_get_shared_latch where the first argument is 0x600477d0 and display the other arguments:

break ksl_get_shared_latch
condition 1 $rdi == 0x600477d0
commands
silent
printf "ksl_get_shared_latch laddr:%x, willing:%d, where:%d, why:%d, mode:%d\n", $rdi, $rsi, $rdx, $rcx, $r8
c
end

Then one call with cache hit displays:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8

Mode 8 is shared: many concurrent sessions can do the same without waiting. Shared is scalable: cache hits are scalable.

Cache miss – result not in cache

Here each call will have a different value for the argument, so that they are all cache misses (except the first one):

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

Now the ‘RC latch’ statistics have increased further:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 6005 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

This is about 5000 additional latch gets, which means 5 per execution. And, because it writes, you can expect them to be exclusive.

Here is my gdb script output when I call the function with a value that is not already in cache:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5347, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5374, mode:16

Mode 16 is exclusive. And we have 3 of them in addition to the shared one. You can imagine what happens when several sessions are running this: spin and wait, all sessions on the same resource.

Cache miss – result in cache but invalid

I run the same again, where all values are in cache now:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

So this is only 1000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 7005 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

The function depends on the DEMO table, and I make some modifications to it:

SQL> insert into DEMO values (0)
1 row created.
SQL> commit;
Commit complete.

This has invalidated all previous results. A new run will be all cache misses:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

And this is 5000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 12007 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0
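You can also see the effect of the invalidation in the cache content itself. A small sketch using V$RESULT_CACHE_OBJECTS (the STATUS column shows ‘Published’ for usable results and ‘Invalid’ for the ones thrown away by the DML):

SQL> select status, count(*) from v$result_cache_objects where type='Result' group by status;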

So what?

The important thing to know is that each cache miss requires exclusive access to the Result Cache, multiple times. Those must be avoided. The Result Cache is good for a static set of results. It is not a short-term cache to work around an application design where the function is called two or three times with the same values. This is, unfortunately, not explained in the Oracle Documentation. But it becomes obvious when we look at the implementation, or when we load test it with multiple sessions. The consequence can be this kind of high contention lasting minutes or hours:

Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
latch free 858,094 1,598,387 1863 78.8
enq: RC - Result Cache: Contention 192,855 259,563 1346 12.8

Without either knowledge of the implementation or relevant load tests, the risk is that a developer relies on his good results in unit testing and implements the Result Cache in each function. The consequence will be seen too late, in production, at a time of peak load. If this happens to you, you can disable the result cache (DBMS_RESULT_CACHE.BYPASS(TRUE);) but the risk is performance degradation in the ‘good cases’. Or recompile the procedures with RESULT_CACHE removed, but then you may introduce new contention on the library cache.
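For reference, here is the emergency bypass mentioned above, as a minimal sketch (BYPASS and FLUSH are documented DBMS_RESULT_CACHE procedures – remember that the bypass also removes the benefit of the ‘good cases’):

SQL> -- stop serving results from the cache, without recompiling anything
SQL> exec dbms_result_cache.bypass(true)
SQL> -- optionally release the memory currently held by the cache
SQL> exec dbms_result_cache.flush
SQL> -- revert once the contention has been understood and fixed
SQL> exec dbms_result_cache.bypass(false)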

 

Cet article Result Cache: when *not* to use it est apparu en premier sur Blog dbi services.

New Web ADI Update Available for EBS 12.1.3

Steven Chan - Mon, 2018-01-29 11:48

Oracle Web Applications Desktop Integrator (Web ADI) allows you to use Microsoft Office applications such as Excel, Word, and Project to upload data to Oracle E-Business Suite. For example, you can use Excel to create formatted spreadsheets on your desktop to download, view, edit, validate, and upload Oracle E-Business Suite data. 

We have just released a Web ADI update for EBS 12.1.3 with new functionality:

  • Digital Signature Support

    Some security options within Microsoft Excel allow files containing macros to be opened only if the files are digitally signed to identify them as coming from a trusted source. You can now set a profile option to have Oracle Web Applications Desktop Integrator affix a digital signature to the spreadsheets you create. Users will then be able to open the spreadsheets created by Oracle Web Applications Desktop Integrator with the higher Microsoft Excel security settings.

  • Desktop Integration Framework

    The Oracle E-Business Suite Desktop Integration Framework user interface is enhanced to let you define Java contents, an uploader, and an importer for an integrator, as well as to let you test an integrator definition and view an integrator definition in read-only mode.

You can download the update here:

This is a cumulative bundle patch update that includes all previously-released Web ADI updates for EBS 12.1.3.


Categories: APPS Blogs

Office Depot Pivots to a New Future with Oracle Cloud Applications

Oracle Press Releases - Mon, 2018-01-29 07:00
Press Release
Office Depot Pivots to a New Future with Oracle Cloud Applications Leading office products retailer to modernize its legacy systems in the cloud and accelerate omnichannel business success

Redwood Shores, Calif.—Jan 29, 2018

Office Depot, Inc. (NASDAQ:ODP), a leading omni-channel provider of business services, products and technology, selected Oracle Cloud Applications to modernize its IT systems and transform its business processes to meet the demands of today’s digital economy. By enhancing the performance and outcomes of key supply chain, HR, and financial management functions with Oracle Cloud Applications, Office Depot will be able to accelerate its omnichannel growth, simplify its IT infrastructure, and lower its costs.

To enable strategic business model changes and achieve future growth plans, Office Depot needed a comprehensive solution that could enhance its current infrastructure and extend its existing IT investments. After a competitive review, Office Depot selected Oracle Supply Chain Management (SCM) Cloud, Oracle Human Capital Management (HCM) Cloud, and Oracle Enterprise Performance Management (EPM) Cloud to speed time to market and control costs.

“Office Depot is pivoting away from being a traditional office products retailer to a broader omnichannel services and products company and that causes a lot of complexity given our existing technology,” said Damon Venger, senior director, IT Applications, Office Depot. “We evaluated multiple vendors, but only Oracle could deliver the end-to-end solutions we needed. Oracle Cloud applications continue to evolve and expand—ensuring they can grow with us over time. This was important when looking for a company that could be a true partner and understand our unique business needs.”

To support high-order volumes and a large range of products, Office Depot chose Oracle Supply Chain Management (SCM) Cloud for an end-to-end business solution that can help get products to market faster and offer real-time insight into day-to-day logistics, procurement, and forecasting. With Oracle HCM Cloud, Office Depot will be empowered with insightful and actionable data to manage its global workforce for a more positive and consistent employee experience worldwide, while eliminating individual and disjointed point solutions. Oracle Enterprise Performance Management (EPM) Cloud will help transform the finance function —from planning to financial close — to enable data-driven, strategic decision making, timely and accurate financial reporting, and lower IT complexity.

“Office Depot continues to digitally transform its business to better serve the needs of its constituents — from employees to customers,” said Steve Miranda, executive vice president of applications development, Oracle. “With Oracle Cloud Applications, the company will be well equipped to capitalize on new opportunities for growth, while delivering a differentiated, compelling service to its customers.”

Additional Information
To learn more about Oracle EPM Cloud, follow @OracleEPMCloud on Twitter or Facebook, or read the Modern Finance Leader blog.

More information on Oracle HCM Cloud can be found on the Modern HR in the Cloud blog, follow @OracleHCM on Twitter or Facebook.

For additional information on Oracle SCM Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Contact Info
Jennifer Yamamoto
Oracle PR
916-761-9555
jennifer.yamamoto@oracle.com
Rebecca Rakitin
Office Depot, Inc.
561-438-1450
Rebecca.Rakitin@officedepot.com
About Office Depot, Inc.

Office Depot, Inc. is a leading provider of office supplies, business products and services delivered through an omnichannel platform.

The company had 2016 annual sales of approximately $11 billion, employed approximately 38,000 associates, and served consumers and businesses in North America and abroad with approximately 1,400 retail stores, award-winning e-commerce sites and a dedicated business-to-business sales organization – with a global network of wholly owned operations, franchisees, licensees and alliance partners. The company operates under several banner brands including Office Depot®, OfficeMax®, BizBox, CompuCom®, Complete Office and Grand&Toy®. The company’s portfolio of exclusive product brands includes TUL®, Foray®, Brenton Studio®, Ativa®, WorkPro®, Realspace® and Highmark®.

Office Depot, Inc.’s common stock is listed on the NASDAQ Global Select Market under the symbol “ODP.”

Office Depot, Foray, Ativa and Realspace are trademarks of The Office Club, Inc. OfficeMax, TUL, Brenton Studio, WorkPro and Highmark are trademarks of OMX, Inc. CompuCom is a trademark of CompuCom Systems, Inc. and Complete Office is a trademark of Complete Office Solutions, LLC. Grand&Toy is a trademark of Grand & Toy, LLC in Canada. ©2017 Office Depot, Inc. All rights reserved. Any other product or company names mentioned herein are the trademarks of their respective owners.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Jennifer Yamamoto

  • 916-761-9555

Rebecca Rakitin

  • 561-438-1450

Case Study – 1

Jonathan Lewis - Mon, 2018-01-29 04:45

It has been some time since I wrote an article walking through the analysis of information on an AWR report, but a nice example appeared a few weeks ago on Twitter that broke a big AWR picture into a sequence of bite-sized chunks that made a little story, so here it is, replayed in sync with my ongoing thoughts. The problem started with the (highly paraphrased) question – “How could I get these headline figures when all the ‘SQL ordered by’ sections of the report show captured SQL account for 0.0% of Total?”. The report was a 30 minute snapshot from 11.2.0.3, and here’s the Load Profile. As you can see, the database was doing quite a lot of work as measured by the physical reads and writes, the number of block changes and size of redo, and the fact that the average CPU usage by the foreground sessions in this instance accounts for 9.5 CPUs. Generally speaking the 49 seconds per second of DB time is also a fairly good hint, when combined with the other numbers, that the instance is busy but, in the absence of any other information, that could be one session holding a blocking lock with 48 other sessions waiting for the lock to be released.

There are a couple of unusual (though not impossible) features to this profile. Given the amount of work the figures for user calls and executes look very small – again not technically impossible, just unlikely in “normal” database processing given the apparent workload – and bear in mind that the “SQL ordered by ..” figures seem to account for none of the workload. Similarly the figures for logical reads and block changes are a little unusual (for “normal” processing) – on average this instance seems to have modified every block it visited (without any work showing up in the captured SQL).

Next in the set of images was the Time Model. As you can see, the foreground time (DB time) is 85,944 seconds, of which foreground CPU usage (DB CPU) is 16,735 seconds – with about 69,000 seconds unaccounted for! There’s virtually no time spent on PL/SQL or SQL, and rman doesn’t even make an appearance (I mention rman specifically because there was at least one version of Oracle where the rman time was accidentally lost from this summary).

So what do the Top Timed Events look like?

It’s nice to see that this is consistent with the Load Profile: the DB CPU matches, and there’s a lot of physical reads (and a quick check says that 6,560,642/1,800 = 3,644 … which is pretty close to the 3,746 physical reads per second in the Load Profile). There’s one major anomaly here, though: the huge number of (and time spent on) latch: row cache objects. And even though it’s not the single largest component of time, it’s the most obvious one to pursue, so the next bit of the report to check is the Dictionary Cache Statistics, with the Tablespace IO Stats and Segments by Physical Reads to follow. I don’t have an image for the dictionary cache stats, but the answer to the question “What were all the rowcache object gets for?” was: “dc_tablespaces (214,796,434)” – which (probably) told me everything I needed to know.

I could show you the specific Instance Activity statistic that I wanted to see next, but I’ll postpone that for a moment and jump to the Tablespace IO Stats – which we were planning to do and might have done anyway if we hadn’t known the rocket science behind the massive number of gets on dc_tablespaces.

That’s a horrendous number of (single block) reads of the undo tablespace – and why would they be happening? The instance is engaged in some massive rollback activity (and the transactions being rolled back are on objects in the GIRO tablespace – which is why it is also suffering a lot of single block reads), and this is the point where we jump to the relevant Instance Activity statistic to confirm the claim:

There are two other “undo records applied” statistics, but we don’t need to think about them – the match between the count of records applied and the gets on the dc_tablespaces latch is nearly perfect. Almost everything that this instance is doing is rolling back – there must have been some enormous data changes (possibly only one, possibly parallel-enabled) that failed in the previous half hour and now the entire mess is being cleaned up.

One little detail to note – the “undo records applied” per second is 122,355, but the Load Profile reported 247,885 “Block changes” per second. The two figures are consistent with each other. Each application of an undo record is two block changes – the first when you modify the original data block, the second when you update the undo record itself to flag it as “applied”: 122,355 * 2 = 244,710, which is a good match for 247,885.

Final Thoughts

There is a second reason why you could see lots of gets on dc_tablespaces – but I think it was a bug in 9i relating to temporary tablespace groups. The phenomenon as a side effect of rolling back was something I discovered in the 8i timeline and I’ve not actually checked what an AWR report really would look like if I forced a massive rollback to take place as the only workload across the interval – so here’s a quick test I constructed and ran to finish the thread off:


set timing on

create table t1 as
with g as (select rownum id from dual connect by level <= 1000)
select rownum id, rownum n1, rpad ('x',150) padding from g,g
;

create index t1_i1 on t1(id);

begin
        for i in 1..1000000 loop
                update t1 set n1 = n1 where id = i;
        end loop;
end;
/

alter system flush buffer_cache;

pause take AWR snapshot from another session

rollback;

prompt take AWR snapshot from another session

On the small VM I used to run the test it took a little over 60 seconds to run the update, and the same again to complete the rollback. The “DB time” shown in the Time Model section of the AWR report was 62.0 seconds, while the “sql execute elapsed time” was 3.9 seconds (which was the SQL executed while capturing the AWR data).

Conclusion

This was a problem where the database seemed to have done a lot of work that couldn’t be attributed to any SQL. While I had a little rocket science up my sleeve that may have allowed me to identify the source more quickly and with more confidence than the average DBA, all I’ve done in this note is highlight a couple of oddities and big numbers that anyone could have spotted, and follow a couple of simple guesses:

a) DB time was large, but sql (and pl/sql) time was non-existent

b) Top timed events were I/O and latches – so identify the latch

c) The specific latch pointed towards tablespaces – so check the Tablespace I/O and note the UNDO tablespace

d) Look at any Instance Activity statistics about “undo” (see the query sketch after this list).

e) Job done – but a quick test case is the icing on the cake.
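For step (d), this is the kind of query I mean – a minimal sketch against V$SYSSTAT (the statistic names below are the usual ones, but check the exact wording in your version):

select name, value
from   v$sysstat
where  name like '%undo records applied%';

The interesting one here is “rollback changes - undo records applied”, as opposed to the two consistent-read variants mentioned earlier.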

 

 

Creating Indexes on Custom Table

Tom Kyte - Sun, 2018-01-28 23:46
Hi Tom, I have below query <code> select a.order_number from xxdm.XXARX_INV_SOURCE_TBL_28_jan1 a,XXARX_INV_CNV_TBL_28th_jan1 b where a.trx_number = b.trx_number and a.customer_number_source = b.customer_number_source and a.trx_number = :t...
Categories: DBA Blogs

ContractOracle.com is for sale

ContractOracle - Sun, 2018-01-28 20:30
If you would like to purchase this domain, email Robert.Geier@ContractOracle.com

This domain is perfect if you are a contractor working with Oracle software. 
Categories: DBA Blogs

Oracle Offline Persistence Toolkit - Simple GET Response Example with JET

Andrejus Baranovski - Sun, 2018-01-28 11:56
We have a new tool from Oracle which can help simplify offline logic implementation for JS apps. In this post I will describe how to use the Oracle Offline Persistence Toolkit with Oracle JET. However, Offline Persistence is not constrained to JET usage only; this toolkit is available on NPM and can be integrated with other JS solutions.

I should emphasise - the offline toolkit's primary role is to enable mobile hybrid apps to work offline. In my opinion, toolkit usage doesn't stop there. It can also enable the user to continue working when the internet connection is available but the back-end server goes down. Technically the user would remain online in this case, but in practice the application would be broken - no response from the back-end for REST calls. The offline persistence toolkit can help solve such cases - the user can continue working with the local cache while the back-end is down.

If you want to learn how the offline toolkit works and how to use its API, go to the GitHub page - check the readme, add it to your JET app and try to run/test it. Hands-on is the best way to learn something new.

I will share a few hints and a sample app.

As per the readme, first of all you should add the Offline Persistence Toolkit and PouchDB modules from NPM. Run these commands within the JET app directory:

1. npm install @oracle/offline-persistence-toolkit

2. npm install pouchdb pouchdb-find

Next you should follow four simple configuration steps and enable JET app to be able to access offline toolkit API.

Step 1 (standard, when adding any additional module)

Add paths to newly added modules in main.js require block:


Step 2 (standard, when adding any additional module)

Add paths to newly added modules in main-release-paths.js:


Step 3 (standard, when adding any additional module)

Added modules are not copied to the build directory automatically. We need to define the copying in oraclejet-build.js, so that the modules go to the build directory. If you need to copy files from a given folder and its subfolders, use ** for src:


Build content is located in the web directory. The offline toolkit and PouchDB modules should be copied to the build directory:


Step 4

Initialize window.PouchDB variable in main.js:


Configuration is complete; now we can use the Offline Persistence Toolkit API. Add the persistence store manager and other modules:


The simplest option is to rely on the default fetch listener from the offline toolkit. We need to register a store factory and map the endpoint which we want to cache with the persistence manager. When the back-end is available, the call goes to the back-end and the response is cached. Next time, if the back-end is not available, data is fetched from the cache. The toolkit intercepts the HTTP(S) request and stores the response, if the endpoint was configured to be listened to:


I'm testing the offline toolkit with a simple Employees REST endpoint call from JET. The toolkit allows this call to execute successfully, even if there is no back-end or no connection (of course, only if the same call was executed at least once before):


The UI part is simple - displaying a list when data is fetched:


Data is fetched, we are online:


The offline toolkit will only work if the REST response doesn't include a Cache-Control header. Make sure there is no Cache-Control header set in the response:


ADF BC REST sets the Cache-Control header by default; you can remove it in the Filter class (defined in the ADF BC REST app):


Now I turned my connection offline and clicked the Get List button - JS calls REST and, instead of getting a network error, it executes successfully and returns data from the cache through the offline toolkit functionality:


You should open the details for the network call and check the initiator. You will see that all calls mapped to the offline endpoint go through persistenceManager.js:


Let's double check - maybe we are being tricked somehow? Remove the offline toolkit registration API from the code and re-run the application:


As expected, a network error is received and the fetch fails. This proves that the offline toolkit works :)


Sample JET application with offline toolkit configuration is available on GitHub (run ojet restore and ojet serve).

Upgrade OEM 12.1.0.5 OMS to OEM13.2

Amis Blog - Sun, 2018-01-28 09:00

Despite the potential of Oracle Management Cloud, I'm a fan of Oracle Enterprise Manager, with its small imperfections. So when I noticed a 12c Oracle Enterprise Manager on Linux, with targets on Windows, I took up the challenge of modernizing the customer's environment a bit. And the starting point, 12.1.0.5 for the OMS and a 12.1.0.2 database as repository, wasn't that bad at all. This post is purely about upgrading the OMS. Upgrading agents is a different story with Windows as target.

Preparation

I think upgrading an OMS is all about preparation, so I read the upgrade guide carefully, and still it’s almost inevitable to miss something (also in this case).

Roughly the plan for my situation (single OMS, 1 instance repository, Oracle Linux):

  1. Patching OPatch to 12.1.0.1.7 or above.
  2. Setting the parameter optimizer_adaptive_features in the repository to FALSE in advance.
  3. Patch the database with patch 160419 (= cumulative patch 22291127), including running the datapatch utility.
  4. Enable the ‘DELETE TARGET’ audit setting.
  5. Download the required plugins to a temporary directory (I didn’t do this at my first attempt, by the way, and had to start the upgrade again…).
  6. Copy the EMkey to the repository.
  7. Backup config of OMS.
  8. Upgrade.
  9. Run root script.
  10. Upgrade central agent.

Execution

1. Patching OPatch.

Downloaded patch 12.2.0.1.11.

Plain and simple: rename the OPatch directory, unzip p6880880_121010_Linux-x86-64, and you're ready to go.

Check with ‘opatch version’ if it’s done.

2. Setting the parameter ‘optimizer_adaptive_features’.

alter system set optimizer_adaptive_features=false scope=both;
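A quick sanity check that the change has taken effect (a sketch – note that optimizer_adaptive_features is the 12.1 parameter name; in 12.2 it was split into optimizer_adaptive_plans and optimizer_adaptive_statistics):

SQL> show parameter optimizer_adaptive_features
SQL> select name, value from v$parameter where name = 'optimizer_adaptive_features';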

3. Patch the database with patch 160419

Although this patch has been superseded, I stuck with the plan (very conservative).


Stop the database – which unfortunately also means stopping the OMS, of course.

Check at conflicts:

opatch prereq CheckConflictAgainstOHWithDetail -ph ./

And patch 22291127 with ‘opatch apply’

Post-check

opatch lsinventory -detail

Running the datapatch utility.

Start database and oms.

cd $ORACLE_HOME/OPatch

./datapatch -verbose

Unfortunately an error appears:

Datapatch fails with “catconInit: database is not open on the default instance” (Doc ID 2003488.1)

Solution: temporarily rename glogin.sql

cd $ORACLE_HOME/sqlplus/admin

mv glogin.sql temp_glogin.sql

Start datapatch utility again

cd $ORACLE_HOME/OPatch

./datapatch -verbose

Validating logfiles…

Patch 22291127 apply: SUCCESS

logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22291127/19694308/22291127_apply_OCCMREP_2018Jan16_11_26_58.log (no errors)

SQL Patching tool complete on Tue Jan 16 11:27:25 2018

Restore the glogin.sql

cd $ORACLE_HOME/sqlplus/admin

mv temp_glogin.sql glogin.sql

Check the following log files in $ORACLE_BASE/cfgtoollogs/sqlpatch/22291127/<unique patch ID> for errors:

22291127_apply_<database SID>_<CDB name>_<timestamp>.log.

Didn’t need an RMAN upgrade, skipped this step.

4. Enable the ‘DELETE TARGET’ – audit setting

cd /u01/app/oracle/oms12c/oms/bin

./emcli update_audit_settings -operations_to_enable="DELETE_TARGET" -audit_switch="ENABLE" -file_prefix="aud"

5. Download the required plugins to a temporary directory

In my case, I forgot to download a compatible ODA plugin the first time.


Downloaded the 13.1.1.1.0_oracle.dba.odba_2000_0.opar from the plugin download page.

6. Copy the EMkey to the repository

/u01/app/oracle/oms12c/oms/bin/emctl config emkey -copy_to_repos [-sysman_pwd <sysman_pwd>]

Check this with

/u01/app/oracle/oms12c/oms/bin/emctl status emkey

The EMKey is configured properly, but is not secure.

Secure the EMKey by running “emctl config emkey -remove_from_repos”.

7. Backup OMS config

/u01/app/oracle/oms12c/oms/bin/emctl exportconfig oms -sysman_pwd Oracle123# -dir /home/oracle/job

8. Upgrade

Shutdown OMS and central agent

/u01/app/oracle/oms12c/oms/bin/emctl stop oms -all

/u01/app/oracle/agent12c/agent_inst/bin/emctl stop agent

Start upgrade with the parameter for the downloaded plugin

./em13200_linux64.bin "PLUGIN_LOCATION=/u01/orasoftware/OMS_13.2"

Mostly it’s next,next, hereby only the mentionable screens.

A warning about ports, ignored this.

image

Then the summary screen is shown.

And.. Upgrade

During the upgrade I had an error, related to the ODA plugin:

EM 13c: Upgrading Enterprise Manager Cloud Control OMS from 12.1.0.5 To 13.2 Fails in Repository Configuration with ORA-00942 Error for Table EM_ODBA_ODASYS_TAGS_E (Doc ID 2340514.1)

Solution:

as sysman: alter table EM_ODBA_ODASYS_TAGS rename to EM_ODBA_ODASYS_TAGS_E

And retry the repository upgrade.

Finished.

9. Running the root script

~]# /u01/app/oracle/oms132/allroot.sh

Starting to execute allroot.sh ………

Starting to execute /u01/app/oracle/oms132/root.sh ……

/etc exist

/u01/app/oracle/oms132

Finished execution of /u01/app/oracle/oms132/root.sh ……

OMS done.

10. Upgrade central agent

From the Setup menu, select Manage Cloud Control, then select Upgrade Agents.

Add the required agent, and upgrade.


Don’t forget to cleanup the agent as a post-upgrade task.


And done….

Regardz.

Resources

Upgrade guide: https://docs.oracle.com/cd/E73210_01/EMUPG/toc.htm

Plugin download: http://www.oracle.com/technetwork/oem/enterprise-manager/downloads/oem-plugins-2882950.html

Upgrade fails, cause plugin ODA: Upgrading Enterprise Manager Cloud Control OMS from 12.1.0.5 To 13.2 Fails in Repository Configuration with ORA-00942 Error for Table EM_ODBA_ODASYS_TAGS_E (Doc ID 2340514.1)

Datapatch failure: Datapatch fails with “catconInit: database is not open on the default instance” (Doc ID 2003488.1)

The post Upgrade OEM 12.1.0.5 OMS to OEM13.2 appeared first on AMIS Oracle and Java Blog.

Temporary Table Questions

Tom Kyte - Sun, 2018-01-28 04:26
Tom, I have a few questions about temporary tables: 1. Which type of temporary table is better to use in a multi-user application, a global temporary table with on commit preserve rows or a global temporary table without on commit preserve r...
Categories: DBA Blogs

Testing Oracle SQL online

Yann Neuhaus - Sat, 2018-01-27 16:38

Want to test some DDL, a query, or check an execution plan? You need only a browser. And you can copy-paste, or simply link, your test case in a forum, an e-mail, or a tweet. Here is a small list (expected to grow from your comments) of free online services which can run an Oracle Database: SQL Fiddle, Rextester, db<>fiddle and Oracle Live SQL.

SQL Fiddle

SQL Fiddle lets you build a schema and run DDL on the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.6
  • Postgres 9.6 and 9.3
  • SQLite (WebSQL and SQL.js)

For an Oracle user, Oracle 11gR2 is not very useful as it is a version from 2010. But there’s a simple reason for that: it’s the latest free version – the Oracle XE edition. And a free online service can run only free software. Now that Oracle plans to release an XE version every year, this should get better soon.

Example: http://sqlfiddle.com/#!4/42960/1/0


Rextester

Rextester is a service to compile code online, in a lot of languages and also the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.7
  • PostgreSQL 9.6

Example: http://rextester.com/QCYJF41984

Rextester also has an API where you can run a query and get a JSON answer:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api
{"Warnings":null,"Errors":null,"Result":"\u003ctable class=\"sqloutput\"\u003e\u003ctbody\u003e\u003ctr\u003e\u003cth\u003e\u0026nbsp;\u0026nbsp;\u003c/th\u003e\r\n\u003cth\u003eDUMMY\u003c/th\u003e\r\n\u003c/tr\u003e\r\n\u003ctr\u003e\u003ctd\u003e1\u003c/td\u003e\r\n\u003ctd\u003eX\u003c/td\u003e\r\n\u003c/tr\u003e\r\n\u003c/tbody\u003e\u003c/table\u003e\r\n","Stats":"absolute service time: 1,37 sec","Files":null}

The answer has the result as an HTML table:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result
<table class="sqloutput"><tbody><tr><th>&nbsp;&nbsp;</th>
<th>DUMMY</th>
</tr>
<tr><td>1</td>
<td>X</td>
</tr>
</tbody></table>

Here is my SELECT * FROM DUAL:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result | lynx -dump -stdin
DUMMY
1 X


db<>fiddle

db<>fiddle has a very nice interface, easy to link and easy to paste to StackOverflow (click on ‘markdown’)

  • Oracle 11gR2
  • SQL Server 2014, 2016 and 2017 – even the 2017 Linux version
  • MariaDB 10.2
  • SQLite 3.8
  • PostgreSQL 8.4, 9.4, 9.6 and 10

Example: http://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=948a067dd17780ca65b01243751c2cb0


Oracle Live SQL

Finally, you can also run on the latest release of Oracle, with a service provided by Oracle itself: Live SQL.

  • Oracle 12cR2 (an early build from October 2016)

Example: https://livesql.oracle.com/apex/livesql/s/f6ydueahcslf66dlynagw9s3w


 

Cet article Testing Oracle SQL online est apparu en premier sur Blog dbi services.

Bash: How to trim/truncate variables / get substrings

Dietrich Schroff - Sat, 2018-01-27 12:53
In a comment to one of the “most useless commands” postings it was shown how to use the rev command to get the last component of the path out of a log file:

cat log | grep pattern | rev | cut -d '/' -f 1 | rev

If you only have one variable filled with a path, you can get the last component very easily:
schroff@zerberus:~$ myvar=/ab/cd/ef/gh
schroff@zerberus:~$ echo ${myvar##*\/}
gh

## removes the longest matching string for the pattern "*\/" from the beginning (the slash has to be escaped with a backslash).

schroff@zerberus:~$ echo ${myvar#*\/}
ab/cd/ef/gh

# removes the shortest matching pattern - here only the starting "/".

% and %% remove everything from the end up to the pattern:

schroff@zerberus:~$ echo ${myvar%\/*}
/ab/cd/ef
schroff@zerberus:~$ echo ${myvar%%\/*}

(%% removes the longest match from the end - here the whole string, so the output is empty.)

Here is the content of the bash manpage:
       ${parameter#word}
       ${parameter##word}
Remove matching prefix pattern.  The word is expanded to produce a pattern just
as in pathname expansion.  If the pattern matches the beginning of the value of
parameter,  then the result of the expansion is the expanded value of parameter
with the shortest matching pattern (the ``#'' case)  or  the  longest  matching
pattern (the ``##'' case) deleted.  If parameter is @ or *, the pattern removal
operation is applied to each positional parameter in turn, and the expansion is
the resultant list.  If parameter is an array variable subscripted with @ or *,
the pattern removal operation is applied to each member of the array  in  turn,
and the expansion is the resultant list.

       ${parameter%word}
       ${parameter%%word}
Remove matching suffix pattern.  The word is expanded to produce a pattern just
as in pathname expansion.  If the pattern matches a  trailing  portion  of  the
expanded  value  of parameter, then the result of the expansion is the expanded
value of parameter with the shortest matching pattern (the ``%'' case)  or  the
longest  matching  pattern  (the ``%%'' case) deleted.  If parameter is @ or *,
the pattern removal operation is applied to each positional parameter in  turn,
and  the  expansion  is  the resultant list.  If parameter is an array variable
subscripted with @ or *, the pattern removal operation is applied to each  mem‐
ber of the array in turn, and the expansion is the resultant list.

How to capture user and encrypted password to be used in alter user statement after duplication of database

Tom Kyte - Sat, 2018-01-27 10:06
We perform a duplicate database from prod to test weekly and have a script that captures the current user permissions on dev via the dbms_metadata.get_ddl package that generates DDL to another file to be used after the duplication to re-create any us...
Categories: DBA Blogs

SQLcl spool numeric field as Text

Tom Kyte - Sat, 2018-01-27 10:06
Hello Tom, I love SQLcl for spooling data to a CSV file for use in Excel. But I have trouble with commas inside a field. This is my SQL statement: <code> SET LINESIZE 3000 SET TRIMSPOOL ON SET PAGESIZE 0 SET SERVEROUTPUT ON SET VERIFY...
Categories: DBA Blogs
