Feed aggregator

You Scratch Your Head And Ponder Why It Is You Go With Maximum Core Count Xeons. I Can’t Explain That, But This Might Help.

Kevin Closson - Tue, 2016-06-14 00:36

Folks that have read my blog for very long know that I routinely point out that Intel Xeon processors with fewer cores (albeit same TDP) get more throughput per core. Recently I had the opportunity to do some testing of a 2-socket host with 6-core Haswell EP Xeons (E5-2643v3) connected to networked all-flash storage. This post is about host capability so I won’t be elaborating on the storage. I’ll say that it was block storage, all-flash and networked.

Even though I test myriads of systems with modern Xeons it isn’t often I get to test the top-bin parts that aren’t core-packed.  The Haswell EP line offers up to 18-core parts in a 145w CPU.  This 6-core part is 135w and all cores clock up to 3.7GHz–not that clock speed is absolutely critical for Oracle Database performance mind you.

Taking It For a Spin

When testing for Oracle OLTP performance the first thing to do is measure the platform’s ability to deliver random single-block reads (db file sequential read). To do so I loaded 1TB scale SLOB 2.3 in the single-schema model. I did a series of tests to find a sweet-spot for IOPS which happened to be at 160 sessions. The following is a snippet of the AWR report from a 5-minute SLOB run with UPDATE_PCT=0. Since this host has a total of 12 cores I should think 8KB read IOPS of 625,000 per second will impress you. And, yes, these are all db file sequential reads.

[Figure: AWR load profile snippet showing ~625,000 8KB read IOPS]
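
For context, a read-only SLOB run like this is driven by slob.conf. The following is a minimal sketch (the values are illustrative assumptions, not the author's exact configuration) of the two settings implied by the description above:

# slob.conf excerpt (sketch): a zero-update, five-minute run
UPDATE_PCT=0     # 100% reads, i.e. a pure db file sequential read workload
RUN_TIME=300     # 300 seconds, matching the 5-minute AWR window above
# The run itself is launched with SLOB's runit.sh; how the 160 sessions are
# specified depends on the SLOB version and schema model, so consult the
# SLOB README for the exact invocation.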

At 52,093 IOPS per CPU core I have to say this is the fastest CPU I’ve ever tested. It takes a phenomenal CPU to handle this rate of db file sequential read payload. So I began to wonder how this would compare to other generations of Xeons. I immediately thought of the Exadata Database Machine data sheets.

Before I share some comparisons I’d like to point out that there was a day when the Exadata data sheets made it clear that IOPS through the Oracle Database buffer cache costs CPU cycles–and, in fact, CPU is often the limiting factor. The following is a snippet from the Exadata Database Machine X2 data sheet that specifically points out that IOPS are generally limited by CPU. I know this. It is, in fact, why I invented SLOB way back in the early 2000s. I’ve never seen an I/O testing kit that can achieve more IOPS per DB CPU than is possible with SLOB.

[Figure: Exadata X2 data sheet IOPS footnote noting that IOPS are generally limited by CPU]

Oracle stopped using this footnote in the IOPS citations for Exadata Database Machine starting with the X3 generation. I have no idea why they stopped using this correct footnote. Perhaps they thought it was a bit like stating the obvious. I don’t know. Nonetheless, it is true that host CPU is a key limiting factor in a platform’s ability to cycle IOPS through the SGA. As an aside, please refer to this post about calibrate_io for more information about the processor ramifications of SGA versus PGA IOPS.

So, in spite of the fact that Oracle has stopped stating the limiting nature of host CPU on IOPS, I will simply assert the fact in this blog post. Quote me on this:

Everything is a CPU problem

And cycling IOPS through the Oracle SGA is a poster child for my quotable quote.

I think the best way to make my point is to simply take the data from the Exadata Database Machine data sheets and put it in a table that has a row for my E5-2643v3 results as well. Pictures speak thousands of words. And here you go:

[Table: Exadata data sheet IOPS figures by generation compared with the E5-2643v3 result]

AWR Report

If you’d like to read the full AWR report from the E5-2643v3 SLOB test that achieved 625,000 IOPS please click on the following link: AWR (click here).

References

X2 data sheet
X3 data sheet
X4 data sheet
X5 data sheet
X6 data sheet

 


Filed under: oracle

Recursive WITH, part III: IS_LEAF

OraFAQ Articles - Mon, 2016-06-13 14:25

The CONNECT BY syntax provides a useful pseudocolumn, CONNECT_BY_ISLEAF, which identifies leaf nodes in the data: it’s 1 when a row has no further children, 0 otherwise. In this post, I’ll look at emulating this pseudocolumn using recursive WITH.
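
As a taster of the technique, here is one common way to emulate CONNECT_BY_ISLEAF with recursive WITH, using the familiar SCOTT.EMP table and a depth-first SEARCH clause (this is a sketch of the general approach, not necessarily the exact method developed in the full article):

with emp_tree (empno, ename, mgr, lvl) as (
        select empno, ename, mgr, 1
        from   scott.emp
        where  mgr is null
        union all
        select e.empno, e.ename, e.mgr, t.lvl + 1
        from   scott.emp e
               join emp_tree t on e.mgr = t.empno
)
search depth first by empno set seq
select  empno, ename, lvl,
        -- a row is a leaf if the next row in depth-first order is not deeper
        case when lead(lvl) over (order by seq) > lvl then 0 else 1 end as is_leaf
from    emp_tree
order by seq;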

read more

The Cost of Doing Nothing

Kubilay Çilkara - Mon, 2016-06-13 12:18
For a business to become optimally successful, it absolutely must incorporate a quality life-cycle management system.  This raises the question:  Why do so many vendors miss the mark when it comes to providing the necessary updates and enhancements?  Developers and software companies should embrace their respective ALM systems as their staunch allies; and progressive IT organizations stay well ahead of the game by using modern technology and best practices to ensure that high-quality products are on time and on budget while remaining fully compliant.  The goal of any ALM supplier should be to properly support its clients by staying abreast of platform enhancements and being familiar with new languages, new devices, mobile demands, ever-changing compliance regulations and other real-time demands that must be continually addressed.

The bottom line remains:  in order for development leaders to not only survive, but thrive, they must make the transition to the most up-to-date ALM solution possible.  Surprisingly, however, development leaders can be hesitant to utilize a modern ALM solution; but the cost of doing nothing can be more expensive than one might imagine.

There are a handful of misguided reasons why an updated ALM solution might not be employed.  A few of those fallacies can include the following:

A New ALM Solution Would Be Too Cost-Prohibitive

Being the lead dog and staying ahead of the pack in this competitive world is absolutely paramount, which is why a vendor must provide the crucial components such as product enhancements, platform updates, etc.  Research reveals some unsettling data:

  • 84% of IT projects become overdue or over budget
  • 31% of IT projects never reach completion due to being canceled
  • Completed IT projects deliver, on average, only 42% of their expected benefits

Accuracy and efficiency become the name of the game as it applies to profit; but developers' profit margins will be sorely compromised without up-to-date functionality and access to current tools via an up-to-date ALM system.  Additionally, if no automated system is integrated, IT will be forced to spend a good deal of valuable time addressing compliance-related issues; and that can be costly. 

Your vendor's R&D department should certainly be acutely aware of new trends in the industry as well as responsive to customers' requests.  A coveted ALM solution will incorporate 1) on-board templates for compliance reporting, 2) compatibility with, and remote access from, any mobile device, and 3) tools such as dashboards, real-time reports & analytics and automated work-flows – all of which enable every team-member to stay up-to-date.

The cost of doing nothing can take a titanic toll when one considers that missed app-release time-lines, opportunities lost in the shuffle, and valuable time spent addressing compliance concerns and/or audits all cost a business, big-time!  The question, then, becomes obvious:  You believe you can't afford the integration of a modern ALM solution – but can you afford NOT to?

Our Current ALM Solution Seems to be Working Fine

In order to effectively, efficiently and optimally monitor, manage and coordinate all the people, processes, changes and dynamics that are intricately involved in the application life-cycle, utilizing the most sophisticated ALM solution is key!  Development personnel feel the demands of deploying functionality and fixes very quickly.  The IT setting is extremely complex; and in this environment, database servers, web servers, diverse clientele and every type of mobile device equate to sophisticated development and release processes.  All this must be intricately orchestrated without a hitch; and a modern ALM solution is what it takes to fully ensure flawless and seamless operations in every department.

With the most modern ALM solution, users can enjoy the ease with which systems and work-flows come together, in addition to the minimization of production errors and the maximization of collaboration efforts.  Then, imagine all this coupled with data access from any mobile device, compliance reports with point-and-click ease and automation processes that are as easy as child's play.

Older ALM solutions are just that – 'old' – and with that comes the inability of an archaic tool to offer the newest technologies, which equates to lost time fixing bad code and dealing with coding errors, as only a single example.  And then, of course, there is the lost revenue.  In the end, the growth of a company is stifled.  Again, a modern ALM solution keeps a business in position as the 'alpha' and leader of the competitive pack since the people and processes involved are all humming like a fine-tuned engine – no barricades, no inefficiency and virtually no errors.

Transitioning to a New ALM Would Be Too Time-Consuming

How one chooses a vendor can make the difference between reaping the benefits of a dedicated and seasoned professional with an unparalleled product that he or she is excited to share, versus a vendor whose interest in your goals and progress is marginal, at best.  Assuming the right vendor has been selected, the time required to get the system fully running will be minuscule.  Personnel can very quickly enjoy immediate visibility, coordination and management across distributed systems, teams and tools.  In the end, revenue previously lost to outdated ALM systems becomes a distant memory since teams will no longer contend with drawn-out, manual processes but will now have the updated abilities to very quickly communicate, collaborate, update etc. regarding any and all application projects.

Not one single team-member needs to concern him or herself with transitioning into an updated system.  A committed vendor will make sure the necessary and expected support is entirely available for everyone involved.  Again, in the end, any time invested in becoming familiar with a new ALM solution will begin to immediately pay for itself due to optimized usability involving real-time visibility, flexibility, accuracy and automation.

Our Current ALM Serves Only Development

When a business chooses stagnation over progress, it can become the 'kiss of death' for the organization.  Because technology will never slow down or even reach an apex, a business absolutely must stay on track with innovative ideas, processes and insights.  An integrated ALM system ensures that users can take full advantage of managing, in real-time, every aspect of development and delivery.  A top-tier ALM solution will provide instantaneous updates on every component ranging from code to work-flow to dashboards and everything in-between and beyond.  Smarter, more-insightful decisions become common-place among everyone involved – whether development personnel, auditors, programmers, etc.  As DevOps departments evolve and advance in the enterprise, so too must the ALM system, functioning as the centralized collaborative arena where inter-department communications are available whenever and wherever required.

When all is said and done, switching to a modern ALM solution will, realistically, save money over the long haul since time is being dramatically saved – and time is money!  Those few words serve as a cliché as well as a fact.  Whether one is speaking of departments collaborating on changes at any level, or enhanced visibility that maximizes work-flow, or whether one is talking about users gaining advanced capabilities resulting in succinct, precise and quick decision-making, it all adds up, once again, to saving copious amounts of time, which translates into saving impressive amounts of revenue.

A reliable vendor will provide the kind of support one would expect from a supplier that operates as a top-tier contender in the industry.  Vendor support should include:

  • Access to the most updated interfaces and devices
  • Assistance with any existing OS
  • Intervention for all platforms on which code is being developed
  • Mobile and web development
  • Out-of-the-box plug-ins to converge with other tools
  • Compliance-report templates
  • Delivery of single-screen visibility across all IT involvement
  • Adjustable point-and-click distribution and deployment, with mobile functionality throughout

It is an ever-changing business climate where technology is king.  And...

                                            Adaptation equals growth and growth equals SUCCESS!    


About the author: Daniel Magid is Rocket’s IBM i solution leader and Director of the Rocket Application Lifecycle Management (ALM) and DevOps lab. Having started his career at IBM in 1981 in the midrange computer division, Daniel brings to Rocket Software more than 30 years of experience in the IBM midrange marketplace. Prior to coming to Rocket as part of the acquisition of Aldon in 2011, Daniel was Aldon’s CEO and Chief Product Strategist. Daniel led the growth of Aldon from a small 4 person consulting company to the largest provider of ALM and DevOps solutions in the IBM i market. Daniel is a recognized expert in application development and DevOps in the IBM i market and a well-known presence at IBM i conferences.




Categories: DBA Blogs

Gluent New World #04: Next Generation Oracle Database Architectures using Super-Fast Storage with James Morle

Tanel Poder - Mon, 2016-06-13 11:30

It’s time to announce the 4th episode of the Gluent New World webinar series, presented by James Morle! James is a database/storage visionary and has been actively contributing to the Oracle database scene for over 20 years – including his unique book Scaling Oracle 8i, which gave a full-stack overview of how the different layers of your database platform worked and performed together.

The topic for this webinar is:

When the Rules Change: Next Generation Oracle Database Architectures using Super-Fast Storage

Speaker:

  • James Morle has been working in the high performance database market for 25 years, most of which has been spent working with the Oracle database. After 15 years running Scale Abilities in the UK, he now leads the Oracle Solutions at DSSD/EMC in Menlo Park.

Time:

  • Tue, Jun 21, 2016 12:00 PM – 1:15 PM CDT

Abstract:

  • When enabled with revolutionary storage performance capabilities, it becomes possible to think differently about physical database architecture. Massive consolidation, simplified data architectures, more data agility and reduced management overhead. This presentation, based on the DSSD D5 platform, includes performance and cost comparison with other platforms and shows how extreme performance is not only for extreme workloads.

Register here:

This should be fun! As usual, I’ll be asking some questions myself and the audience can ask questions too. See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5

Rittman Mead Consulting - Mon, 2016-06-13 09:19

New in Big Data Discovery 1.2 is the addition of BDD Shell, an integration point with Python. This exposes the datasets and BDD functionality in a Python and PySpark environment, opening up huge possibilities for advanced data science work on BDD datasets, particularly when used in conjunction with Jupyter Notebooks. With the ability to push back to Hive and thus BDD data modified in this environment, this is important functionality that will make BDD even more useful for navigating and exploring big data.

The Big Data Lite virtual machine is produced by Oracle for demo and development purposes, and hosts all the components that you’d find on the Big Data Appliance, all configured and integrated for use. Version 4.5 was released recently, which included BDD 1.2. In this article we’ll see how to configure BDD Shell on Big Data Lite 4.5 (along with Jupyter Notebooks), and in a subsequent post dive into how to actually use them.

Setting up BDD Shell on Big Data Lite

You can find the BDD Shell installation document here.

Log in to BigDataLite 4.5 (oracle/welcome1) and open a Terminal window. The first step is to download Anaconda, which is a distribution of Python that also includes “[…] over 100 of the most popular Python, R and Scala packages for data science” as well as Jupyter notebook, which we’ll see in a moment.

cd ~/Downloads/
wget http://repo.continuum.io/archive/Anaconda2-4.0.0-Linux-x86_64.sh

Then install it: (n.b. bash is part of the command to enter)

bash Anaconda2-4.0.0-Linux-x86_64.sh

Accept the licence when prompted, and then select an install location – I used /u01/anaconda2, which is where the rest of the BigDataLite installs are.

Anaconda2 will now be installed into this location:
/home/oracle/anaconda2

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/oracle/anaconda2] >>> /u01/anaconda2

After a few minutes of installation, you’ll be prompted as to whether you want to prepend Anaconda’s location to the PATH environment variable. I opted not to (which is the default) since Python is used elsewhere on the system, and by prepending it Anaconda’s Python would take priority and possibly break things.

Do you wish the installer to prepend the Anaconda2 install location
to PATH in your /home/oracle/.bashrc ? [yes|no]
[no] >>> no

Now edit the BDD Shell configuration file (/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.conf) in your favourite text editor to add/amend the following lines:

SPARK_EXECUTOR_PYTHON=/u01/anaconda2/bin/python
LOCAL_PYTHON_HOME=/u01/anaconda2

Amend the path if you didn’t install Anaconda into /u01

In the same configuration file, add/amend:

SPARK_HOME=/usr/lib/spark/
SPARK_EXTRA_CLASSPATH=/usr/lib/oozie/oozie-sharelib-yarn/lib/spark/spark-avro_2.10-1.1.0-cdh5.7.0.jar

Now run the BDD Shell setup:

/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/setup.sh

This should succeed:

[bigdatalite.localdomain] Validating pre-requisites...
[bigdatalite.localdomain] Validation Success
[bigdatalite.localdomain] Setting up BDD Shell...
[bigdatalite.localdomain] Setup Success
[oracle@bigdatalite Downloads]$

Assuming it does, you can now launch the shell, bdd-shell.sh:

[oracle@bigdatalite Downloads]$ /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.sh
WARNING: User-defined SPARK_HOME (/usr/lib/spark) overrides detected (/usr/lib/spark/).
WARNING: Running spark-class from user-defined location.
spark.driver.cores is set but does not apply in client mode.
Welcome to
     ___   ___   ___       __   _     ____  _     _
    | |_) | | \ | | \     ( (` | |_| | |_  | |   | |
    |_|_) |_|_/ |_|_/     _)_) |_| | |_|__ |_|__ |_|__

SparkContext available as sc, HiveContext available as sqlContext.
BDD Context available as bc.

>>>

From the BDD Shell you can interact with BDD, for example to list out the datasets currently defined in the Catalog:

>>> bc.datasets().count
17
>>> for ds in bc.datasets():
...     print ds
...

media_demo_customer     edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        Hive    default.media_demo_customer

movie_genre     default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        Hive    default.movie_genre

media_demo_customer     default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        Hive    default.media_demo_customer

Whilst BDD Shell is command-line based, there’s also the option to run Jupyter Notebooks (previously known as IPython Notebooks), which provide a web-based interactive “notebook”. This lets you build up scripts exploring and manipulating the data within BDD, using both Python and Spark. The big advantage of this over the command-line interface is that a notebook enables you to modify and re-run commands, and then, once correct, retain them as a fully functioning script for future use.

To launch it, run:

cd /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell
/u01/anaconda2/bin/jupyter-notebook --port 18888

Important points to note:

  • It’s important that you run this from the bdd-shell folder, otherwise the BDD shell won’t initialise properly
  • By default Jupyter uses port 8888, which is already in use on BigDataLite by Hue, so use a different one by specifying --port
  • Jupyter by default only listens locally, so you need to either be using the BigDataLite desktop to run Firefox, or use port-forwarding if you want to access Jupyter from your local web browser – see the SSH example below.
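
If you do want to reach Jupyter from your own machine rather than from the Big Data Lite desktop, a standard SSH local port-forward is enough – a sketch, assuming the VM is reachable as bigdatalite and you log in as the oracle user:

ssh -L 18888:localhost:18888 oracle@bigdatalite
# then browse to http://localhost:18888 on your local machine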

Go to http://localhost:18888 in your web browser, and you should see the default Jupyter screen with a list of files:

In the next article, we’ll see how to use Jupyter Notebooks with Big Data Discovery, and get an idea of just how powerful the combination can be.

The post Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5 appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Bitmap Counts

Jonathan Lewis - Mon, 2016-06-13 06:40

A question came up on the Oracle-L list server a few days ago about a query whose plan showed several bitmap operations. The problem was that the A-Rows column reported by a call to dbms_xplan.display_cursor() was showing numbers that seemed to be far too small. In fact the query was producing a parallel execution plan, so the “actuals” for the parallel server operations were reporting zeros because the OP had used the “allstats last” formatting option rather than just “allstats” – but the numbers were still far too small even after this error had been corrected.

This was a detail I’d not noticed before but there was an obvious (footnote 1) guess to explain why this apparent anomaly had appeared, viz: A-Rows counts rows in the rowsource for a table, index entries for a B-tree rowsource, and strings of bits when producing bitmaps. But even the most obvious guess should be checked so here’s some code that will (at least) corroborate the hypothesis:


rem     Script:         bitmap_counts.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                        id,
        cast(mod(rownum - 1,2) as number(8,0))             n2,
        cast(mod(rownum - 1,100) as number(8,0))           n100,
        cast(lpad(rownum,10,'0') as varchar2(10))          v1,
        cast(lpad('x',100,'x') as varchar2(100))           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
; 
create bitmap index t1_b2 on t1(n2) nologging;
create bitmap index t1_b100 on t1(n100) nologging;

begin 
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    =>'T1',
                method_opt => 'for all columns size 1'
        );
end;
/

I’ve created a table that includes two columns that I’ve indexed with bitmap indexes. Because of the size of the table and the pattern of the data, each distinct value for the two columns will require multiple index entries in the bitmap index.

The multiple bitmap chunks are important but before I comment further, here’s what the program does after creating the data:


select  index_name, distinct_keys, num_rows, clustering_factor
from    user_indexes
where   table_name = 'T1'
order by
        index_name
;

alter session set statistics_level = all;
set serveroutput off

select
        count(*)
from    t1
where   n2 = 0
and     n100 between 20 and 29
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

The first query reports some index stats – to confirm my comment about multiple index entries per key – the second query is the one that’s going to give me a useful plan. Let’s just check the results of the two queries first – the index stats are the only ones needing any comment:


INDEX_NAME           DISTINCT_KEYS   NUM_ROWS CLUSTERING_FACTOR
-------------------- ------------- ---------- -----------------
T1_B100                        100        800               800
T1_B2                            2         94                94


  COUNT(*)
----------
     50000

Index t1_b100 reports 800 index entries – the bitmap chunk for each value had to be split into 8 rowid ranges; we’re interested in 10 key values from this index.

Index t1_b2 shows 94 index entries – the bitmap for each value had to be split into 47 rowid ranges: we’re interested in one key value from this index.

And this is what the plan shows:


-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |      1 |        |      1 |00:00:00.01 |      69 |       |       |          |
|   1 |  SORT AGGREGATE              |         |      1 |      1 |      1 |00:00:00.01 |      69 |       |       |          |
|   2 |   BITMAP CONVERSION COUNT    |         |      1 |  55455 |      2 |00:00:00.01 |      69 |       |       |          |
|   3 |    BITMAP AND                |         |      1 |        |      2 |00:00:00.01 |      69 |       |       |          |
|*  4 |     BITMAP INDEX SINGLE VALUE| T1_B2   |      1 |        |     47 |00:00:00.01 |      25 |       |       |          |
|   5 |     BITMAP MERGE             |         |      1 |        |      2 |00:00:00.01 |      44 |  1024K|   512K|  291K (0)|
|*  6 |      BITMAP INDEX RANGE SCAN | T1_B100 |      1 |        |     80 |00:00:00.01 |      44 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("N2"=0)
   6 - access("N100">=20 AND "N100"<=29)


I can’t explain why the in-memory manipulation of the bitstrings apparently produces two bitstrings at operations 3 and 5, but given the index stats we can understand that the 47 “rows” reported for operation 4 allow for one of the two key values in the index and the 80 “rows” reported for operation 6 allow for 10 of the key values in the index. Q.E.D.

Bonus commentary

If you want to see table row counts (or their equivalent) appearing in a bitmap plan you’ll need to run a query that does a “bitmap conversion to rowids”, e.g.:


select
        count(n100)
from    t1
where   n2 = 0
and     n100 between 20 and 29
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |      1 |        |      1 |00:00:05.04 |      67 |       |       |          |
|   1 |  SORT AGGREGATE                |                  |      1 |      1 |      1 |00:00:05.04 |      67 |       |       |          |
|*  2 |   VIEW                         | index$_join$_001 |      1 |  55455 |  50000 |00:00:04.94 |      67 |       |       |          |
|*  3 |    HASH JOIN                   |                  |      1 |        |  50000 |00:00:04.75 |      67 |    25M|  4150K|   27M (0)|
|   4 |     BITMAP CONVERSION TO ROWIDS|                  |      1 |  55455 |    500K|00:00:00.94 |      25 |       |       |          |
|*  5 |      BITMAP INDEX SINGLE VALUE | T1_B2            |      1 |        |     47 |00:00:00.01 |      25 |       |       |          |
|   6 |     BITMAP CONVERSION TO ROWIDS|                  |      1 |  55455 |    100K|00:00:00.19 |      42 |       |       |          |
|*  7 |      BITMAP INDEX RANGE SCAN   | T1_B100          |      1 |        |     80 |00:00:00.01 |      42 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("N100"<=29 AND "N2"=0 AND "N100">=20))
   3 - access(ROWID=ROWID)
   5 - access("N2"=0)
   7 - access("N100">=20 AND "N100"<=29)


Note how 80 “rows” at operation 7 turn into 100,000 rows at operation 6, and 47 “rows” at operation 5 turn into 500,000 rows at operation 4.

I’ve tested 11.2.0.4 and 12.1.0.2 with this code and they both behave the same way. Interestingly the final query (count with index hash join) took about 0.12 seconds to run with rowsource execution statistics disabled, but roughly 5 seconds with statistics enabled – and the extra time was all in the hash join.

Footnote 1:

“Obvious”: provided you’ve been working with the relevant bits of the Oracle software for several years (footnote 2) or have been reading the right books; even then something that’s “obvious” isn’t necessarily correct. ‘Oh, obvious,’ said Granny [Weatherwax]. ‘I’ll grant you it’s obvious. Trouble is, just because things are obvious doesn’t mean they’re true.’ – Wyrd Sisters: Terry Pratchett.

Footnote 2:

‘Oh, it’s largely intuitive, Archchancellor,’ said Ponder.  ‘Obviously you have to spend a lot of time learning it first, though.’ – Hogfather: Terry Pratchett

 


PeopleSoft Security Patches

The process of applying security patches starts with identifying which patches to apply. For PeopleSoft, security patches need to be considered for both the application and the major technical components. Applying security patches, referred to by Oracle as Critical Patch Updates (CPUs), for one component DOES NOT apply security patches for the other components.

For example, PeopleTools CPU patches DO NOT include database CPUs – applying one will not automatically apply nor include the other. The same holds for WebLogic and Tuxedo CPU patches.

CPUs for PeopleTools releases are provided for up to 24 months after the next minor release is generally available. The following table will assist in analyzing your PeopleTools CPU level and its certification status against other key PeopleSoft technical components:

PeopleTools (PT)   PT GA Date   PT CPU Delivered Through   Database Certifications   WebLogic Certifications   Tuxedo Certification
----------------   ----------   ------------------------   -----------------------   -----------------------   --------------------
PT8.51             9/10/10      Jan 2014                   11.2.0.4                  10.3.6.0                  10.3.0.0
PT8.52             10/28/11     Jan 2015                   11.2.0.4                  10.3.6.0                  10.3.0.0
PT8.53             2/1/13       7/19/16                    11.2.0.4, 12.1.0.2        10.3.6.0                  11.1.3.0, 11.1.1.2
PT8.54             7/11/14      12/4/17                    11.2.0.4, 12.1.0.2        12.1.3.0, 12.1.2.0        12.1.1.0
PT8.55             12/4/15      TBD                        11.2.0.4, 12.1.0.2        12.1.3.0                  12.1.1.0, 12.1.3.0

  • WebLogic 10.3.6.x is supported through December 2018
  • WebLogic 12.1.2.0 is supported through 6/2016
  • WebLogic 12.1.3.0 is supported through 12/2017 and will be the terminal release of 12.1.x
  • Tuxedo support dates: 10.3 ends 12/2016, 12.1.3 ends in 2020, and all 11.1.x and 12.1.1 end in 2018
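
To see where a given environment actually stands before working through the table above, two quick queries are usually enough – a sketch, assuming you can query the PeopleSoft tables and the data dictionary:

-- PeopleTools release of the environment (PSSTATUS is the standard PeopleTools table)
select toolsrel from psstatus;

-- Database patch history, where applied CPUs/PSUs are recorded
select action_time, action, version, comments
from   dba_registry_history
order  by action_time;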

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

REFERENCES

PeopleSoft Database Security

PeopleSoft Security Quick Reference

 

Oracle PeopleSoft, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Data Recovery Advisor (11g)

Hemant K Chitale - Mon, 2016-06-13 03:13
Here's my YouTube video on using the 11g Data Recovery Advisor from the RMAN command line.

Other videos are in my YouTube Channel.

.
.
.

Categories: DBA Blogs

database option - tuning part 2

Pat Shuff - Mon, 2016-06-13 02:07
In our last entry we looked at the results of a sql tuning advisor. We used SQL Developer to execute our code and create a tuning advisory for the code that we executed. We could have gone through Enterprise Manager and done the same thing, but against historical data rather than live data. In this blog entry we will analyze the same results using the Enterprise Manager Express that comes with the database as a service in the Oracle Cloud. To connect to this service we need to first open up the network ports to enable connection to port 1158. This is done through the database console, or we could do this with ssh tunneling of port 1158 to our database target ip address.
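
If you take the tunneling route, something like the following works – a sketch, assuming the standard opc OS user; the IP address is the one used in this example:

ssh -L 1158:localhost:1158 opc@129.152.134.189
# then browse to https://localhost:1158/em from your local machine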

Once we have access to port 1158 we can connect to the Enterprise Manager Express by going to the ip address of our server, in this instance 129.152.134.189 which we got from the database console, and connect to https://129.152.134.189:1158/em. Note that we might get a security exception since the certificate is self-signed. We need to add an exception and connect to this service. When we are prompted for a login we connect as sys with sysdba rights. Note that we can not do this on Amazon RDS since we do not have sys access to the database in this service.

When we click on the Performance link at the top of the screen we can look at the overall performance of our system and drill down in to reports to get more information.

If we scroll down to the bottom we see a link called Advisor Central. We can follow this link and look at all of the tuning advisors that have been run and examine the results.

We can select a previously run tuning advisor and look at the recommendations that we generated from SQL Developer. When we dive into the report we get a slightly different style of report.

Note that the SQL profile recommends a 19.8% savings if we change the profile. If we click on the Recommendations and expand the information as a table rather than a graph, we see that the pk_dept reference takes up a good amount of time; since it is not referenced, getting rid of it will speed up the select statements. If we click on the compare explain plans link we can see how much of a speed up we will get if we implement the new plan. What I don't see from this is the recommendation to drop the dept d reference that we got from SQL Developer.

Note that the recommendation does state "Consider removing the disconnected table or view from this statement or add a join condition which refers to it" but does not specifically recommend removing dept d from the select statement as is done in SQL Developer.

If we wanted to expand upon use of the tuning advisor we could follow along with the Oracle by Example Tuning Tutorial and look at how to initiate tuning advisories through Enterprise Manager. The first thing done in this tutorial is to initiate nightly tuning tasks by going into the server and enabling the Automated Maintenance Tasks. You first click the Configure button, then click the Configure Automatic SQL Tuning button. After you change the Automatic Configuration of SQL Profiles to yes and click Apply, you have a checkbox window to select the days on which to run the tuning tasks.
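
The same setting can also be flipped from the command line – a sketch using DBMS_AUTO_SQLTUNE, assuming you are connected as a suitably privileged user:

BEGIN
  -- allow the automatic SQL tuning task to accept recommended SQL profiles
  dbms_auto_sqltune.set_auto_tuning_task_parameter('ACCEPT_SQL_PROFILES', 'TRUE');
END;
/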

Once we have this defined we can execute the following code as sys

set echo on

drop user ast cascade;
create user ast identified by ast;
grant dba to ast;
alter system flush shared_pool;
--
-- Turn off AUTOTASK
--
alter system set "_enable_automatic_maintenance"=0;
--
-- Clear out old executions of auto-sqltune
--
exec dbms_sqltune.reset_tuning_task('SYS_AUTO_SQL_TUNING_TASK');
--
-- Drop any profiles on AST queries
--
declare
  cursor prof_names is
    select name from dba_sql_profiles where sql_text like '%AST%';
begin
  for prof_rec in prof_names loop
    dbms_sqltune.drop_sql_profile(prof_rec.name);
  end loop;
end;
/
This creates a user ast. I recommend changing the password to something more complex.

We can then run a series of malformed select statements to generate some synthetic load to report upon and correct. Note that we do this as the ast user and not sys.

set echo off

select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
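
As an aside, if you would rather not paste the statement dozens of times, a short anonymous block that issues the same literal repeatedly has a similar effect – a sketch, and note that the statement text must stay byte-for-byte identical so every execution maps to the same SQL_ID:

declare
  type num_tab is table of number;
  l_ids  num_tab;
  l_qty  num_tab;
begin
  for i in 1 .. 94 loop   -- mirror the 94 pasted executions above
    execute immediate
      'select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) ' ||
      'from sh.sales s, sh.customers c ' ||
      'where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id'
      bulk collect into l_ids, l_qty;
  end loop;
end;
/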

Once we have the workload created we can kick off the sql tuning advisor with some different code.

set echo on

exec dbms_workload_repository.create_snapshot;

variable window varchar2(20);

begin
 select upper(to_char(sysdate,'fmday'))||'_WINDOW' into :window from dual;
end;
/

print window;

--
-- Open the corresponding maintenance window, but with other clients disabled
--

alter system set "_enable_automatic_maintenance"=1
/

exec dbms_auto_task_admin.disable( -
  'auto optimizer stats collection', null, :window);

exec dbms_auto_task_admin.disable( -
  'auto space advisor', null, :window);

exec dbms_scheduler.open_window(:window, null, true);

--
-- Close the maintenance window when sqltune is done
--

exec dbms_lock.sleep(60);

declare
  running number;
begin

  loop
    select count(*)
    into   running
    from   dba_advisor_executions
    where  task_name = 'SYS_AUTO_SQL_TUNING_TASK' and
           status = 'EXECUTING';

    if (running = 0) then
      exit;
    end if;

    dbms_lock.sleep(60);
  end loop;

  dbms_scheduler.close_window(:window);

end;
/

alter system set "_enable_automatic_maintenance"=0
/

--
-- Re-enable the other guys so they look like they are enabled in EM.
-- Still they will be disabled because we have set the underscore.
--

exec dbms_auto_task_admin.enable( -
  'auto optimizer stats collection', null, :window);

exec dbms_auto_task_admin.enable( -
  'auto space advisor', null, :window);

Note that this executes with some errors but still generates a good sql tuning advisor report. If we look back at the Advisor Central we can dive into the report and see what happened.

We can get an 8 second speedup by reformatting the sql select statements. This might or might not be worthy of tuning based on how many times we execute the code.

In summary, we have alternate ways of looking at sql tuning as well as a way of looking at historic data. We turned on automatic tuning reports, which does consume more resources, but if we have extra cpu cycles we can benefit from the reports. The Enterprise Manager Express that comes with database as a service is a very powerful tool. It is not a centralized utility like a centralized Enterprise Manager but can be used to automate and record reports for a single database. This service is only installed with the platform as a service and must be manually added and configured if you install your own database manually on top of infrastructure as a service. Having this common management interface is a huge benefit to DBAs who are asked to manage and maintain instances in the cloud. The Enterprise Manager used in the cloud is the exact same version that is used for an on-premise system. If you choose to install and configure a central Enterprise Manager server you can attach to instances in your data center as well as instances in the cloud. The only requirement is that you have file level access and sys/root access to install the agent.

Links for 2016-06-12 [del.icio.us]

Categories: DBA Blogs

8 queens chess problem: solution in Oracle SQL

XTended Oracle SQL - Sun, 2016-06-12 21:16

This is just another solution to this problem for a standard chessboard, but you can choose any size of board:

with 
 t as (select level i, cast(level as varchar2(1)) c from dual connect by level<=&d)
,x(l,s,n) as (
       select 1 l, c s, chr(97)||c||' ' from t
       union all
       select l+1, x.s||t.c, n||chr(98+l)||i||' '
       from x
            join t
                 on instr(s,c)=0
                    and not exists(select 0 from dual 
                                   where L+1 - t.i = level - substr(s,level,1)
                                      or L+1 + t.i = level + substr(s,level,1)
                                   connect by level<=length(s))
       where L<&d
 )
select n
from x
where l=&d

8x8:
SQL> @tests/f
Size[8]: 8

N
--------------------------------------------------------------------------------
a1 c5 d8 e6 f3 g7 h2 i4
a1 c6 d8 e3 f7 g4 h2 i5
a1 c7 d4 e6 f8 g2 h5 i3
a1 c7 d5 e8 f2 g4 h6 i3
a2 c4 d6 e8 f3 g1 h7 i5
a2 c5 d7 e1 f3 g8 h6 i4
a2 c5 d7 e4 f1 g8 h6 i3
a2 c6 d1 e7 f4 g8 h3 i5
a2 c6 d8 e3 f1 g4 h7 i5
a2 c7 d3 e6 f8 g5 h1 i4
a2 c7 d5 e8 f1 g4 h6 i3
a2 c8 d6 e1 f3 g5 h7 i4
a3 c1 d7 e5 f8 g2 h4 i6
a3 c5 d2 e8 f1 g7 h4 i6
a3 c5 d2 e8 f6 g4 h7 i1
a3 c5 d7 e1 f4 g2 h8 i6
a3 c5 d8 e4 f1 g7 h2 i6
a3 c6 d2 e5 f8 g1 h7 i4
a3 c6 d2 e7 f1 g4 h8 i5
a3 c6 d2 e7 f5 g1 h8 i4
a3 c6 d4 e1 f8 g5 h7 i2
a3 c6 d4 e2 f8 g5 h7 i1
a3 c6 d8 e1 f4 g7 h5 i2
a3 c6 d8 e1 f5 g7 h2 i4
a3 c6 d8 e2 f4 g1 h7 i5
a3 c7 d2 e8 f5 g1 h4 i6
a3 c7 d2 e8 f6 g4 h1 i5
a3 c8 d4 e7 f1 g6 h2 i5
a4 c1 d5 e8 f2 g7 h3 i6
a4 c1 d5 e8 f6 g3 h7 i2
a4 c2 d5 e8 f6 g1 h3 i7
a4 c2 d7 e3 f6 g8 h1 i5
a4 c2 d7 e3 f6 g8 h5 i1
a4 c2 d7 e5 f1 g8 h6 i3
a4 c2 d8 e5 f7 g1 h3 i6
a4 c2 d8 e6 f1 g3 h5 i7
a4 c6 d1 e5 f2 g8 h3 i7
a4 c6 d8 e2 f7 g1 h3 i5
a4 c6 d8 e3 f1 g7 h5 i2
a4 c7 d1 e8 f5 g2 h6 i3
a4 c7 d3 e8 f2 g5 h1 i6
a4 c7 d5 e2 f6 g1 h3 i8
a4 c7 d5 e3 f1 g6 h8 i2
a4 c8 d1 e3 f6 g2 h7 i5
a4 c8 d1 e5 f7 g2 h6 i3
a4 c8 d5 e3 f1 g7 h2 i6
a5 c1 d4 e6 f8 g2 h7 i3
a5 c1 d8 e4 f2 g7 h3 i6
a5 c1 d8 e6 f3 g7 h2 i4
a5 c2 d4 e6 f8 g3 h1 i7
a5 c2 d4 e7 f3 g8 h6 i1
a5 c2 d6 e1 f7 g4 h8 i3
a5 c2 d8 e1 f4 g7 h3 i6
a5 c3 d1 e6 f8 g2 h4 i7
a5 c3 d1 e7 f2 g8 h6 i4
a5 c3 d8 e4 f7 g1 h6 i2
a5 c7 d1 e3 f8 g6 h4 i2
a5 c7 d1 e4 f2 g8 h6 i3
a5 c7 d2 e4 f8 g1 h3 i6
a5 c7 d2 e6 f3 g1 h4 i8
a5 c7 d2 e6 f3 g1 h8 i4
a5 c7 d4 e1 f3 g8 h6 i2
a5 c8 d4 e1 f3 g6 h2 i7
a5 c8 d4 e1 f7 g2 h6 i3
a6 c1 d5 e2 f8 g3 h7 i4
a6 c2 d7 e1 f3 g5 h8 i4
a6 c2 d7 e1 f4 g8 h5 i3
a6 c3 d1 e7 f5 g8 h2 i4
a6 c3 d1 e8 f4 g2 h7 i5
a6 c3 d1 e8 f5 g2 h4 i7
a6 c3 d5 e7 f1 g4 h2 i8
a6 c3 d5 e8 f1 g4 h2 i7
a6 c3 d7 e2 f4 g8 h1 i5
a6 c3 d7 e2 f8 g5 h1 i4
a6 c3 d7 e4 f1 g8 h2 i5
a6 c4 d1 e5 f8 g2 h7 i3
a6 c4 d2 e8 f5 g7 h1 i3
a6 c4 d7 e1 f3 g5 h2 i8
a6 c4 d7 e1 f8 g2 h5 i3
a6 c8 d2 e4 f1 g7 h5 i3
a7 c1 d3 e8 f6 g4 h2 i5
a7 c2 d4 e1 f8 g5 h3 i6
a7 c2 d6 e3 f1 g4 h8 i5
a7 c3 d1 e6 f8 g5 h2 i4
a7 c3 d8 e2 f5 g1 h6 i4
a7 c4 d2 e5 f8 g1 h3 i6
a7 c4 d2 e8 f6 g1 h3 i5
a7 c5 d3 e1 f6 g8 h2 i4
a8 c2 d4 e1 f7 g5 h3 i6
a8 c2 d5 e3 f1 g7 h4 i6
a8 c3 d1 e6 f2 g5 h7 i4
a8 c4 d1 e3 f6 g2 h7 i5

92 rows selected.


It works quite fast:
8*8 ~ 0.1s
9*9 ~ 0.6s
10*10 ~4s

Script for SQL*Plus:
set arrays 1000;
col n for a80;
accept d prompt "Size[8]: " default 8;
with 
 t as (select/*+inline*/ level i, cast(level as varchar2(2)) c from dual connect by level<=&d)
,x(l,s,n) as (
       select 1 l, c s, chr(97)||c||' ' from t
       union all
       select l+1, x.s||t.c, n||chr(98+l)||i||' '
       from x
            join t
                 on instr(s,c)=0
                    and not exists(select 0 from dual 
                                   where L+1 - t.i = level - substr(s,level,1)
                                      or L+1 + t.i = level + substr(s,level,1)
                                   connect by level<=length(s))
       where L<&d
 )
select n
from x
where l=&d
/
col n clear;

Categories: Development

GoldenGate 12.2 cases and quotes

Michael Dinh - Sun, 2016-06-12 19:10

Avoid using quotes and case-sensitive object names if you can.

Process Abends : OGG-00919 Error in COLMAP clause referring to : @GETENV("GGHEADER"…. (Doc ID 1635935.1)

1. Replace double quote with single quote.

Change the reference in the GETENV statement to use single quotes rather than double quotes, for example:

@GETENV('GGHEADER','BEFOREAFTERINDICATOR')

2. Specify NOUSEANSISQLQUOTES in GLOBALS parameter.

The default is now USEANSISQLQUOTES.

USEANSISQLQUOTES

When capturing and mapping object names, such as table names, Oracle GoldenGate always recognizes double-quoted strings as case-sensitive object names,
regardless of whether USEANSISQLQUOTES or NOUSEANSISQLQUOTES is specified.

With USEANSISQLQUOTES enabled, Oracle GoldenGate treats a string within double quotes as a case-sensitive column name, and it treats a string within single quotes as a literal.
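
The same rule applies in plain Oracle SQL, which is worth keeping in mind whenever double quotes show up in TABLE or MAP statements. A quick illustration outside of GoldenGate (the table name here is made up):

-- Double-quoted identifiers are stored case-sensitively in Oracle.
create table "MyTable" (id number);   -- stored as MyTable, not MYTABLE
select * from mytable;                -- fails with ORA-00942: table or view does not exist
select * from "MyTable";              -- succeeds
select 'MyTable' from dual;           -- single quotes are just a string literal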

Demo:

$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (arrow.localdomain) 1> INFO CREDENTIALSTORE

Reading from ./dircrd/:

Default domain: OracleGoldenGate

  Alias: ggs_user
  Userid: ggs_admin

Other domains:

  admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN 

GGSCI (arrow.localdomain) 2> DBLOGIN USERIDALIAS GGS_USER

ERROR: Alias 'GGS_USER' not found in credential store..

GGSCI (arrow.localdomain) 3> DBLOGIN USERIDALIAS ggs_user

Successfully logged into database.

GoldenGate 12.2 discard vs exception table

Michael Dinh - Sun, 2016-06-12 11:44

I have seen several blogs demonstrating using database exception table for error handling.

Truth be told, I have never used the exception table method and am curious to know what the difference is.

Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO

First, there's manual work to create a database exception table for error handling (a rough sketch of such a table is shown below).

Let me know how long it takes you to configure a database exception table for error handling.

Second, with new features, you may need to update the database exception table.

Third, there are no values for bind variables.

Discard functionality is built-in, and with OGG 12.2 the missing delete can easily be converted into an insert.

Which method will you be implementing and why?
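
For reference, here is a rough sketch of what such an exceptions table could look like. The column names match the query output shown further down in this post; the data types and lengths are guesses, so adjust them to your environment.

-- Assumed DDL for the exceptions table queried below; types and sizes are guesses.
create table ggs_admin.exceptions (
  rep_name         varchar2(30),
  table_name       varchar2(61),
  errno            number,
  dberrmsg         varchar2(4000),
  optype           varchar2(20),
  errtype          varchar2(20),
  logrba           number,
  logposition      number,
  committimestamp  timestamp
);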

$ cat dirrpt/r_hawk.dsc

Oracle GoldenGate Delivery for Oracle process started, group R_HAWK discard file opened: 2016-06-11 13:04:48.441744
Current time: 2016-06-11 13:09:07

OCI Error ORA-26787: The row with key ("CUST_CODE", "ORDER_DATE", "ORDER_ID", "PRODUCT_CODE") = (JANE, 1995-11-11:13:52:00, 256, PLANE) does not exist in table DEMO.TCUSTORD
ORA-01403: no data found

Operation failed at seqno 0 rba 190519
Discarding record on action DISCARD on error 1403
Problem replicating DEMO.TCUSTORD to DEMO.TCUSTORD
Record not found
Mapping problem with delete record (target format)...
*
CUST_CODE = JANE
000000: 4a 41 4e 45                                     |JANE            |

ORDER_DATE = 1995-11-11 13:52:00
000000: 31 39 39 35 2d 31 31 2d 31 31 20 31 33 3a 35 32 |1995-11-11 13:52|
000010: 3a 30 30                                        |:00             |

PRODUCT_CODE = PLANE
000000: 50 4c 41 4e 45                                  |PLANE           |

ORDER_ID = 256
000000: 32 35 36                                        |256             |

PRODUCT_PRICE = 133300.00
000000: 31 33 33 33 30 30 2e 30 30                      |133300.00       |

PRODUCT_AMOUNT = 1
000000: 31                                              |1               |

TRANSACTION_ID = 100
000000: 31 30 30                                        |100             |

*
Current time: 2016-06-11 13:09:07

OCI Error ORA-26787: The row with key ("CUST_CODE", "ORDER_DATE", "ORDER_ID", "PRODUCT_CODE") = (WILL, 1994-09-30:15:33:00, 144, CAR) does not exist in table DEMO.TCUSTORD
ORA-01403: no data found

Operation failed at seqno 0 rba 190727
Discarding record on action DISCARD on error 1403
Problem replicating DEMO.TCUSTORD to DEMO.TCUSTORD
Record not found
Mapping problem with delete record (target format)...
*
CUST_CODE = WILL
000000: 57 49 4c 4c                                     |WILL            |

ORDER_DATE = 1994-09-30 15:33:00
000000: 31 39 39 34 2d 30 39 2d 33 30 20 31 35 3a 33 33 |1994-09-30 15:33|
000010: 3a 30 30                                        |:00             |

PRODUCT_CODE = CAR
000000: 43 41 52                                        |CAR             |

ORDER_ID = 144
000000: 31 34 34                                        |144             |

PRODUCT_PRICE = 17520.00
000000: 31 37 35 32 30 2e 30 30                         |17520.00        |

PRODUCT_AMOUNT = 3
000000: 33                                              |3               |

TRANSACTION_ID = 100
000000: 31 30 30                                        |100             |

*
Current time: 2016-06-11 13:09:07

OCI Error ORA-26787: The row with key ("CUST_CODE") = (JANE) does not exist in table DEMO.TCUSTMER
ORA-01403: no data found

Operation failed at seqno 0 rba 190905
Discarding record on action DISCARD on error 1403
Problem replicating DEMO.TCUSTMER to DEMO.TCUSTMER
Record not found
Mapping problem with delete record (target format)...
*
CUST_CODE = JANE
000000: 4a 41 4e 45                                     |JANE            |

NAME = ROCKY FLYER INC.
000000: 52 4f 43 4b 59 20 46 4c 59 45 52 20 49 4e 43 2e |ROCKY FLYER INC.|

CITY = DENVER
000000: 44 45 4e 56 45 52                               |DENVER          |

STATE = CO
000000: 43 4f                                           |CO              |

*
Current time: 2016-06-11 13:09:07

OCI Error ORA-26787: The row with key ("CUST_CODE") = (WILL) does not exist in table DEMO.TCUSTMER
ORA-01403: no data found

Operation failed at seqno 0 rba 191043
Discarding record on action DISCARD on error 1403
Problem replicating DEMO.TCUSTMER to DEMO.TCUSTMER
Record not found
Mapping problem with delete record (target format)...
*
CUST_CODE = WILL
000000: 57 49 4c 4c                                     |WILL            |

NAME = BG SOFTWARE CO.
000000: 42 47 20 53 4f 46 54 57 41 52 45 20 43 4f 2e    |BG SOFTWARE CO. |

CITY = SEATTLE
000000: 53 45 41 54 54 4c 45                            |SEATTLE         |

STATE = WA
000000: 57 41                                           |WA              |

*
Process Abending : 2016-06-11 13:09:24

select * from exceptions

ARROW:(GGS_ADMIN@thor):PRIMARY> @pr "select * from exceptions";
REP_NAME                      : R_HAWK
TABLE_NAME                    : DEMO.TCUSTORD
ERRNO                         : 1403
DBERRMSG                      : OCI Error ORA-01403: no data found, SQL 
OPTYPE                        : DELETE
ERRTYPE                       : DB
LOGRBA                        : 52
LOGPOSITION                   : 25819152
COMMITTIMESTAMP               : 11-JUN-16 01.17.37.000416 PM
-------------------------
REP_NAME                      : R_HAWK
TABLE_NAME                    : DEMO.TCUSTORD
ERRNO                         : 1403
DBERRMSG                      : OCI Error ORA-01403: no data found, SQL 
OPTYPE                        : DELETE
ERRTYPE                       : DB
LOGRBA                        : 52
LOGPOSITION                   : 25819772
COMMITTIMESTAMP               : 11-JUN-16 01.17.37.000416 PM
-------------------------
REP_NAME                      : R_HAWK
TABLE_NAME                    : DEMO.TCUSTMER
ERRNO                         : 1403
DBERRMSG                      : OCI Error ORA-01403: no data found, SQL 
OPTYPE                        : DELETE
ERRTYPE                       : DB
LOGRBA                        : 52
LOGPOSITION                   : 25821712
COMMITTIMESTAMP               : 11-JUN-16 01.17.37.000416 PM
-------------------------
REP_NAME                      : R_HAWK
TABLE_NAME                    : DEMO.TCUSTMER
ERRNO                         : 1403
DBERRMSG                      : OCI Error ORA-01403: no data found, SQL 
OPTYPE                        : DELETE
ERRTYPE                       : DB
LOGRBA                        : 52
LOGPOSITION                   : 25822128
COMMITTIMESTAMP               : 11-JUN-16 01.17.37.000416 PM
-------------------------

PL/SQL procedure successfully completed.

Convert DELETE into INSERT

Not all of the columns' values are available, because I did not configure things properly.

ARROW:(DEMO@thor):PRIMARY> desc DEMO.TCUSTORD
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 CUST_CODE                                             NOT NULL VARCHAR2(4)
 ORDER_DATE                                            NOT NULL DATE
 PRODUCT_CODE                                          NOT NULL VARCHAR2(8)
 ORDER_ID                                              NOT NULL NUMBER
 PRODUCT_PRICE                                                  NUMBER(8,2)
 PRODUCT_AMOUNT                                                 NUMBER(6)
 TRANSACTION_ID                                                 NUMBER

ARROW:(DEMO@thor):PRIMARY> select count(*) from  DEMO.TCUSTORD
  2

ARROW:(DEMO@thor):PRIMARY> select count(*) from  DEMO.TCUSTORD;

  COUNT(*)
----------
         0

ARROW:(DEMO@thor):PRIMARY> insert into DEMO.TCUSTORD ("CUST_CODE", "ORDER_DATE", "ORDER_ID", "PRODUCT_CODE")
  2  values ('JANE', '1995-11-11:13:52:00', 256, 'PLANE');

1 row created.

ARROW:(DEMO@thor):PRIMARY> select count(*) from  DEMO.TCUSTORD;

  COUNT(*)
----------
         1

ARROW:(DEMO@thor):PRIMARY>

How To Control Row Removal in ADF BC

Andrejus Baranovski - Sat, 2016-06-11 11:28
There is a flaw in the ADF BC remove operation. A row can be successfully removed in ADF BC - it disappears from the UI as well - but if there is a DB integrity constraint violation, the row is not removed in the DB and an error message is displayed to the user. This can be misleading: the user sees a message about the failed removal, but at the same time the row is no longer present in the UI.

This is easier to explain with an example. Let's imagine we want to remove the IT_PROG job (there are employees assigned to this job, so the row removal would fail in the DB):


On delete, the row is removed from ADF BC and the UI, but the operation fails in the DB and an error is returned:


ADF BC completes the row removal before the row is actually removed in the DB. This is why ADF BC doesn't know about the failed removal and is not able to keep the row.

Technically such behavior is valid, but it is hardly understandable for a business user: if the row is not removed, it should stay in the application.

We can solve this by overriding the doDML method and executing the row removal through a custom method defined in the VO implementation class. In doDML we can catch the DML constraint exception during the delete and execute a refresh for the removed row. The removed row is still accessible in doDML; it is not dead yet, and calling refresh allows us to fix the row state. We then throw an exception and catch it in the VO implementation class method, where we refresh the rowset and set back the current row:


The custom method that executes the remove lives in the VO implementation class. The current row key is saved before the remove and restored in case of an exception. We must call executeQuery to refresh the rowset before setting back the current row. In this example, commit is called right after the row is removed, which produces the DB constraint error and lets us process it in the context of the VO. You may implement similar logic from a global commit method; in that case you would need to include information about the VO to be refreshed in the exception message (raised in doDML):


Try to remove the same IT_PROG row now:


The failed removal message is displayed and the row remains in the application:


Download sample application - ADFDeleteControlApp.zip.

GoldenGate 12.2 checkprm and oggerr

Michael Dinh - Sat, 2016-06-11 10:05

At first, I wondered what the true benefit of the new checkprm utility is, since CHECKPARAMS already does this.

For someone lazy like me, it means not having to modify code to add and remove CHECKPARAMS from GoldenGate parameter files.

Once, there was oerr and now there is oggerr to look up error codes.

Reference: Oracle GoldenGate Release Notes for 12c (12.2.0.1)

$ checkprm dirprm/e_hawk.prm

2016-06-11 07:29:47  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=hawk.

2016-06-11 07:29:47  INFO    OGG-02095  Successfully set environment variable ORACLE_HOME=/u01/app/oracle/product/11.2.0/se_1.

(e_hawk.prm) line 15: Parameter [UPDATEMETADATA] is not valid for this configuration.

2016-06-11 07:29:47  INFO    OGG-10139  Parameter file dirprm/e_hawk.prm:  Validity check: FAIL.

$ cat dirprm/e_hawk.prm

EXTRACT e_hawk
-- CHECKPARAMS
USERIDALIAS ggs_user
EXTTRAIL ./dirdat/aa
INCLUDE ./dirprm/global_ggenv_se.inc
-- max_sga_size 1G per IE or IR process then add 25%, not to exeed 3.5G
-- max_sga_size > 3.5G not recommended
-- parallelism 1 is for standard edition database
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 128, parallelism 1)
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL INCLUDE ALL &
EXCLUDE OBJTYPE 'SEQUENCE' &
EXCLUDE UNMAPPED OBJNAME GGS_ADMIN.*
DDLOPTIONS UPDATEMETADATA, REPORT
WARNLONGTRANS 15m, CHECKINTERVAL 5m
TABLE DEMO.*;

$ checkprm dirprm/e_hawk.prm

2016-06-11 07:30:31  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=hawk.

2016-06-11 07:30:31  INFO    OGG-02095  Successfully set environment variable ORACLE_HOME=/u01/app/oracle/product/11.2.0/se_1.

2016-06-11 07:30:31  INFO    OGG-10139  Parameter file dirprm/e_hawk.prm:  Validity check: PASS.

Runtime parameter validation is not reflected in the above check.

$ oerr ora 01403

01403, 00000, "no data found"
// *Cause: No data was found from the objects.
// *Action: There was no data from the objects which may be due to end of fetch.

$ oggerr ogg 06439

06439, 00000, "No unique key is defined for table {0}. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key."
// *{0}: tableObjName (Object name)
// *Cause:  No unique key is defined for the specified table.
// *Action: Use a KEYCOLS clause in TABLE or MAP to define a key that contains
//          unique values. Using a defined key ensures uniqueness and improves
//          performance.

$ oggerr ogg 01296

01296, 00000, "Error mapping from {0} to {1}"
// *{0}: sourceTable (Qualified table name)
// *{1}: targetTable (Qualified table name)
// *Cause:  The mapping of the specified source and target tables failed.
// *Action: Examine the accompanying messages that provide details about the
//          mapping failure, and resolve the problem based on those messages.
//          If the problem persists, contact Oracle Support.

Putting SQL in the corner with Javascript in SQLCL

Kris Rice - Fri, 2016-06-10 15:03
Here's a pretty small javascript file that allows for running sql in the background of your current sqlcl session.  This is a trivial example of a sql script that has a sleep in it to simulate something taking time. It also prints the SID to show it's a different connection than the base. select 'DB SID ' ||sys_context('USERENV','SID') || ' is going to sleep' bye from dual; begin

Tuning SQL with Javascript in SQLCL

Kris Rice - Fri, 2016-06-10 15:03
In case anyone missed it, #sqlcl has the ability to run javascript, which opens a lot of options. Here's a simple example that shows how to use it: open a new database connection, collect stats on the base connection, do work on the main connection, collect stats again, and only print the stats that changed. In SQL Developer, the autotrace feature has for a long time selected the session
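
The stat-diff idea can be approximated in a single session with plain SQL: snapshot the session statistics, run the workload, then report only the statistics whose values changed. A rough sketch follows (the post itself does this across two connections in JavaScript; the table name stat_snap is made up, and SELECT access on V$MYSTAT and V$STATNAME is assumed):

-- Snapshot the current session statistics into a scratch table.
create table stat_snap as
  select sn.name, ms.value
  from   v$mystat ms
  join   v$statname sn on sn.statistic# = ms.statistic#;

-- ... run the SQL you want to examine ...

-- Report only the statistics that changed since the snapshot, then clean up.
select sn.name, ms.value - s.value as delta
from   v$mystat ms
join   v$statname sn on sn.statistic# = ms.statistic#
join   stat_snap s  on s.name = sn.name
where  ms.value - s.value <> 0
order  by delta desc;

drop table stat_snap purge;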

Carbonated SQLCL - aka SODA support

Kris Rice - Fri, 2016-06-10 15:03
Oracle Database 12.1.0.2 introduced JSON support. This is supported in REST Data Services already and now in the latest SQLCL. This means ORDS can serve up the access to client applications and developers can have a command-line way to interact and test features/queries/... in a nutshell, a shell for JSON document storage. To use this feature the database will need to have patch #20885778

SQLCL - Blob loading ALL the files

Kris Rice - Fri, 2016-06-10 15:03
The github repo for all the things the DB Tools team does will be posted here: https://github.com/oracle/Oracle_DB_Tools. This will include sqlcl, sqldev, modeler, ords, and who knows what other things we have up our sleeves to post. This repo is going to be made up of examples and getting-started things like a new extension for sqldev, a custom transformation for modeler, ords integration with

AskTom's print_table - SQLCL version

Kris Rice - Fri, 2016-06-10 15:03
There was an idea logged over on community.oracle.com to add an AskTom-style print_table to sqlcl. Here's how easy it is now that there's scripting in sqlcl. This is a simple js file that loops over the rows and columns and prints them. When using executeReturnListofList, the first row holds the column names for reference. With that data, it's quite simple to customize any output.
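
For comparison, the classic AskTom approach does the same thing server-side in PL/SQL with DBMS_SQL. A minimal sketch of that style is below (the procedure name and formatting are illustrative, not the exact AskTom code; the @pr script used earlier in this feed for "select * from exceptions" produces similar-looking output):

-- Illustrative print_table: prints each row as "column : value" pairs.
create or replace procedure print_table (p_query in varchar2)
authid current_user
is
  l_cursor  integer := dbms_sql.open_cursor;
  l_desc    dbms_sql.desc_tab;
  l_colcnt  number;
  l_value   varchar2(4000);
  l_status  integer;
begin
  dbms_sql.parse(l_cursor, p_query, dbms_sql.native);
  dbms_sql.describe_columns(l_cursor, l_colcnt, l_desc);
  -- define every column as VARCHAR2 so it can be printed generically
  for i in 1 .. l_colcnt loop
    dbms_sql.define_column(l_cursor, i, l_value, 4000);
  end loop;
  l_status := dbms_sql.execute(l_cursor);
  -- loop over the rows and columns, printing name : value pairs
  while dbms_sql.fetch_rows(l_cursor) > 0 loop
    for i in 1 .. l_colcnt loop
      dbms_sql.column_value(l_cursor, i, l_value);
      dbms_output.put_line(rpad(l_desc(i).col_name, 30) || ': ' || l_value);
    end loop;
    dbms_output.put_line('-------------------------');
  end loop;
  dbms_sql.close_cursor(l_cursor);
exception
  when others then
    if dbms_sql.is_open(l_cursor) then
      dbms_sql.close_cursor(l_cursor);
    end if;
    raise;
end;
/

-- Usage: set serveroutput on, then exec print_table('select * from exceptions');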
