Feed aggregator

Oracle Linux 7 update 5 preview available for download

Wim Coekaerts - Sat, 2018-03-03 11:38

Oracle Linux 7 update 5 is in the works... and to give users a free sneak preview of what's coming, we have put a preview release out on OTN. We haven't typically done this in the past; we simply released new update versions as they became generally available. But some users like to try things out early on.

[disclaimer] This is a preview - do not use it in production - [/disclaimer] ...

If you run into issues, do let us know, so we can make sure they are addressed before the GA release.

You can download it from OTN here.

Classification - Machine Learning Chatbot with TensorFlow

Andrejus Baranovski - Sat, 2018-03-03 10:10
A visual conversation flow is the first thing to create when you want to build a chatbot. Such a flow helps to define a proper set of intents along with the dialog paths. Otherwise it is very easy to get lost in conversation transitions, and that will lead to a failed chatbot implementation. Our chatbot for a medical system doesn't make any decisions; instead it helps the user to work with an enterprise system. It takes user input and, during the conversation, leads to a certain API call, which in the end triggers the enterprise system to execute one action or another. If the user is looking for a patient's blood pressure results, the chatbot will open the blood pressure module with the patient ID. If the user wants to edit or review blood pressure results in general, the chatbot will load the blood pressure results module without parameters. This kind of chatbot is very helpful in large and complex enterprise systems, because it helps to onboard new users much more quickly, without extra training in system usage. Example of a visual conversation flow for the chatbot:


Conversation intents can be kept in a JSON file, where you list conversation patterns mapped to tags, responses and contextual information. A chatbot is not only about machine learning and user input processing; handling the contextual flow of the conversation is just as important, and this is usually done outside of the machine learning area, in another module. We will look into that later. The machine learning part, a neural network, is responsible for letting the chatbot calculate the tag probability based on user input. In other words, machine learning brings back the best matching tag for the current sentence, based on the predefined intent patterns. Once we have a probability for the intent tag, we know what the user wants; we can set the conversation context and, on the next user request, react based on the current context:
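A minimal sketch of what such an intents file could look like (the tag names, patterns and the context_set field below are purely illustrative, not the actual file from this project):

{"intents": [
    {"tag": "blood_pressure_search",
     "patterns": ["Blood pressure results for a patient", "Patient blood pressure history"],
     "responses": ["Please provide the patient ID"],
     "context_set": "search_blood_pressure_by_patient"},
    {"tag": "blood_pressure",
     "patterns": ["Blood pressure", "Review blood pressure results"],
     "responses": ["Opening the blood pressure module"]}
]}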


TensorFlow runs the neural network, which trains on the supplied list of intents. Each training run may produce different learning results, so you should check the total loss value: the lower the value, the better the learning result. You will probably run training multiple times to get an optimal learning model:
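As a rough sketch of the training step (this assumes a tflearn-style wrapper, as used in the contextual chatbot article referenced below; train_x and train_y stand in for the real bag-of-words vectors and one-hot tag vectors produced from the intents file):

import tflearn

# placeholder training data: in reality these are bag-of-words vectors (train_x)
# and one-hot encoded intent tags (train_y) built from the intents JSON
train_x = [[0, 1, 0, 1], [1, 0, 1, 0]]
train_y = [[1, 0], [0, 1]]

# two hidden layers of 8 neurons, softmax output over the intent tags
net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
net = tflearn.regression(net)

model = tflearn.DNN(net)
# watch the reported loss: the lower the final value, the better the learning result
model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)
model.save('model.tflearn')   # the saved model is reused later by the classification API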


TensorFlow can save the learned model so that it can be reused by the classification API. The REST interface which calls the classification API is developed as a separate TensorFlow module; REST is handled by the Flask library installed into the TensorFlow runtime:


The classification function gets user input from the REST call and runs it through the TensorFlow model. Results with a probability higher than the defined threshold are collected into an ordered array and returned. There is also a classification function without the REST annotation, for local tests within the TensorFlow runtime:
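A hedged sketch of how the classification function and its Flask wrapper could fit together (model, bow, words and classes are assumed to come from the training module above; the route name and threshold are illustrative, not the project's actual code):

from flask import Flask, request, jsonify

ERROR_THRESHOLD = 0.25
app = Flask(__name__)

def classify(sentence):
    # run the user sentence through the trained TensorFlow model
    results = model.predict([bow(sentence, words)])[0]
    # keep only the tags whose probability exceeds the threshold
    results = [(classes[i], float(r)) for i, r in enumerate(results) if r > ERROR_THRESHOLD]
    # order by probability, best match first
    results.sort(key=lambda x: x[1], reverse=True)
    return results

@app.route('/classify', methods=['POST'])
def classify_rest():
    sentence = request.json.get('sentence', '')
    return jsonify(classify(sentence))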


Let's see how classification works; the result of classification will drive the next action for the chatbot. Each classification request returns the matched tag and a probability. User input is not identical to the patterns defined in the intents, which is why the matching probability may differ: this is the core part of machine learning. The neural network constructed with TensorFlow, based on the learned model, picks the best tag for the current user input.

Take the user input "Checking blood pressure results for patient". This input could be related to both the blood_pressure_search and blood_pressure tags, but classification assigns a higher probability to the first option, and that is correct. The same applies to the user input "Any recommendations for adverse drugs?":
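For illustration only (the tag names and probabilities below are invented; real values depend on your intents and the trained model), a local test might behave like this:

classify('Checking blood pressure results for patient')
# e.g. [('blood_pressure_search', 0.91), ('blood_pressure', 0.27)]

classify('Any recommendations for adverse drugs?')
# e.g. [('adverse_drugs', 0.88)]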


Through the REST endpoint we can call the classification function from outside the TensorFlow environment. This allows us to maintain the conversation context outside TensorFlow:
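For example, from any Python client outside the TensorFlow runtime (the host, port and payload shape are assumptions matching the Flask sketch above):

import requests

resp = requests.post('http://localhost:5000/classify',
                     json={'sentence': 'Checking blood pressure results for patient'})
print(resp.json())   # list of (tag, probability) pairs, best match first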


Useful resources:

- TensorFlow notebooks and the intents JSON are available in a GitHub repository.
- Excellent article about Contextual Chatbots with TensorFlow
- My previous post about Red Samurai chatbot

SQL with Apache Spark, easy!

Kubilay Çilkara - Sat, 2018-03-03 03:10
Reading about cluster computing developments like Apache Spark and its SQL support, I decided to find out for myself.

What I was after was to see how easy it is to write SQL in Spark-SQL. In this micro-post I will show you how easy it is to run SQL against a JSON file.

For my experiment I will use my chrome_history.json file, which you can export from your Chrome browser using the extension from www.JSON-XLS.com. To run the SQL query with PySpark on my laptop I will use the PyCharm IDE. After a little bit of configuration in PyCharm, setting up the environment (SPARK_HOME), there it is: it only takes 3 lines to be able to SQL-query a JSON document in Spark-SQL.
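Roughly along these lines (a sketch only; the title and url column names are assumptions and depend on what the JSON-XLS export actually produces):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("chrome-history").getOrCreate()

df = spark.read.json("chrome_history.json")                   # 1. read the JSON file into a DataFrame
df.createOrReplaceTempView("history")                         # 2. register it as a SQL view
spark.sql("SELECT title, url FROM history LIMIT 10").show()   # 3. plain SQL against the JSON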




Think of the possibilities with SQL, and the 'cluster' partitioning and parallelisation you can achieve.

Links:
Apache Spark: https://spark.apache.org/downloads.html
PyCharm: https://www.jetbrains.com/pycharm/
Categories: DBA Blogs

Check row was skipped by FOR UPDATE SKIP LOCKED

Tom Kyte - Fri, 2018-03-02 19:46
Hi Tom, Logic of the SP is to return one Batch ID at a time to application for processing and same Batch ID should not be returned back to other instance if that batch is picked by an instance. And the application server is in a multi clustered en...
Categories: DBA Blogs

Wait event acknowledge over PGA limit

Tom Kyte - Fri, 2018-03-02 19:46
Hi Tom, we just upgraded our RAC oracle (two nodes) from 12.1 to 12.2. During our load testing, we found the top wait event is acknowledge over PGA limit. I increased pga_aggregate_target from 500m to 2g. Still saw this wait event. I checke...
Categories: DBA Blogs

Can't create materialized view using left join and group by

Tom Kyte - Fri, 2018-03-02 19:46
I'm trying to create a materialized view that joins two tables then groups by various ids but I keep running into this error: <code>ORA-12015: Neither ROWIDs and nor primary key constraints are supported for queries.</code> I tried this: ...
Categories: DBA Blogs

Error in Encrypting a CLOB data type using dbms_crypto package

Tom Kyte - Fri, 2018-03-02 19:46
Hi Tom, I've shared a function's script which basically tries to encrypt a CLOB datatype using the bdms_crypto package. The function is created successfully. But it throws me a run time error stating 'ORA-01405: fetched column value is NULL'. I...
Categories: DBA Blogs

MySQL 8.0.5 community edition preview for Oracle Linux 7 for ARM64 (preview)

Wim Coekaerts - Fri, 2018-03-02 13:00

We just published a build of MySQL 8.0.5 community server on yum.oracle.com, in the ARM64 "latest" repo.

We released an Oracle Linux 7.4 based preview for ARM64 servers a little while ago. See the announcement here.

It is a publicly available, free download; no authentication or access keys are needed.

Once you have the OL7.4 preview installed, you can get MySQL 8 going using the usual yum commands:

# yum install mysql-community-server

Conditional SQL – 5

Jonathan Lewis - Fri, 2018-03-02 06:49

Here’s a note that has been sitting around for more than 3 years (the draft date is Jan 2015), waiting for me to finish it off; and in that time we’ve got a new version of Oracle that changes the solution to the problem it presented. (I also managed to write “Conditional SQL –  6” in the intervening period !)

This posting started with a question on the OTN (now ODC) database forum about an execution plan used by 11.2.0.3.  Here’s a model to represent the data and the query:

rem
rem     Script:         null_plan_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2015
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment here to avoid format issue
)
select
        rownum  catentry_id,
        case
                when mod(rownum-1,100) > 0 then mod(rownum-1,100)
        end     member_id,
        case
                when trunc((rownum-1)/100) > 0 then trunc((rownum-1)/100)
        end     partnumber,
        rpad('x',100)   padding
from
        generator,
        generator
where
        rownum <= 100 * 100 -- > comment here to avoid format issue
;

execute dbms_stats.gather_table_stats(user,'catentry');

create unique index cat_i0 on catentry(member_id, partnumber) compress 1;
--  create        index cat_i1 on catentry(member_id, partnumber, 0) compress 1;
--  create        index cat_i2 on catentry(partnumber, member_id, 0) compress 1;

variable b1 number
variable b2 number
variable b3 number
variable b4 number

begin
        :b1 := 22;
        :b2 := 1;
        :b3 := 44;
        :b4 := 1;
end;
/

select
        catentry_id
from
        catentry
where
        (   partnumber= :b1
         or (0 = :b2 and partnumber is null)
        )
and     (    member_id= :b3
         or (0 = :b4 and member_id is null)
        )
;

select * from table(dbms_xplan.display_cursor);

==============================================================================

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |       |       |    23 (100)|          |
|*  1 |  TABLE ACCESS FULL| CATENTRY |     1 |    10 |    23   (5)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter((("PARTNUMBER"=:B1 OR ("PARTNUMBER" IS NULL AND 0=:B2))
              AND ("MEMBER_ID"=:B3 OR ("MEMBER_ID" IS NULL AND 0=:B4))))

The question this example raised was, effectively: “Why didn’t Oracle use bind peeking to work out that the best plan for this query – with these input values – was an index range scan?”

The basic answer to this question is this: “No matter how clever Oracle can be with bind peeking and execution plans, it has to produce an execution plan that will give the right answer whatever the supplied values might be.”

The OP was hoping that the optimizer would see :b2 and :b4 were arriving with the value 1, infer that “0 = 1” is always false, and reduce the query predicate to “partnumber =22 and member_id = 44” to produce the following plan:


----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | CAT_I0   |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PARTNUMBER"=22 AND "MEMBER_ID"=44)

But this plan could produce the wrong results if the next execution of the query supplied zeros for :b2 or :b4, so Oracle has to do something more generic. (Bear in mind that adaptive cursor sharing keeps reusing the same execution plan until it detects that the performance for some input values is bad; it doesn’t pre-emptively create new plans based on the incoming values – though in principle it might be possible for the Oracle developers to introduce code that can recognise special cases for predicates of the form “constant1 = constant2”).

If you review the SQL statement you can see that it’s clearly trying to allow the user to find data about member_ids and partnumbers where both, neither, or either value is allowed to be null: a couple of “if – then – else” conditions that should have been handled in the client code have been embedded in the SQL. As we have seen several times before, if you can’t change the client code then you have to hope that Oracle will use some clever transformation to handle the query in sections.

We can infer from various details of the posting that the member_id and partnumber columns were both allowed to be null, so if we want to make sure that Oracle always uses an indexed access path to acquire data for this query we need to have an index which starts with those two columns and then has at least one column which is guaranteed to be non-null so, for example, we could simply drop the current index and replace it with one that has a fixed zero on the end:

create index cat_i1 on catentry(member_id, partnumber, 0) compress 1;

With my particular data set, query, and version of Oracle this didn’t make any difference to the plan. But then I thought about the data definition and realised (and checked) that the index had a terrible clustering_factor, so I dropped it and created it with the first two columns in the opposite order:

create index cat_i2 on catentry(partnumber, member_id, 0) compress 1;

Side note:
You’ll notice that I’ve replaced the original unique index with a non-unique index. This was necessary because there were many rows where both partnumber and member_id were null, so if I want to maintain the logic of the previous unique index I’ll need to add a unique constraint on (member_id, partnumber). It’s possible, of course, that in similar circumstances I might want both indexes – one for the uniqueness and to access the data using only one of the columns, the other to access the data using only the other column.
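For example, something like the following (the constraint name is mine, purely for illustration; by default Oracle will create its own unique index to enforce the constraint unless a "using index" clause points it at an existing one):

alter table catentry add constraint catentry_uk unique (member_id, partnumber);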

With this index in place, and unhinted, the plan I got from 11.2.0.4 changed to use concatenation with an impressive four-way split:


------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |       |       |    12 (100)|          |
|   1 |  CONCATENATION                |          |       |       |            |          |
|*  2 |   FILTER                      |          |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | CAT_I2   |     1 |       |     2   (0)| 00:00:01 |
|*  5 |   FILTER                      |          |       |       |            |          |
|*  6 |    TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | CAT_I2   |     1 |       |     2   (0)| 00:00:01 |
|*  8 |   FILTER                      |          |       |       |            |          |
|*  9 |    TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|* 10 |     INDEX RANGE SCAN          | CAT_I2   |     1 |       |     2   (0)| 00:00:01 |
|* 11 |   TABLE ACCESS BY INDEX ROWID | CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|* 12 |    INDEX RANGE SCAN           | CAT_I2   |     1 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((0=:B2 AND 0=:B4))
   4 - access("PARTNUMBER" IS NULL AND "MEMBER_ID" IS NULL)
       filter("MEMBER_ID" IS NULL)
   5 - filter(0=:B2)
   6 - filter((LNNVL("MEMBER_ID" IS NULL) OR LNNVL(0=:B4)))
   7 - access("PARTNUMBER" IS NULL AND "MEMBER_ID"=:B3)
       filter("MEMBER_ID"=:B3)
   8 - filter(0=:B4)
   9 - filter((LNNVL("PARTNUMBER" IS NULL) OR LNNVL(0=:B2)))
  10 - access("PARTNUMBER"=:B1 AND "MEMBER_ID" IS NULL)
  11 - filter(((LNNVL("MEMBER_ID" IS NULL) OR LNNVL(0=:B4)) AND
              (LNNVL("PARTNUMBER" IS NULL) OR LNNVL(0=:B2))))
  12 - access("PARTNUMBER"=:B1 AND "MEMBER_ID"=:B3)


To execute this plan the run-time engine works as follows:

  • Operation 2: if :b2 and :b4 are both zero we use the index to find the rows where member_id and partnumber are null (the filter “member_id is null” seems to be redundant).
  • Operation 5: if :b2 is zero we use the index to find rows where the partnumber is null and the member_id is the supplied value (and if that supplied value is null the access will immediately return zero rows because of the equality predicate), and discard any rows that have already been returned by operation 2.
  • Operation 8: if :b4 is zero we use the index to find rows where the partnumber is the supplied value (and if that supplied value is null the access will immediately return zero rows because of the equality predicate) and the member_id is null, and discard any rows that have already been returned by operation 2.
  • Operations 11 and 12 will always run – using the index to find rows that match with equality on both the incoming member_id and partnumber, discarding any rows already returned by the previous operations, and obviously not matching any rows where either column “IS” null.

The critical feature of this plan, of course, is that we got it because we have given Oracle an efficient option to find the rows where both member_id and partnumber are null – and that allows the rest of the concatenation options to take place.

Hints and Upgrades

Interestingly, after the clue that 11g would happily use concatenation with a “good enough” index I went back to the example where I’d just added a zero to the existing index and checked to see what would happen if I added a /*+ use_concat */ hint (without any of the qualifying parameters that the hint can now use) and got the same path with concatenation. The fact that the path appeared wasn’t the interesting bit – see if you can spot what is the interesting bit:

------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |       |       |    12 (100)|          |
|   1 |  CONCATENATION                |          |       |       |            |          |
|*  2 |   FILTER                      |          |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | CAT_I1   |     1 |       |     2   (0)| 00:00:01 |
|*  5 |   FILTER                      |          |       |       |            |          |
|*  6 |    TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | CAT_I1   |     1 |       |     2   (0)| 00:00:01 |
|*  8 |   FILTER                      |          |       |       |            |          |
|*  9 |    TABLE ACCESS BY INDEX ROWID| CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|* 10 |     INDEX RANGE SCAN          | CAT_I1   |     1 |       |     2   (0)| 00:00:01 |
|* 11 |   TABLE ACCESS BY INDEX ROWID | CATENTRY |     1 |    10 |     3   (0)| 00:00:01 |
|* 12 |    INDEX RANGE SCAN           | CAT_I1   |     1 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Check the cost, and compare it with the cost of the full tablescan. The hinted path has a lower cost than the default path. I think this may be another case of an “unknowable” range scan being ignored in favour of a known alternative.

Finally, we get to today – when I tested the code against 12.1.0.2 and 12.2.0.1. Nothing exciting happened in 12.1.0.2 – the plans were just like the 11g plans, but here’s the plan I got in 12.2 with the “bad” indexing (original index with added zero column) without any hints in the SQL:


------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |       |       |     9 (100)|          |
|   1 |  VIEW                                    | VW_ORE_37A7142B |     4 |    52 |     9   (0)| 00:00:01 |
|   2 |   UNION-ALL                              |                 |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID           | CATENTRY        |     1 |    10 |     2   (0)| 00:00:01 |
|*  4 |     INDEX UNIQUE SCAN                    | CAT_I1          |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    FILTER                                |                 |       |       |            |          |
|   6 |     TABLE ACCESS BY INDEX ROWID BATCHED  | CATENTRY        |     1 |    10 |     2   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                    | CAT_I1          |     1 |       |     1   (0)| 00:00:01 |
|*  8 |    FILTER                                |                 |       |       |            |          |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED  | CATENTRY        |     1 |    10 |     2   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                    | CAT_I1          |     1 |       |     1   (0)| 00:00:01 |
|* 11 |    FILTER                                |                 |       |       |            |          |
|  12 |     TABLE ACCESS BY INDEX ROWID BATCHED  | CATENTRY        |     1 |    10 |     3   (0)| 00:00:01 |
|* 13 |      INDEX RANGE SCAN                    | CAT_I1          |     1 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      ...
      OR_EXPAND(@"SEL$1" (1) (2) (3) (4))
      ...
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("MEMBER_ID"=:B3 AND "PARTNUMBER"=:B1)
   5 - filter(0=:B4)
   7 - access("MEMBER_ID" IS NULL AND "PARTNUMBER"=:B1)
       filter(("PARTNUMBER"=:B1 AND LNNVL("MEMBER_ID"=:B3)))
   8 - filter(0=:B2)
  10 - access("MEMBER_ID"=:B3 AND "PARTNUMBER" IS NULL)
       filter(LNNVL("PARTNUMBER"=:B1))
  11 - filter((0=:B4 AND 0=:B2))
  13 - access("MEMBER_ID" IS NULL AND "PARTNUMBER" IS NULL)
       filter(("PARTNUMBER" IS NULL AND LNNVL("PARTNUMBER"=:B1) AND LNNVL("MEMBER_ID"=:B3)))

The full tablescan didn’t appear – but it wasn’t eliminated by concatenation, it was eliminated by the “new” 12.2 variant: “OR EXPANSION”. In this case the net effect is remarkably similar – we still have filter operations comparing :b2 and :b4 with zero, and we still have a scattering of lnnvl() function calls being used to discard rows we’ve already accessed, but the pattern is slightly different and we have a union all.

This change prompted me to go back to testing with just the original index (member_id, partnumber) … which took me back to the full tablescan until I added the hint /*+ or_expand */ to the query to get the following plan:


------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |       |       |    29 (100)|          |
|   1 |  VIEW                                    | VW_ORE_37A7142B |     4 |    52 |    29   (4)| 00:00:01 |
|   2 |   UNION-ALL                              |                 |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID           | CATENTRY        |     1 |    10 |     2   (0)| 00:00:01 |
|*  4 |     INDEX UNIQUE SCAN                    | CAT_I0          |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    FILTER                                |                 |       |       |            |          |
|   6 |     TABLE ACCESS BY INDEX ROWID BATCHED  | CATENTRY        |     1 |    10 |     2   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                    | CAT_I0          |     1 |       |     1   (0)| 00:00:01 |
|*  8 |    FILTER                                |                 |       |       |            |          |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED  | CATENTRY        |     1 |    10 |     2   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                    | CAT_I0          |     1 |       |     1   (0)| 00:00:01 |
|* 11 |    FILTER                                |                 |       |       |            |          |
|* 12 |     TABLE ACCESS FULL                    | CATENTRY        |     1 |    10 |    23   (5)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("MEMBER_ID"=:B3 AND "PARTNUMBER"=:B1)
   5 - filter(0=:B4)
   7 - access("MEMBER_ID" IS NULL AND "PARTNUMBER"=:B1)
       filter(("PARTNUMBER"=:B1 AND LNNVL("MEMBER_ID"=:B3)))
   8 - filter(0=:B2)
  10 - access("MEMBER_ID"=:B3 AND "PARTNUMBER" IS NULL)
       filter(LNNVL("PARTNUMBER"=:B1))
  11 - filter((0=:B4 AND 0=:B2))
  12 - filter(("PARTNUMBER" IS NULL AND "MEMBER_ID" IS NULL AND LNNVL("PARTNUMBER"=:B1) AND
              LNNVL("MEMBER_ID"=:B3)))

The plan shows “or expansion”, and highlights the most significant difference between concatenation and expansion – concatenation requires indexed access paths in all branches, or-expansion doesn’t.

At first sight this plan, with its full tablescan at operation 12, might seem like a bad idea; but check operation 11, the guarding filter: the tablescan occurs only if both :b2 and :b4 are zero. Perhaps that special condition is never supposed to appear; perhaps when it does the query is going to do a lot of work whether or not you can use an index. The fact that you can now handle the original problem without adding or altering existing indexes – provided you can inject this or_expand hint – may be of significant benefit. (Of course, being able to recreate the original index with the columns in the reverse order would even avoid the necessity of worrying about the hint.)

tl;dr

Applications that push “if-then-else” decisions into the SQL and down to the optimizer are a pain in the backside; the performance problems they produce can sometimes be bypassed by the addition of extra indexes that might give you plans (possibly hinted) that report the use of the concatenation operation. In 12cR2 the optimizer has an improved strategy for damage limitation, “Cost-based Or Expansion”, that can produce very similar effects without the addition of extra indexes. These plans will report union all operations, referencing views with names like VW_ORE_xxxxxxxxx.

When you next upgrade you may find a few cases where you can get rid of indexes that were only created to work around defective application coding strategies. You may also want to hunt down any code where you’ve added use_concat hints and see if they can be removed, or if they should be replaced by or_expand, since the former hint will disable the latter feature.

 

Setting Oracle Session Parameters for Specific Process Scheduler Processes

David Kurtz - Fri, 2018-03-02 06:01
This note describes a mechanism for setting initialisation parameters for specific processes run on the process scheduler. I will demonstrate it in relation to nVision, but it has general application in PeopleSoft.
A table is used to hold metadata that describes which settings are applied to which processes. A trigger on the process scheduler request table PSPRCSRQST reads that data and alters the session settings. This approach is easier to adjust and understand than static PL/SQL code in a trigger.
NB: The PSAESRV server process is used to run application engine programs in the process scheduler by default.  It should only be used for very short-lived programs and should usually be deconfigured. It maintains a persistent connection to the database; therefore, session parameters set for one application engine program will carry forward into subsequent programs run on the same server process, with the potential for unintended and unpredictable results. All other processes establish their own database session that ends when the process terminates. This trigger mechanism can still be used to set parameters for some processes that run on PSAESRV, but the default parameter value should then be set for all other application engine processes.
Metadata
The table that holds the metadata should be defined in Application Designer.
CREATE TABLE PS_PRCS_SESS_PARM
(PRCSTYPE   VARCHAR2(30)  NOT NULL,
 PRCSNAME   VARCHAR2(12)  NOT NULL,
 OPRID      VARCHAR2(30)  NOT NULL,
 RUNCNTLID  VARCHAR2(30)  NOT NULL,
 PARAM_NAME VARCHAR2(50)  NOT NULL,
 PARMVALUE  VARCHAR2(128) NOT NULL)
TABLESPACE PTTBL
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PCTUSED 80
/
CREATE UNIQUE INDEX PS_PRCS_SESS_PARM ON PS_PRCS_SESS_PARM
(PRCSTYPE, PRCSNAME, OPRID, RUNCNTLID, PARAM_NAME)
TABLESPACE PSINDEX
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PARALLEL NOLOGGING
/
ALTER INDEX PS_PRCS_SESS_PARM NOPARALLEL LOGGING
/
I will demonstrate this mechanism for nVision. The metadata is simply inserted into the table by script.
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_policy','auto');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_limit','4');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_level','150');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_min_time_threshold','1');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', '_optimizer_skip_scan_enabled','FALSE');
Here we have 5 session parameters that will apply to all nVision reportbooks, but which I don't want to apply to the rest of the system.
PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
-------------------- ------------ ---------- --------------- ------------------------------ --------------------
nVision-ReportBook   RPTBOOK                                 parallel_degree_policy         auto
nVision-ReportBook   RPTBOOK                                 parallel_degree_limit          4
nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
nVision-ReportBook   RPTBOOK                                 parallel_min_time_threshold    1
nVision-ReportBook   RPTBOOK                                 _optimizer_skip_scan_enabled   FALSE
The objective is to use limited parallelism only in the nVision reporting, without decorating the underlying ledger tables with a degree of parallelism:
  • parallel_degree_policy=auto enables the new 12c automatic degree of parallelism calculation and statement queuing.
  • parallel_min_time_threshold is set to 1 second. The default is 10. Statements whose estimated runtime is greater than or equal to this value will be considered for automatic degree of parallelism.
  • parallel_degree_limit=4 restricts the automatic degree of parallelism to 4, to prevent any one statement using excessive parallelism.
  • parallel_degree_level=150 scales up the automatic degree of parallelism calculation, but within the parallel_degree_limit. See Kerry Osborne's blog 12c – parallel_degree_level (control for auto DOP).
  • _optimizer_skip_scan_enabled=FALSE disables index skip scan to promote the use of smart full scan and Bloom filtering. It is recommended for engineered systems in Advice for the PeopleSoft DBA. Skip scan can prevent the optimizer choosing a smart full scan, so it makes sense to limit the setting to just nVision.

I can also specify a parameter that will only be set when the reportbook is run by a particular operator with a particular run control.
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK','NVOPR','NVSRBK_2', 'parallel_degree_level','200');
The specific setting for one particular operator ID and run control takes precedence over the generic setting for all reportbooks. In this case, I scale up the degree of parallelism further for a particular reportbook.
PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
-------------------- ------------ ---------- --------------- ------------------------------ --------------------
nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
nVision-ReportBook   RPTBOOK      NVOPR      NVSRBK_2        parallel_degree_level          200
Trigger
When a process starts, the first thing it does is update its own status to 7 to indicate that it is processing. This is another example of a trigger created on that transition to inject behaviour at the start of a PeopleSoft process. This trigger reads the metadata and applies the settings with an ALTER SESSION command. The process type, name, operator ID and run control attributes must exactly match the process request, but a blank space is treated as a wildcard. Underscore parameters must be delimited in double quotes.
CREATE OR REPLACE TRIGGER sysadm.set_prcs_sess_parm
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = 7
  AND old.runstatus != 7
  AND new.prcstype != 'PSJob')
DECLARE
  l_cmd   VARCHAR2(100 CHAR);
  l_delim VARCHAR2(1 CHAR);
BEGIN
  -- find the matching parameter rules: non-blank columns sort first, so the most
  -- specific rule wins for each parameter (priority = 1)
  FOR i IN (
    WITH x AS (
      SELECT p.*
      ,      row_number() OVER (PARTITION BY param_name
                                ORDER BY NULLIF(prcstype, ' ') NULLS LAST,
                                         NULLIF(prcsname, ' ') NULLS LAST,
                                         NULLIF(oprid, ' ') NULLS LAST,
                                         NULLIF(runcntlid, ' ') NULLS LAST
                               ) priority
      FROM   PS_PRCS_SESS_PARM p
      WHERE  (p.prcstype  = :new.prcstype  OR p.prcstype  = ' ')
      AND    (p.prcsname  = :new.prcsname  OR p.prcsname  = ' ')
      AND    (p.oprid     = :new.oprid     OR p.oprid     = ' ')
      AND    (p.runcntlid = :new.runcntlid OR p.runcntlid = ' ')
    )
    SELECT * FROM x
    WHERE  priority = 1
  ) LOOP
    -- underscore (hidden) parameters must be double-quoted in the ALTER SESSION command
    IF SUBSTR(i.param_name,1,1) = '_' THEN
      l_delim := '"';
    ELSE
      l_delim := '';
    END IF;

    IF NULLIF(i.parmvalue,' ') IS NOT NULL THEN
      dbms_output.put_line('Rule:'||NVL(NULLIF(i.prcstype,' '),'*')
        ||'.'||NVL(NULLIF(i.prcsname,' '),'*')
        ||':'||NVL(NULLIF(i.oprid,' '),'*')
        ||'.'||NVL(NULLIF(i.runcntlid,' '),'*')
        ||':'||i.param_name||'='||i.parmvalue);

      l_cmd := 'ALTER SESSION SET '||l_delim||i.param_name||l_delim||'='||i.parmvalue;
      dbms_output.put_line('PI='||:new.prcsinstance||':'||:new.prcstype||'.'||:new.prcsname||':'
        ||:new.oprid||'.'||:new.runcntlid||':'||l_cmd);
      EXECUTE IMMEDIATE l_cmd;
    END IF;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    -- swallow any error so that a bad parameter setting never stops the process from starting
    dbms_output.put_line(sqlerrm);
END;
/
The trigger script set_prcs_sess_parm.sql can be downloaded from my miscellaneous PeopleSoft scripts repository on GitHub.
Testing
The trigger can be tested by updating the process scheduler request table in SQL*Plus, but be careful to roll back the update afterwards rather than committing it. The trigger writes debug information to the server output, which can be seen in SQL*Plus, showing the rule being used and the ALTER SESSION command generated. However, this output is not captured in any PeopleSoft log when the trigger is fired by a PeopleSoft process.
set serveroutput on 
update psprcsrqst set runstatus = 7 where runstatus != 7
and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_2' and oprid = 'NVOPR' and rownum = 1;
rollback;

Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
Rule:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:parallel_degree_level=200
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_level=200
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_limit=4
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_policy=auto
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_min_time_threshold=1
In the above example, the specific rule for NVSRBK_2 was applied setting PARALLEL_DEGREE_LEVEL to 200, whereas in the next example the generic setting of 150 is applied to NVSRBK_1.
update psprcsrqst set runstatus = 7 where runstatus != 7
and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_1' and oprid = 'NVOPR' and rownum = 1;
rollback;

Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_level=150
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_level=150
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_limit=4
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_policy=auto
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_min_time_threshold=1

Oracle GoldenGate Application Adapters compatible with trails release 12.3

Tom Kyte - Fri, 2018-03-02 01:26
Hi, Is there a release available of Oracle GoldenGate Application Adapters that is compatible with trail file format 12.3 (for Linux x86-64)? On OTN, GoldenGate Application Adapters release 12.2 is downloadable, however, this release requires set...
Categories: DBA Blogs

Datapump SQL monitoring with SQL (not PL/SQL)

Tom Kyte - Fri, 2018-03-02 01:26
I'm on 11.2 and 12.1 and would like to monitor datapump sessions with pure SQL (I am not allowed to create objects on the databases). I can monitor using: <code> select ss.sid , s.job_name , j.state --, j.job_mode --, j.degree , dbms_rcvm...
Categories: DBA Blogs

Node.js vs PL/SQL

Tom Kyte - Fri, 2018-03-02 01:26
We currently have a read-only application written using MOD PL/SQL. Our agile development group is recommending using angular JS for the AX/AI development, and NodeJS on a Happi framwork. the database is currently Oracle 11.2.0.4. We hope to upgrade ...
Categories: DBA Blogs

Timestamp of oracle and ORA-30088: datetime/interval precision is out of range

Tom Kyte - Fri, 2018-03-02 01:26
Hi Tom I am trying to store timestamp as below. Timstmp(12) YYYY-MM-DD-hh.mm.ss.nnnnnnnnnnnn So i am trying to create the table as below. <code>create table t23 (ts0 timestamp(0) , ts3 timestamp(3) , ts6 timestamp(6) ...
Categories: DBA Blogs

Conditional compilation in query

Tom Kyte - Fri, 2018-03-02 01:26
I am trying to use conditional compilation in a sql query of a cursor. Below is the code <code>create or replace procedure p_env_check(i_env in number, o_cur out sys_refcursor) is begin open o_cur for...
Categories: DBA Blogs

Creating RMAN backups using Commvault

Amis Blog - Thu, 2018-03-01 09:35

One of our customers decided to migrate to Commvault for creating their backups. They started with OS file backups, and now they also want to create backups of their Oracle databases with Commvault.
Commvault does not replace RMAN. In fact, Commvault generates RMAN commands/scripts which are run against the Oracle database, and it can write the backups to a storage server.

In this blog article I would like to describe a possible way to make RMAN backups with Commvault. The method described here makes full, incremental and archivelog backups to external storage, and also makes an Oracle Recommended Backup in the local FRA of the database. So in fact a twofold backup approach is used here: the first is a classical full and incremental backup approach to external storage; the other is an Oracle Recommended Backup approach to the “local” FRA.

We implement this approach with two Commvault jobs:
1. Full Incremental. This one runs once a day.
2. Incremental. This one runs every 4 hours.
The terms used in Commvault are a little different from the ones used in Oracle, which can confuse the “standard” Oracle DBA.

The full incremental backup job we execute consists of the following steps:
1. Full backup: to external commvault storage

a. full backup = incremental level 0 backup to external commvault storage
b. autobackup controlfile to external commvault storage

2. Oracle Recommended backup: to FRA on local server

a. backup incremental level 1 for recover of copy: to FRA
b. recover of copy of database: in FRA
c. autobackup controlfile: to FRA

3. backup of archivelog: to external commvault storage

a. backup of archivelog: to external commvault storage
b. autobackup controlfile: to external commvault storage

The incremental backup job consists of:
1. Incremental level 1 backup: to external commvault storage

a. incremental level 1 backup to external commvault storage
b. autobackup controlfile to external commvault storage

2. Oracle Recommended backup: to FRA on local server

a. backup incremental level 1 for recover of copy: to FRA
b. recover of copy of database: in FRA
c. autobackup controlfile: to FRA

3. backup of archivelog: to external commvault storage

a. backup of archivelog: to external commvault storage
b. autobackup controlfile: to external commvault storage
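Purely as an illustration of what the Oracle Recommended backup step (2a-2c in the lists above) amounts to in RMAN terms, a sketch is shown below. This is not the literal script Commvault generates; the COPY_DB tag simply matches the backup tag configured in Commvault later in this article.

# sketch of step 2: incrementally updated image copy maintained in the FRA
RUN {
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'COPY_DB' DATABASE;
  RECOVER COPY OF DATABASE WITH TAG 'COPY_DB';
}
# step 2c: the controlfile autobackup follows automatically when
# CONFIGURE CONTROLFILE AUTOBACKUP ON has been set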

 

Creating a backup job in Commvault:

The following steps should be executed to implement this backup strategy:

First the Commvault console should be started in a browser. That should be done by opening the following URL:

http://[servername].[domainname]/console/

You then should provide your credentials in order to login.

step 1: configure storage policies

Go to Client Computer Groups, select Oracle and then select the server you want, in this case puhora0004. Then select Oracle under puhora0004. Click with the right mouse button on Oracle (the branch under puhora0004) and select Properties:

The next screen will be shown:

The DATA Storage Policy and Log Storage Policy should be entered. The system engineer responsible for the Commvault system had already made a storage policy for Oracle named SP_NDC1_Oracle, so we entered this storage policy in these fields. See the screenprint above. Click OK.

Back in the last screen, select the database under the puhora0004 - Oracle branch. In this example it is the PRIMRP1P database.

Click with the right mouse button on this database and select Properties. Click on the Storage Device tab. Then, in the Storage Policy field, select SP_NDC1_Oracle.

Click OK.

In the tab on the right select default:

Click with the right mouse button and select Properties. Select tab Storage Device. Select Storage Policy: SP_NDC1_Oracle:

 

Select the Advanced tab and, if necessary, the Backup Arguments sub tab.
Enter the Backup Tag field. In this example this is: COPY_DB:

 

Select sub tab Options and select Merge Incremental Image Copies:
(this one switches on Oracle Recommended Backup)

 

Choose tab Content. Select Selective Online Full:

In the Commvault version we used there was a bug that caused the storage policy under Oracle to disappear when you configured the storage policy on one of the underlying databases. So after all the storage policies have been applied, check that they are still there.

 

Step 2: Create a schedule policy

In the left pane go to Policies and then choose Schedule Policies. With the right mouse button select New Schedule Policy:

Provide your New Schedule Policy with a name.

Click on Agent type, select Oracle. And then click on the blue colored word Select:

Click on Add:
Select Full:

Choose tab Schedule Pattern: Fill this window for example with the following information:

Click OK.

You have now made a schedule for the full backup.
Choose your Start Time with care. Your incremental backup should be finished before you start a full backup: if the incremental backup is still running at the scheduled start time of your full backup, the full backup will not start at all. So make sure your incremental backup will not be running at the Start Time you choose for the full backup.

Next you make a schedule for the incremental backup.

Click Add, select Incremental:

Choose Schedule Pattern. Fill this window for example with the following information:

Click OK.
Next, click on tab Associations:

Under the Oracle branch, look up the correct servers and databases. In this example these are the puhora0004, puhora0005 and puthkd17 servers.
Under each requested database, select the option default.

Click OK.

We have now made two scheduled jobs (full and incremental) that create backups of 3 databases running on 3 different servers.

You could also decide to make a schedule which creates backups of all databases on 1 particular server.

How to view your backup history

You can view which backups have been made of your database with Commvault. To do that, select in the left pane Client Computer Groups – Oracle – [servername – in our example puhora0004] – Oracle – [database name – in our example PRIMRP1P]. Click on PRIMRP1P with the right mouse button and select View and then Backup History.

Click OK on the following screen called Backup History Filter. Then you will see an overview of the backups that were made on this database.

You can view the status of the job (Completed or Failed) and the type of backup (Full or Incremental). The Start Time, End Time and Duration can also be viewed on this screen.

It is also possible to view the RMAN log of a backup job. You can do this as follows:
Click with your right mouse button on the backup you want to see and choose View RMAN log:

Then you can read the log of the RMAN backup:

 

The post Creating RMAN backups using Commvault appeared first on AMIS Oracle and Java Blog.

Mview creation from view

Tom Kyte - Thu, 2018-03-01 07:06
The MV creation using a view is forcing to use minimum 1 table even if that table is a dummy. Creating MV directly on view is failing with ORA error ORA-12015. Further if I were to use DUAL instead of dummy table it gives a different ORA error ORA...
Categories: DBA Blogs

calculate how many minutes employee worked for a day in hourly breakup

Tom Kyte - Thu, 2018-03-01 07:06
I am having data like Empcode attdate intime outtime 12345 03-JAN-2018 00:00:00 03-JAN-2018 10:56:00 03-JAN-2018 16:44:00 12345 03-JAN-2018 00:00:00 03-JAN-2018 20:00:00 03-JAN-2018 23:...
Categories: DBA Blogs

Getting started with Performance Diagnosis

Tom Kyte - Thu, 2018-03-01 07:06
Mostly we don't get database performance problem, so never felt a need to learn much about performance. Oracle take good care of itself. But one day db server got restarted, then i got performance problem calls from users. At this point i got bli...
Categories: DBA Blogs

Database hang and we cannot switch logfile or shutting down the DB

Tom Kyte - Thu, 2018-03-01 07:06
Hi Experts, During the issue time database hang and we cannot switch logfile or shutting down using IMMEDIATE. All the Redo logfiles are showing status ACTIVE and still its showing ACTIVE after 1 Week. \ Why the Redo logfile is still ACTIVE...
Categories: DBA Blogs
