Feed aggregator

Oracle Security Training In York, UK, 2018

Pete Finnigan - Sun, 2018-06-03 05:46
I have just updated our public training dates page to add two new dates for Oracle Security training classes that I will be running here in York, UK. We now have 4 dates covering three available classes. These are as....[Read More]

Posted by Pete On 02/06/18 At 06:54 PM

Categories: Security Blogs

Demo: Oracle Database Continuous Query Notification in Node.js

Christopher Jones - Sat, 2018-06-02 08:32

Native Oracle Database Continuous Query Notification (CQN) code has landed in the node-oracledb master branch on GitHub. If you want to play with it, but don't want to wait for the next binary node-oracledb release, you can compile node-oracledb yourself and play with this demo.


Some of you may already be using CQN via its PL/SQL APIs. The new, native support in node-oracledb makes it all so much nicer. Check out the development documentation for connection.subscribe() and the 'user manual'. There are a couple of examples cqn1.js and cqn2.js available, too.

CQN allows JavaScript methods to be called when database changes are committed by any transaction. You enable it in your node-oracledb app by registering a SQL query. CQN has two main modes: object-level and query-level. The former sends notifications (i.e. calls your nominated JavaScript method) when changes are made to database objects used in your registered query. The query-level mode only sends notifications when database changes are made that would impact the result set of the query, e.g. the WHERE clause is respected.
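As a sketch of what the registration looks like (hedged, since this API is still in development, and the query here is invented), query-level mode is requested with a quality-of-service flag on the subscription:

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable WHERE message LIKE 'URGENT%'",
  qos: oracledb.SUBSCR_QOS_QUERY  // query-level; omit for object-level mode
});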

If you're not using CQN, then you might wonder when you would. For infrequently updated tables you can get CQN to generate notifications on any data or table change. I can see how query-level mode might be useful for proactive auditing to send alerts when an unexpected, but valid, value is inserted or deleted from a table. For tables with medium levels of updates, CQN allows grouping of notifications by time, which is a way of reducing load by preventing too many notifications being generated in too short a time span. But, as my colleague Dan McGhan points out, if you know the table is subject to a lot of change, then your apps will be better off simply polling the table and avoiding any CQN overhead. Note that CQN was designed to be used for relatively infrequently updated tables.
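In the development documentation that time grouping appears as grouping options on the subscription; a rough, untested sketch:

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable",
  groupingClass: oracledb.SUBSCR_GROUPING_CLASS_TIME,  // group notifications by time
  groupingValue: 60,                                   // at most one batch per 60 seconds
  groupingType: oracledb.SUBSCR_GROUPING_TYPE_SUMMARY  // a single summary notification per batch
});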

DEMO APP

I've thrown together a little app that uses CQN and Socket.IO to refresh a message notification area on a web page. It's really just a simple smush of the Socket.IO intro example and the node-oracledb CQN examples.

There is a link to all the code in the next section of this post; I'll just show snippets inline here. I'm sure Dan will update his polished 'Real-time Data' example soon, but until then here is my hack code. It uses Node.js 8's async/await style - you can rewrite it if you have an older Node.js version.

One thing about CQN is that the node-oracledb computer must be resolvable by the Database computer; typically this means having a fixed IP address which may be an issue with laptops and DHCP. Luckily plenty of other cases work too. For example, I replaced my Docker web service app with a CQN example and didn't need to do anything with ports or identifying IP addresses. I'll leave you to decide how to run it in your environment. There are CQN options to set the IP address and port to listen on, which may be handy.
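Those options go on the subscription too; something like this (the option names are from the development documentation, and the address and port are invented, so double-check them before relying on this):

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable",
  ipAddress: '192.0.2.10',  // fixed address that the database can reach
  port: 9999                // listen on a fixed port instead of an ephemeral one
});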

The demo premise is a web page with a message notification area that always shows the five most recent messages in a database table. The messages are being inserted into that table by other apps (I'll just use SQL*Plus to do these inserts) and the web page needs to be updated with them only when there is a change. I'm just using dummy data and random strings.

To see how it fits together, look at this no-expense-spared character graphic showing the four components: SQL*Plus, the database, the browser and the Node.js app:

 SQL*PLUS:                  DATABASE:

 insert into msgtable >-------> msgtable >-------CQN-notification------------------+
 commit                                                                            |
                                                                                   |
 BROWSER:       <-------+   NODE.JS APP:                                           |
                        |                                                          |
 | 5 Message |          |   URL '/' serves index.html                              |
 | 4 Message |          |                                                          |
 | 3 Message |          |   CQN:                                                   |
 | 2 Message |          |     subscribe to msgtable with callback myCallback       |
 | 1 Message |          |                                                          |
                        |   myCallback: <------------------------------------------+
                        |     query msgtable
                        +-----------<  send rows to browser to update the DOM

The app (bottom right) serves the index page to the browser. It connects to the DB and uses CQN to register interest in msgtable. Any data change in the table from SQL*Plus (top left) triggers a CQN notification from the database to the application, and the callback is invoked. This callback queries the table and uses Socket.IO to send the latest records to the browser, which updates the index.html DOM.

The first thing is to get your DBA (i.e. log in as the SYSTEM user) to give you permission to get notifications:

GRANT CHANGE NOTIFICATION TO cj;

We then need a table that our app will get notifications about, and then query to get the latest messages:

CREATE TABLE cj.msgtable (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  message VARCHAR(100)
);

The column K is an Oracle Database 12c identity column that will automatically get a unique number inserted whenever a new message is inserted. In older database versions you would create a sequence and trigger to do the same.
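For reference, the pre-12c equivalent would be something like this (an untested sketch; the sequence and trigger names are invented):

CREATE SEQUENCE cj.msgtable_seq;

CREATE OR REPLACE TRIGGER cj.msgtable_bi
BEFORE INSERT ON cj.msgtable
FOR EACH ROW
BEGIN
  :NEW.k := cj.msgtable_seq.NEXTVAL;  -- 11g syntax; older versions need SELECT ... INTO
END;
/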

The little SQL script I use to insert data (and trigger notifications) is:

INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
COMMIT;

The Node.js app code is more interesting, but not complex. Here is the code that registers the query:

conn = await oracledb.getConnection();

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable"
});
console.log("CQN subscription created");

Although CQN has various options to control its behavior, here I keep it simple - I just want to get notifications when any data change to msgtable is committed.

When the database sends a notification, the method 'myCallback' will get a message, the contents of which will vary depending on the subscription options. Since I know the callback is invoked when any table data has changed, I ignore the message contents and go ahead and query the table. The rows are then stringified and, by the magic of Socket.IO, sent to the web page:

async function myCallback(message) {
  let rows = await getData();                // query the msgtable
  io.emit('message', JSON.stringify(rows));  // update the web page
}

The helper function to query the table is obvious:

async function getData() {
  let sql = `SELECT k, message
             FROM msgtable
             ORDER BY k DESC
             FETCH NEXT :rowcount ROWS ONLY`;
  let binds = [5];  // get 5 most recent messages
  let options = { outFormat: oracledb.OBJECT };

  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds, options);
  await conn.close();
  return result.rows;
}

At the front end, the HTML for the web page contains a 'messages' element that is populated by JQuery code when a message is received by Socket.IO:

<ul id="messages"></ul>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script>
<script src="https://code.jquery.com/jquery-3.3.1.js"></script>
<script>
  $(function () {
    var socket = io();
    socket.on('message', function(msg){
      $('#messages').empty();
      $.each(JSON.parse(msg), function(idx, obj) {
        $('#messages').append($('<li>').text(obj.K + ' ' + obj.MESSAGE));
      });
    });
  });
</script>

You can see that the JSON string received from the app server is parsed and the K and MESSAGE fields of each row object (corresponding to the table columns of the same names) are inserted into the DOM in an unordered list.

That's it.

DEMO IN ACTION

To see it in action, extract the code and install the dependencies:

cjones@mdt:~/n/cqn-sockets$ npm install
npm WARN CQN-Socket-Demo@0.0.1 No repository field.
npm WARN CQN-Socket-Demo@0.0.1 No license field.

added 86 packages in 2.065s

I cheated a bit there and didn't show node-oracledb compiling. Once a production release of node-oracledb is made, you should edit the package.json dependency to use its pre-built binaries. Until then, node-oracledb code will be downloaded and compiled - check the instructions for compiling.
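Once that release exists, the package.json dependency might end up looking something like this (the version number is a guess):

"dependencies": {
  "oracledb": "^2.3.0"
}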

Edit server.js and set your database credentials - or set the referenced environment variables:

let dbConfig = {
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING,
  events: true  // CQN needs events mode
}

Then start the app server:

cjones@mdt:~/n/cqn-sockets$ npm start

> CQN-Socket-Demo@0.0.1 start /home/cjones/n/cqn-sockets
> node server.js

CQN subscription created
Listening on http://localhost:3000

Then load http://localhost:3000/ in a browser. Initially the message pane is blank - I left bootstrapping it as an exercise for the reader.
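A minimal bootstrap (my suggestion, not part of the demo code) would be to send the current rows to each browser as its socket connects, reusing the getData() helper shown earlier:

// server.js: push the latest rows to a newly connected browser so the
// message pane isn't empty until the first table change arrives.
io.on('connection', async (socket) => {
  socket.emit('message', JSON.stringify(await getData()));
});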

Start SQL*Plus in a terminal window and create a message:

SQL> INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
SQL> COMMIT;

Every time data is committed to msgtable, the message list on the web page is automatically updated.

If you don't see messages, review Troubleshooting CQN Registrations. The common problems will be network related: the node-oracledb machine must be resolvable, the port must be open etc.

Try it out and let us know how you go. Remember you are using development code that just landed, so there may be a few rough edges.

PostgreSQL 11 : Procedures are coming

Yann Neuhaus - Sat, 2018-06-02 05:38

Reading about the new features of the upcoming PostgreSQL 11, I see that procedures will be implemented. Why is this so important? Until now, only functions have been available in PostgreSQL. One could say that a function is just a procedure that returns a value, and that's true, but you cannot manage transactions in a function. To better understand, let's do a quick demonstration.
Note that I am using a development snapshot of PostgreSQL 11.

[postgres@pg_essentials_p1 bin]$ ./psql
Null display is "NULL".
psql (11devel)
Type "help" for help.

(postgres@[local]:5432) [postgres] > select version();
                                                  version
--------------------------------------------------------------------------------
---------------------------
 PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623
 (Red Hat 4.8.5-4), 64-bit
(1 row)

(postgres@[local]:5432) [postgres] >

For the demonstration I have a table emp

(postgres@[local]:5432) [test] > table emp;
 id | name
----+------
  1 | toto
  2 | Edge
(2 rows)

(postgres@[local]:5432) [test] >

And let’s say I want to insert data in my table using following function

CREATE or replace FUNCTION fun_insert_emp(id_emp int,  emp_name varchar(20))
 RETURNS  void AS $$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
commit;
END;
$$
LANGUAGE PLPGSQL;

We can describe the function like this

(postgres@[local]:5432) [test] >  \sf  fun_insert_emp                           
CREATE OR REPLACE FUNCTION public.fun_insert_emp(id_emp integer, emp_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
commit;
END;
$function$
(postgres@[local]:5432) [test] >

To call a function, we use a SELECT, as for any built-in function. So let's try to insert a new employee with the function:

(postgres@[local]:5432) [test] > select fun_insert_emp(3,'New Emp');
ERROR:  invalid transaction termination
CONTEXT:  PL/pgSQL function fun_insert_emp(integer,character varying) line 4 at COMMIT
(postgres@[local]:5432) [test] >

It seems that the word COMMIT is causing trouble. OK, let's recreate the function without the COMMIT:

(postgres@[local]:5432) [test] >  \sf  fun_insert_emp                           
CREATE OR REPLACE FUNCTION public.fun_insert_emp(id_emp integer, emp_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
END;
$function$
(postgres@[local]:5432) [test] >

And let’s call again the function. We can see that the row was inserted

(postgres@[local]:5432) [test] > select fun_insert_emp(3,'New Emp');             
fun_insert_emp
----------------

(1 row)

(postgres@[local]:5432) [test] > table emp;
 id |  name
----+---------
  1 | toto
  2 | Edge
  3 | New Emp
(3 rows)

(postgres@[local]:5432) [test] >

But the main problem we saw is that a function in PostgreSQL does not support any transaction instruction (BEGIN…COMMIT, BEGIN…ROLLBACK). This means that in a function I cannot use instructions that commit or roll back based on some condition.
But PostgreSQL 11 will support procedures. Let's do the demonstration again with a procedure.
Let’s first create the procedure

(postgres@[local]:5432) [test] > create or replace procedure proc_insert_emp (id_emp int,  emp_name varchar(20))
test-# as $$
test$# Begin
test$# insert into emp (id,name) values (id_emp,emp_name);
test$# commit;
test$# end ;
test$# $$
test-# LANGUAGE PLPGSQL;
CREATE PROCEDURE
(postgres@[local]:5432) [test] >

And let’s insert a new row in table emp using proc_insert_emp

(postgres@[local]:5432) [test] > call  proc_insert_emp(4,'Brice');
CALL

(postgres@[local]:5432) [test] > table emp;
 id |  name
----+---------
  1 | toto
  2 | Edge
  3 | New Emp
  4 | Brice
(4 rows)

(postgres@[local]:5432) [test] >

We can see that the row was inserted. But the main difference is the support of transaction control inside the procedure (the COMMIT above), and this will definitely change life for developers.
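To illustrate why this matters, here is a minimal sketch (untested, with an invented validation rule) of a procedure that commits or rolls back depending on a condition, which a function cannot do:

create or replace procedure proc_insert_emp_checked (id_emp int, emp_name varchar(20))
as $$
begin
  insert into emp (id,name) values (id_emp,emp_name);
  if emp_name is null or emp_name = '' then
    rollback;  -- reject empty names: undo the insert
  else
    commit;    -- keep the row
  end if;
end;
$$
LANGUAGE PLPGSQL;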


Cet article PostgreSQL 11 : Procedures are coming est apparu en premier sur Blog dbi services.

Application Development Services Update

Application Container Cloud Service: Secure Environment Variables

Environment variables can now be secured by using the user interface or the deployment.json file. The value of the secure...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Installing MAMP to play with PHP, MySQL and OpenFlights

Yann Neuhaus - Sat, 2018-06-02 02:30

You may wonder what I'm doing with those technologies, which are completely different from what I usually work with. I'll give the details in a future blog post, but the idea is to give a short introduction to databases to students at https://powercoders.org/, a coding academy for refugees in Switzerland. They install MAMP (My Apache – MySQL – PHP) during their curriculum for website development, and I'll then use this environment to teach data modeling and SQL basics. Thus, I have to look at MAMP, PHP and MySQL for the first time… And I decided to load the OpenFlights open data to play with.
That explains the title.

So MAMP is like LAMP (Linux+Apache+PHP+MySQL) but with an M for macOS – and also Windows (W being an upside-down M, after all). Let's install that on my laptop. I download it from https://www.mamp.info/en/downloads/ and run the .exe; all is straightforward, and the installer notifies me that the installation will be completed after a reboot.

What? Reboot? Seriously, we are in 2018, that’s Windows 10, I refuse to reboot to install a simple Apache server!

This bundle is very easy to use: a simple window to start and stop the servers (Apache and MySQL), a setup menu to configure them (I keep the defaults), and a link to the start page. All of it is installed under C:\MAMP (you can change that; I just kept the default settings). The first time you start the servers, the Windows Firewall configuration prompt is raised and you have to accept it:


With all defaults (Apache on port 80) my web server pages are on http://localhost (serving the files in C:\MAMP\htdocs) and administration page is at http://localhost/MAMP/
The MySQL administration page (phpMyAdmin) is at http://localhost/MAMP/index.php?page=phpmyadmin. It seems that, at least by default, I don’t need a password to go to the admin pages.

display_errors

I’ll write some PHP and because it’s the first time in my life, I will have some errors. With the default configuration, Apache just raises and Error 500 which does not help me a lot for debugging. This configuration is ok for production because displaying errors may give clues to hackers. But I’m there to play and I want to see the error messages and line numbers.

I have to set display_errors=on for that. The current setting is displayed in http://localhost/MAMP/index.php?language=English&page=phpinfo#module_core and I can change it in C:\MAMP\conf\php7.2.1\php.ini; after a restart of the Apache server I can see full error messages:

Warning: mysqli_real_connect(): (28000/1045): Access denied for user 'root'@'localhost' (using password: YES) in C:\MAMP\htdocs\index.php on line 123
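For reference, the change in that php.ini is a single line:

display_errors = On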

Little fix

But now that I display the errors, I get this annoying message each time I try to do something in phpMyAdmin (which runs as PHP in the same Apache server):

MAMP "count(): Parameter must be an array or an object that implements Countable"


So this product, which is free but also has a 'PRO' version probably running the same code, is delivered with bad code, raising errors that were ignored. Don't tell me that it is just a warning: you will see that a closing parenthesis is in the wrong place; this is clearly a bug, and raising only a warning for it is quite bad.
My common sense tells me that we should set display_errors=on and test a few screens before releasing software. But that step has probably been skipped. Fortunately, the message is clear: line 615 of C:\MAMP\bin\phpMyAdmin\libraries\sql.lib.php.

The message is about count() not getting the correct parameter. Line 615 shows count($analyzed_sql_results['select_expr'] == 1) which is probably not correct because it counts a boolean expression. I've changed it to (count($analyzed_sql_results['select_expr']) == 1) as I suppose they want to count and compare to one.
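Side by side, here is the one-line patch I applied (comments added by me):

// before: counts a boolean expression, so the test is meaningless
count($analyzed_sql_results['select_expr'] == 1)

// after: counts the array, then compares the count to one
(count($analyzed_sql_results['select_expr']) == 1)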

Well, I’ve never written one line of PHP and I already hate it for its error handling weakness.

Load some data

I want to initialize the database with some data and I’ll use the OpenFlights database. I’ve downloaded and unzipped https://github.com/jpatokal/openflights/blob/master/sql/master.zip
I go to the unzipped directory and run MySQL:

cd /d D:\Downloads\openflights-master

Another little thing to fix here: the sql\create.sql and sql\load-data.sql files contain some lines starting with "\! echo", but this \! command (which runs a system command) exists on Linux and not in the Windows port of MySQL. We have to remove those lines before running the SQL scripts. I'm used to Oracle, where I can port my code and scripts from one platform to the other, and I was a bit surprised by this.

Ready to connect:

C:\MAMP\bin\mysql\bin\mysql test --port=3306 --host=localhost --user root --password
Enter password:

The MySQL connection parameters are displayed on http://localhost/MAMP/ including the password (root)


mysql> source sql\create.sql
Query OK, 0 rows affected (0.00 sec)
 
Connection id: 314
Current database: flightdb2
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.00 sec)
...

This has created the flightdb2 database, with openflights user, and 15 tables.

Now, if you are still in the unzipped directory, you can load data with the source sql\load-data.sql script which loads from the data\*.dat files

mysql> source sql\load-data.sql
Query OK, 6162 rows affected, 4 warnings (0.04 sec)
Records: 6162 Deleted: 0 Skipped: 0 Warnings: 4
 
Query OK, 7184 rows affected, 7184 warnings (0.12 sec)
Records: 7184 Deleted: 0 Skipped: 0 Warnings: 7184
 
Query OK, 67663 rows affected (0.53 sec)
Records: 67663 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 260 rows affected (0.01 sec)
Records: 260 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 12 rows affected (0.01 sec)
Records: 12 Deleted: 0 Skipped: 0 Warnings: 0

Query from PHP

So, for my first lines of PHP I’ve added the following to C:\MAMP\htdocs\index.php:

<?php
$user = 'openflights'; $password = '';
$db = 'flightdb2'; $host = 'localhost'; $port = 3306;

$conn = mysqli_init();
if (!$conn) {
    die("mysqli_init failed");
}
if (!$success = mysqli_real_connect($conn, $host, $user, $password, $db, $port)) {
    die("😠 Connection Error: " . mysqli_connect_error());
}
echo "😎 Connected to database <b>$db</b> as user <b>$user</b>.";
?>

<p>
Here are the Airports:
<table border=1>
<tr><th>IATA</th><th>Name</th></tr>

<?php
$result = $conn->query("select iata,name from airports where country='Greenland' order by 2");
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["iata"] . "</td><td>" . $row["name"] . "</td></tr>";
    }
} else {
    echo "0 results";
}
mysqli_close($conn);
?>
</table>

CaptureMAMP008

Here, I call mysqli_init(), set the credentials and call mysqli_real_connect() to get the connection handle. Then I run my query and display the result as an HTML table. Nothing difficult here. The main challenge is probably to keep the code maintainable.

In my opinion, and despite the small issues encountered, MAMP is a simple way to set up a development environment on Windows. Everything is there to introduce SQL and databases to developers, and to show how to call them from a programming language.


Cet article Installing MAMP to play with PHP, MySQL and OpenFlights est apparu en premier sur Blog dbi services.

storing password history

Tom Kyte - Fri, 2018-06-01 17:26
I have a table: create table user_password_history (user_cd varchar2(3), user_pass varchar2(30), created_date date); I want to keep the last 5 records of each user in this table. Kindly guide me how to achieve it. Thanks for your suppor...
Categories: DBA Blogs

The one case where we need an index rebuild - how to avoid it?

Tom Kyte - Fri, 2018-06-01 17:26
Having read Tom's books I know that in general there's no reason to rebuild indexes, and so we don't. However, we have one particular index which does benefit from rebuilding. Since it's a locally-partitioned index on a date-partitioned table we ...
Categories: DBA Blogs

CHAR Index (vs) NUMERIC Index

Tom Kyte - Fri, 2018-06-01 17:26
Hi Tom, In comparison to an Index on CHAR field and Index on NUMERIC field pls let me know which Index(whether CHAR or NUMERIC) would be faster. Thanx & Regards, Madhusudhana Rao.P
Categories: DBA Blogs

TNS-12541 : TNS: No Listener

Tom Kyte - Fri, 2018-06-01 17:26
Hi, In order to configure Standby database I have created one listener "Prod" & TNS Service "To_Stand" on Primary Side.Listener created successfully & its running.But the problem is TNS is not establishing connection with the database.Its saying "No...
Categories: DBA Blogs

Creating Dependent/Cascading Select Lists with Visual Builder

Shay Shmeltzer - Fri, 2018-06-01 17:13

A common requirement in applications is to have dependent lists (also known as cascading lists) – meaning that the value selected in one place influences the values that can be selected in another place. For example, when you select a state, we'll only show you cities in that state in the city list.

In the short demo video below, I'm showing you how to implement this cascading lists solution with the new Visual Builder Cloud Service.

The solution is quite simple:

You catch the event of a value change in the first list, and in the action chain that is invoked you set a filterCriterion on the second list. (See this entry for a quick introduction to filterCriterion).
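As a rough sketch (the attribute, variable, and value names here are invented, so treat the exact shape as an assumption), the value assigned to the SDP's filterCriterion might look like:

{
  "op": "$and",
  "criteria": [
    { "op": "$eq", "attribute": "stateId", "value": "{{ $variables.selectedState }}" }
  ]
}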

Since the list is connected to a ServiceDataProvider, there is no further action you need to take - the change to the SDP will be reflected in the UI component automatically.

Quick tips - make sure you reference the id of the column and that your operators are properly defined and enclosed in double quotes.


Categories: Development

SQLcl connect target depends on previous connection

Yann Neuhaus - Fri, 2018-06-01 05:36

I thought it was a bug for two reasons: first, because I don't like that my connect destination depends on the context, and second, because it is a different behavior than in SQL*Plus. But finally, it is the correct behavior, and the 100% compatibility with SQL*Plus is expected only when SET CLASSIC=ON. As I was surprised, and as I think it can be dangerous, here is a blog post about it. Basically, be very careful if you are connected with a service name and you want to connect locally.

If you attended my demo about 'From Transportable Tablespaces to Pluggable Databases', where I switch between different instances, you may have seen that I had to run 'connect / as sysdba' two times because the first one failed with invalid username/password:

SQL> connect scott/tiger@//localhost/PDB1
Connected.
...
SQL> connect / as sysdba
USER =
URL = jdbc:oracle:oci8:@//localhost/PDB1
Error Message = ORA-01017: invalid username/password; logon denied
 
Warning: You are no longer connected to ORACLE.
 
SQL> connect / as sysdba
Connected.

This is not a big deal, but it means that SQLcl tried to connect to //localhost/PDB1 when I wanted to connect locally to the instance identified by my ORACLE_SID environment variable. Here, expecting a bequeath connection, I didn't provide a password, so I could not connect to the PDB. But imagine that I use a password, and the password is the same in the two databases… I would have been connected to the wrong database. Just imagine this:

SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- checking something on PROD
SQL> connect sys/password@PROD as sysdba
Connected.
...
-- back to TEST (or at least I think so)
SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- do terrible things
SQL> drop table VERY_IMPORTANT_TABLE;
Table VERY_IMPORTANT_TABLE dropped.
-- now look where I am:
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@PROD AS SYSDBA
CONNECTION_IDENTIFIER:
PROD

Actually, what happens is that when SQLcl is already connected with a connection string (i.e. not locally using bequeath), the next connect command will use the same connection string. This means that:

connect user/password

is actually equivalent to

connect user/password@&_CONNECT_IDENTIFIER

SQL*Plus

This behavior has been introduced in SQLcl but this is not how SQL*Plus works:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect / as sysdba
Connected.
SQL> define _CONNECT_IDENTIFIER
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)

Disconnect

The first solution to avoid this in SQLcl is to always disconnect before you want to connect to a different service:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> disc
 
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> connect sys/oracle as sysdba
Connected.
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@ AS SYSDBA
CONNECTION_IDENTIFIER:
CDB$ROOT

This is why the second time was ok in my case: first one failed with invalid password and then I was disconnected.

TWO_TASK

The second solution is to set an impossible TWO_TASK (or LOCAL in Windows) so that local connections are impossible:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect sys/oracle as sysdba
USER = sys
URL = jdbc:oracle:oci8:@NoWhere
Error Message = ORA-12154: TNS:could not resolve the connect identifier specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere
Error Message = IO Error: Unknown host specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere:1521/NoWhere
Error Message = IO Error: Unknown host specified
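For this example I had set something like this in the shell before starting SQLcl (any unresolvable name will do; 'NoWhere' is just the value I used):

export TWO_TASK=NoWhere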

CLASSIC=ON

The third solution is to run SQLcl in SQL*Plus 100% compatible mode:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> set classic on
SQL> show classic
SQL*Plus mode: ON
SQL> connect / as sysdba
Connected.

Here we have the same behavior as SQL*Plus: no use of current connection string.

The SET CLASSIC ON setting is usually about the output (error messages, autotrace statistics, and a few enhancements made in SQLcl). And the online help still says that it is about output:

SQL> help set classic
SET CLASSIC
SET CLASSIC [ ON | OFF ] Set classic SQL*Plus Settings on
This will allow scripts which expect traditional output to be honored.

However, it seems that this CLASSIC mode is also very important for connection.

Test and show _CONNECT_IDENTIFIER

If you show the connect identifier in the prompt, this may prevent errors:

SQL> set sqlprompt "_connect_identifier> "
//localhost/CDB2> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
//localhost/CDB1> connect sys/oracle as sysdba
Connected.

Always check which database

By the way, when I prepare a script that can do some damage when not run at the correct place, I usually add a test on the DBID at the top of it:

CDB1> whenever sqlerror exit failure;
CDB1> select 0/decode(dbid,'944121612',1,0) from v$database;
 
Error starting at line : 1 in command -
select 0/decode(dbid,'944121612',1,0) from v$database
Error report -
ORA-01476: divisor is equal to zero

Different passwords

Of course, you should have different passwords on production and test databases. However, I prefer to have passwords in a wallet (external password file); then you always have the correct identification, as it is recorded for each service name.
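For reference, creating such a wallet looks roughly like this (paths and credentials invented; mkstore prompts for the wallet password):

mkstore -wrl /home/oracle/wallet -create
mkstore -wrl /home/oracle/wallet -createCredential PROD system manager

With WALLET_LOCATION and SQLNET.WALLET_OVERRIDE=TRUE set in sqlnet.ora, a 'connect /@PROD' then picks up the credentials stored for that service name.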


Cet article SQLcl connect target depends on previous connection est apparu en premier sur Blog dbi services.

Index Bouncy Scan 4

Jonathan Lewis - Fri, 2018-06-01 03:19

There’s always another hurdle to overcome. After I’d finished writing up the “index bouncy scan” as an efficient probing mechanism to find the combinations of the first two columns (both declared not null) of a very large index a follow-up question appeared almost immediately: “what if it’s a partitioned index”.

The problem with "typical" partitioned indexes is that the smallest value of the leading column might appear in any of the partitions, and the combination of that value and the smallest value for the second column might not appear in all the partitions where the smallest value appears. Consider a table of 10 partitions and a locally partitioned index on (val1, val2) where neither column is the partition key. The smallest value of val1 – call it k1 – may appear only in partitions 4, 7, 8, 9, 10; the lowest combination of (val1, val2) – call it (k1, k2) – may appear only in partitions 8 and 10. In a global (or globally partitioned) index the pair (k1, k2) would be at the low (leftmost) end of the index, but to find the pair in a locally partitioned index we have to probe the leftmost end of 10 separate index partitions – and once we've done that each "bounce" requires us to probe 10 index partitions for the first (val1, val2) pair where val1 = k1 and val2 is just greater than k2, or val1 is just greater than k1 and val2 is the minimum for that value of val1. The more partitions we have the greater the number of index partitions we have to probe at each step and the more likely it is that we ought to switch to a brute force index fast full scan with aggregate.

Here’s the starting point for solving the problem (maybe) – I’ll create a simple partitioned table, and use the “bouncy scan” code from the earlier posting with the table and column names adjusted accordingly:


rem
rem     Script:         bouncy_index_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem

create table pt1 (
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
)
nologging
partition by hash (object_id) partitions 4
as
select
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
from
        (select * from all_objects),
        (select rownum n1 from dual connect by level <= 10)
;

alter table pt1 modify(status not null);

execute dbms_stats.gather_table_stats(null,'pt1',granularity=>'ALL',method_opt=>'for all columns size 1')

create index pt1_i1 on pt1(status, namespace) nologging local;

prompt  ==================================================
prompt  Make some rows in the last partition have a status
prompt  that won't be found in the first partition.
prompt  ==================================================

column namespace format 99999999
column partition_name new_value m_part

select  partition_name
from    user_tab_partitions
where   table_name = 'PT1'
order by
        partition_position
;

update pt1 partition (&m_part) set status = 'MISSING' where rownum <= 10;

select
        dbms_mview.pmarker(rowid), status, namespace
from    pt1
where   status = 'MISSING'
;

I’ve created a hash partitioned copy of view all_objects, duplicating it 10 times and created a local index on the columns (status, namespace). My data has two values for status, ‘VALID’ and ‘INVALID’, and there are about 10 values for the namespace. I’ve then updated a few rows in the last partition, giving them a status value that is between the two current values – this is just one little test case to help me check that my code is going to catch all values even if they don’t appear in the first table partition.

Here’s the query from the earlier posting – and it does get the right results – followed by the execution plan:


alter session set statistics_level = all;

set serveroutput off
set linesize 180
set pagesize 60

prompt  =============================================================
prompt  Original Query, showing expensive access for driving minimums
prompt  =============================================================

with bounce1(status, namespace) as (
        select status, namespace
        from    (
                select
                        /*+ index(pt1) no_index_ffs(pt1) */
                        status, namespace,
                        row_number() over(order by status, namespace) rn
                from    pt1
        )
        where
                rn = 1
        union all
        select
                v1.status, v1.namespace
        from    bounce1,
                lateral (
                              select  /*+ index(pt1) no_index_ffs(pt1) no_decorrelate */
                                      pt1.status, pt1.namespace
                              from    pt1
                              where   pt1.status > bounce1.status
                              and     rownum = 1
                ) v1
        where   bounce1.status is not null
        and     bounce1.namespace is not null
),
bounce2 (status, namespace)
as (
        select  status, namespace
        from    bounce1
        where   bounce1.status is not null
        union all
        select  bounce2.status, (select min(pt1.namespace) namespace from pt1 where pt1.status = bounce2.status and pt1.namespace > bounce2.namespace) namespace
        from    bounce2
        where   bounce2.namespace is not null
        and     bounce2.status is not null
)
select * from bounce2
where
        bounce2.namespace is not null
and     bounce2.status is not null      -- > redundant predicate
order by
        status, namespace
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last outline'));


----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                 |      1 |        | 16378 (100)|       |       |     10 |00:00:00.58 |    1869 |       |       |          |
|   1 |  SORT ORDER BY                               |                 |      1 |      4 | 16378   (4)|       |       |     10 |00:00:00.58 |    1869 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |                 |      1 |      4 | 16377   (4)|       |       |     10 |00:00:00.58 |    1869 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |       |       |     12 |00:00:00.58 |    1869 |  1024 |  1024 |          |
|*  4 |     VIEW                                     |                 |      1 |      2 |  8157   (4)|       |       |      2 |00:00:00.58 |    1747 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |       |       |      2 |00:00:00.58 |    1747 |  1024 |  1024 | 2048  (0)|
|*  6 |       VIEW                                   |                 |      1 |      1 |  4047   (4)|       |       |      1 |00:00:00.58 |    1732 |       |       |          |
|*  7 |        WINDOW SORT PUSHED RANK               |                 |      1 |    617K|  4047   (4)|       |       |      1 |00:00:00.58 |    1732 |  2048 |  2048 | 2048  (0)|
|   8 |         PARTITION HASH ALL                   |                 |      1 |    617K|  1759   (2)|     1 |     4 |    617K|00:00:00.34 |    1732 |       |       |          |
|   9 |          INDEX FULL SCAN                     | PT1_I1          |      4 |    617K|  1759   (2)|     1 |     4 |    617K|00:00:00.15 |    1732 |       |       |          |
|  10 |       NESTED LOOPS                           |                 |      2 |      1 |  4110   (4)|       |       |      1 |00:00:00.01 |      15 |       |       |          |
|  11 |        RECURSIVE WITH PUMP                   |                 |      2 |        |            |       |       |      2 |00:00:00.01 |       0 |       |       |          |
|  12 |        VIEW                                  | VW_LAT_1BBF5C63 |      2 |      1 |     9   (0)|       |       |      1 |00:00:00.01 |      15 |       |       |          |
|* 13 |         COUNT STOPKEY                        |                 |      2 |        |            |       |       |      1 |00:00:00.01 |      15 |       |       |          |
|  14 |          PARTITION HASH ALL                  |                 |      2 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      15 |       |       |          |
|* 15 |           INDEX RANGE SCAN                   | PT1_I1          |      5 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      15 |       |       |          |
|  16 |     SORT AGGREGATE                           |                 |     10 |      1 |            |       |       |     10 |00:00:00.01 |     122 |       |       |          |
|  17 |      PARTITION HASH ALL                      |                 |     10 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     122 |       |       |          |
|  18 |       FIRST ROW                              |                 |     40 |      1 |     9   (0)|       |       |     27 |00:00:00.01 |     122 |       |       |          |
|* 19 |        INDEX RANGE SCAN (MIN/MAX)            | PT1_I1          |     40 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     122 |       |       |          |
|  20 |     RECURSIVE WITH PUMP                      |                 |     10 |        |            |       |       |     10 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("BOUNCE2"."NAMESPACE" IS NOT NULL AND "BOUNCE2"."STATUS" IS NOT NULL))
   4 - filter("BOUNCE1"."STATUS" IS NOT NULL)
   6 - filter("RN"=1)
   7 - filter(ROW_NUMBER() OVER ( ORDER BY "STATUS","NAMESPACE")<=1)
  13 - filter(ROWNUM=1)
  15 - access("PT1"."STATUS">"BOUNCE1"."STATUS")
  19 - access("PT1"."STATUS"=:B1 AND "PT1"."NAMESPACE">:B2)

In terms of time the query doesn’t seem to have done too badly – but I’m only using a small data set and we can see from the numbers that we haven’t produced an efficient plan. Operations 8 and 9 tell us that we’ve done an index full scan on every single partition before passing the data up for a window sort operation. That’s clearly a bad thing, but we did have an index() hint at that bit of code that worked very well for the simple (global) index so maybe we should have taken that out before testing (except it doesn’t help much to do so since Oracle still scans all 617K rows, changing to an index fast full scan).

Apart from that massive load the rest of the query looks quite efficient. We keep seeing “partition hash all” of course – whatever we do we tend to do it to 4 separate partitions one after the other – but everything else we do looks rather efficient. But there is another problem – and this is where the importance of inserting the rows with status = ‘MISSING’ shows up: this query didn’t find them! We have a predicate “rownum = 1” in the second half of the bounce1 recursive subquery and because we’re using a partitioned index we’ve managed to find a row that looks appropriate in an early partition when the row we really needed doesn’t appear until the last partition.

Let’s return to this problem later – first we want to check if the rest of the query will run efficiently and give us the right answer if we can find some way of getting the starting values; so let’s use a strategy we’ve used before – replace the bounce1 subquery with a union all select from dual:


with bounce1(status, namespace) as (
        select status, namespace
        from    (
                select 'INVALID' status, 1 namespace from dual
                union all
                select 'MISSING', 4 from dual
                union all
                select 'VALID', 1 from dual
        )
),
bounce2 (status, namespace)
as (
        select  status, namespace
        from    bounce1
        where   bounce1.status is not null
        union all
        select  bounce2.status, (select min(pt1.namespace) namespace from pt1 where pt1.status = bounce2.status and pt1.namespace > bounce2.namespace) namespace
        from    bounce2
        where   bounce2.namespace is not null
        and     bounce2.status is not null
)
select * from bounce2
where
        bounce2.namespace is not null
and     bounce2.status is not null      -- > redundant predicate
order by
        status, namespace
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last partition outline'));

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |        |      1 |        |    76 (100)|       |       |     11 |00:00:00.01 |     132 |       |       |          |
|   1 |  SORT ORDER BY                             |        |      1 |      6 |    76   (2)|       |       |     11 |00:00:00.01 |     132 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                     |        |      1 |      6 |    75   (0)|       |       |     11 |00:00:00.01 |     132 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|        |      1 |        |            |       |       |     14 |00:00:00.01 |     132 |  1024 |  1024 |          |
|   4 |     VIEW                                   |        |      1 |      3 |     6   (0)|       |       |      3 |00:00:00.01 |       0 |       |       |          |
|   5 |      UNION-ALL                             |        |      1 |        |            |       |       |      3 |00:00:00.01 |       0 |       |       |          |
|   6 |       FAST DUAL                            |        |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |       |       |          |
|   7 |       FAST DUAL                            |        |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |       |       |          |
|   8 |       FAST DUAL                            |        |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |       |       |          |
|   9 |     SORT AGGREGATE                         |        |     11 |      1 |            |       |       |     11 |00:00:00.01 |     132 |       |       |          |
|  10 |      PARTITION HASH ALL                    |        |     11 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     132 |       |       |          |
|  11 |       FIRST ROW                            |        |     44 |      1 |     9   (0)|       |       |     27 |00:00:00.01 |     132 |       |       |          |
|* 12 |        INDEX RANGE SCAN (MIN/MAX)          | PT1_I1 |     44 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     132 |       |       |          |
|  13 |     RECURSIVE WITH PUMP                    |        |     10 |        |            |       |       |     11 |00:00:00.01 |       0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("BOUNCE2"."NAMESPACE" IS NOT NULL AND "BOUNCE2"."STATUS" IS NOT NULL))
  12 - access("PT1"."STATUS"=:B1 AND "PT1"."NAMESPACE">:B2)

This gets us the right answer, very efficiently. There are only 11 rows in the result set and we have an average of 12 buffer visits per row – which is reasonable given that we (probably) have to probe 4 index partitions for every row. So that's 11 rows * 4 probes * 3 buffer visits per probe – which seems just about optimal.

The next step is to figure out a way of getting the (three in our case) starting points while using a partitioned index. Here’s a query we can use for bounce1:


with bounce1(status, namespace) as (
        select
                (select min(status) from pt1) status,
                (select /*+ index(pt1) */ min(namespace) from pt1 where status = (select min(status) from pt1)) namespace
        from
                dual
        union all
        select
                v1.status, v2.namespace
        from    bounce1,
                lateral(
                        (select /*+ index(pt1) */ min(pt1.status) status from pt1 where pt1.status > bounce1.status)
                )       v1,
                lateral(
                        select /*+ index(pt1) */ min(pt1.namespace) namespace
                        from pt1
                        where pt1.status =  (select min(pt2.status) from pt1 pt2 where pt2.status > bounce1.status)
                )       v2
        where
                bounce1.status is not null
        and     bounce1.namespace is not null
)
select * from bounce1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition outline'));

It looks a little convoluted with all the inline select statements, but they all do very small amounts of work and they’re only reading the index leaf blocks that you have to read. We know from yesterday’s post that Oracle can execute the scalar subqueries at lines 3 and 4 very efficiently; we can hope (and check) that the lateral() subqueries driven by the single values from the recursive row in bounce1 will operate just as efficiently – and here’s the plan:


----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |   166 (100)|       |       |      4 |00:00:00.01 |     132 |
|   1 |  VIEW                                     |                 |      1 |      2 |   166   (0)|       |       |      4 |00:00:00.01 |     132 |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |       |       |      4 |00:00:00.01 |     132 |
|   3 |    SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   4 |     PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   5 |      INDEX FULL SCAN (MIN/MAX)            | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   6 |    SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      24 |
|   7 |     PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|   8 |      FIRST ROW                            |                 |      4 |      1 |     9   (0)|       |       |      4 |00:00:00.01 |      24 |
|*  9 |       INDEX RANGE SCAN (MIN/MAX)          | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|  10 |        SORT AGGREGATE                     |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|  11 |         PARTITION HASH ALL                |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  12 |          INDEX FULL SCAN (MIN/MAX)        | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  13 |    FAST DUAL                              |                 |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |
|  14 |    NESTED LOOPS                           |                 |      4 |      1 |   146   (0)|       |       |      3 |00:00:00.01 |      96 |
|  15 |     NESTED LOOPS                          |                 |      4 |      1 |   137   (0)|       |       |      3 |00:00:00.01 |      36 |
|  16 |      RECURSIVE WITH PUMP                  |                 |      4 |        |            |       |       |      3 |00:00:00.01 |       0 |
|  17 |      VIEW                                 | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      36 |
|  18 |       SORT AGGREGATE                      |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  19 |        PARTITION HASH ALL                 |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  20 |         FIRST ROW                         |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 21 |          INDEX RANGE SCAN (MIN/MAX)       | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  22 |     VIEW                                  | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      60 |
|  23 |      SORT AGGREGATE                       |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      60 |
|  24 |       PARTITION HASH ALL                  |                 |      3 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  25 |        FIRST ROW                          |                 |     12 |      1 |     9   (0)|       |       |      5 |00:00:00.01 |      60 |
|* 26 |         INDEX RANGE SCAN (MIN/MAX)        | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  27 |          SORT AGGREGATE                   |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  28 |           PARTITION HASH ALL              |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  29 |            FIRST ROW                      |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 30 |             INDEX RANGE SCAN (MIN/MAX)    | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - access("STATUS"=)
  21 - access("PT1"."STATUS">"BOUNCE1"."STATUS")
  26 - access("PT1"."STATUS"=)
  30 - access("PT2"."STATUS">:B1)

Although we have done lots of individual probes into the index they have all been very efficient using a min/max access and an average of about 3 buffer visits per probe. So we can now insert this new bounce1 subquery into the previous query in place of the union all of dual and check that the two pieces of the query cooperate.


with bounce1(status, namespace) as (
        select
                (select min(status) from pt1) status,
                (select /*+ index(pt1) */ min(namespace) from pt1 where status = (select min(status) from pt1)) namespace
        from
                dual
        union all
        select
                v1.status, v2.namespace
        from    bounce1,
                lateral(
                        (select /*+ index(pt1) */ min(pt1.status) status from pt1 where pt1.status > bounce1.status)
                )       v1,
                lateral(
                        select /*+ index(pt1) */ min(pt1.namespace) namespace
                        from pt1
                        where pt1.status =  (select min(pt2.status) from pt1 pt2 where pt2.status > bounce1.status)
                )       v2
        where
                bounce1.status is not null
        and     bounce1.namespace is not null
),
bounce2 (status, namespace)
as (
        select  status, namespace from bounce1
        union all
        select  bounce2.status, (select min(t.namespace) namespace from pt1 t where t.namespace > bounce2.namespace and status=bounce2.status) namespace
        from    bounce2
        where   bounce2.status is not null
        and     bounce2.namespace is not null
)
select  *
from    bounce2
where   namespace is not null
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition outline'));

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |                 |      1 |        |   396 (100)|       |       |     11 |00:00:00.01 |     266 |
|*  1 |  VIEW                                       |                 |      1 |      4 |   396   (1)|       |       |     11 |00:00:00.01 |     266 |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |       |       |     15 |00:00:00.01 |     266 |
|   3 |    VIEW                                     |                 |      1 |      2 |   166   (0)|       |       |      4 |00:00:00.01 |     132 |
|   4 |     UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |       |       |      4 |00:00:00.01 |     132 |
|   5 |      SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   6 |       PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   7 |        INDEX FULL SCAN (MIN/MAX)            | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   8 |      SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      24 |
|   9 |       PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|  10 |        FIRST ROW                            |                 |      4 |      1 |     9   (0)|       |       |      4 |00:00:00.01 |      24 |
|* 11 |         INDEX RANGE SCAN (MIN/MAX)          | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|  12 |          SORT AGGREGATE                     |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|  13 |           PARTITION HASH ALL                |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  14 |            INDEX FULL SCAN (MIN/MAX)        | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  15 |      FAST DUAL                              |                 |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |
|  16 |      NESTED LOOPS                           |                 |      4 |      1 |   146   (0)|       |       |      3 |00:00:00.01 |      96 |
|  17 |       NESTED LOOPS                          |                 |      4 |      1 |   137   (0)|       |       |      3 |00:00:00.01 |      36 |
|  18 |        RECURSIVE WITH PUMP                  |                 |      4 |        |            |       |       |      3 |00:00:00.01 |       0 |
|  19 |        VIEW                                 | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      36 |
|  20 |         SORT AGGREGATE                      |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  21 |          PARTITION HASH ALL                 |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  22 |           FIRST ROW                         |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 23 |            INDEX RANGE SCAN (MIN/MAX)       | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  24 |       VIEW                                  | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      60 |
|  25 |        SORT AGGREGATE                       |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      60 |
|  26 |         PARTITION HASH ALL                  |                 |      3 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  27 |          FIRST ROW                          |                 |     12 |      1 |     9   (0)|       |       |      5 |00:00:00.01 |      60 |
|* 28 |           INDEX RANGE SCAN (MIN/MAX)        | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  29 |            SORT AGGREGATE                   |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  30 |             PARTITION HASH ALL              |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  31 |              FIRST ROW                      |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 32 |               INDEX RANGE SCAN (MIN/MAX)    | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  33 |    SORT AGGREGATE                           |                 |     11 |      1 |            |       |       |     11 |00:00:00.01 |     134 |
|  34 |     PARTITION HASH ALL                      |                 |     11 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     134 |
|  35 |      FIRST ROW                              |                 |     44 |      1 |     9   (0)|       |       |     27 |00:00:00.01 |     134 |
|* 36 |       INDEX RANGE SCAN (MIN/MAX)            | PT1_I1          |     44 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     134 |
|  37 |    RECURSIVE WITH PUMP                      |                 |     10 |        |            |       |       |     11 |00:00:00.01 |       0 |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("NAMESPACE" IS NOT NULL)
  11 - access("STATUS"=)
  23 - access("PT1"."STATUS">"BOUNCE1"."STATUS")
  28 - access("PT1"."STATUS"=)
  32 - access("PT2"."STATUS">:B1)
  36 - access("STATUS"=:B1 AND "T"."NAMESPACE">:B2)

Job done. We've found the distinct set of pairs without having to scan the entire index. We've found 11 pairs at a total cost of 266 buffer gets. For comparative purposes the query totalled 56 buffer visits when I recreated the table as a non-partitioned table (again updating a few rows to status = 'MISSING').

It's important to note that this query can only work this efficiently in 12.2 (and possibly in a suitably patched 11.2.0.4) because of the optimizer's ability to use the min/max operation for queries like: "select max(col2) where col1 = (select max(…))". When I ran the final query on 12.1.0.2 the execution plan changed around lines 11 and 28: where 12.2.0.1 could use the aggregate subquery to drive the min/max scan, 12.1.0.2 did a real range scan with aggregate (which was extremely expensive at one point).

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------------------------------------------
|  23 |       VIEW                                  | VW_LAT_C2D92EFA |      3 |      1 |  1206   (2)|       |       |      3 |00:00:05.43 |    2414 |
|  24 |        SORT AGGREGATE                       |                 |      3 |      1 |            |       |       |      3 |00:00:05.43 |    2414 |
|  25 |         PARTITION HASH ALL                  |                 |      3 |    422K|  1206   (2)|     1 |     4 |    845K|00:00:05.84 |    2414 |
|* 26 |          INDEX RANGE SCAN                   | PT1_I1          |     12 |    422K|  1206   (2)|     1 |     4 |    845K|00:00:02.03 |    2414 |
|  27 |           SORT AGGREGATE                    |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  28 |            PARTITION HASH ALL               |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  29 |             FIRST ROW                       |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 30 |              INDEX RANGE SCAN (MIN/MAX)     | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
------------------------------------------------------------------------------------------------------------------------------------------------------

As you can see, this makes a dramatic difference to the work Oracle has to do – in this case 2,414 buffer gets and 845K rows examined. As I said in yesterday's post – there's a patch for 11.2.0.4, so there could be a patch for 12.1.0.2 if you ask for it, but it looks like no-one has done so yet.

Footnote:

I could have used a lateral() view in the first half of bounce1 to reduce the reported number of probes of pt1_i1 in the plan – but it made the code extremely messy, I had to include a /*+ no_decorrelate */ hint in it, and it increased the number of buffer visits slightly because the optimizer seemed to lose the option for a min/max scan in this particular lateral join.

 

How To Prototype In Time and Budget Constraint Projects

Nilesh Jethwa - Thu, 2018-05-31 23:07

In the software development industry, you can expect that you'll encounter clients who are too cheap, who want more than they pay for, who want things done impossibly quickly, and who are just plain difficult to … Continue reading

By: MockupTiger Wireframes

Initial release of PeopleTools 8.57 will be in the Oracle Cloud

PeopleSoft Technology Blog - Thu, 2018-05-31 16:44

Oracle recently announced in a Tech Update that PeopleTools 8.57 will be generally available to all PeopleSoft customers for use first in the Oracle Cloud.  Shortly after, most likely with the third PeopleTools patch (8.57.03), it will be available for on premises environments. This, understandably, has generated some questions, so I thought I'd take a few minutes to clear things up. 

When we talk about running PeopleSoft in the cloud, we are talking about taking one or more of your existing PeopleSoft environments and, instead of running them in your data center, running them on hardware that you subscribe to and manage in the cloud.  In this case, the cloud is Oracle’s Infrastructure as a Service, also called Oracle Cloud Infrastructure (OCI).   When you move one of your environments to the cloud, it’s backed up to files, copied to a cloud repository and provisioned from the repository.  We call that process ‘Lift and Shift’.  When it’s done, your application with your data, your customizations, and your configurations is running in the cloud.  Even though it’s in the cloud, you are responsible for maintaining it.  What has changed is the infrastructure – servers, storage, network – that the application is running on.

Just to be clear, there is no Software-as-a-Service (SaaS) version of PeopleSoft or PeopleTools, nor does Oracle have any plans to release one.   So, thinking it through, the 8.57 release of PeopleTools that we make available in the cloud is exactly the same as the version you will install in your on premises environment.  Why then are we releasing it first in the cloud?

There are several reasons, but the most significant is to build awareness of the benefits of running PeopleSoft applications in the Oracle Cloud, and what that can mean to you.  We believe that in the long term, the best way to run PeopleSoft applications is to do so in the cloud.  To make it even better, one of the major initiatives over the past couple of years is the release of PeopleSoft Cloud Manager.  With Cloud Manager, many of the processes that are time consuming or difficult, particularly around lifecycle management, have been improved or automated.  The PeopleTools upgrade, for instance, is automated.  Just choose one of your application images from your cloud repository and start it with the latest Tools version, and the app will be upgraded as part of the provisioning process.  It’s that easy.  And that’s just one example.

It’s pretty easy to take advantage of the initial releases of PeopleTools in the cloud.  In fact, as I write this there is a 30-day Trial Program that gives you free credits to try it out.  Be sure to follow the correct OBE when installing.  For more information go to this link or talk to your Oracle Account team.  There is also a PeopleSoft Key Concept page about running PeopleSoft on the Oracle Cloud and PeopleSoft Cloud Manager where you can get more information.  It only takes a small investment to try this out, and it could lead to major improvements in how you manage your applications.

The Single Responsibility principle

Andrew Clarke - Thu, 2018-05-31 16:14
The Single Responsibility principle is the foundation of modular programming, and is probably the most important principle in the SOLID set. Many of the other principles flow from it.

It is quite simple: a program unit should do only one thing. A procedure should implement a single task; a package should gather together procedures which solve a set of related tasks. Consider the Oracle library package UTL_FILE. Its responsibility is quite clear: it is for working with external files. It implements all the operations necessary to work with OS files: opening them, closing them, reading and writing, etc. It defines a bespoke suite of exceptions too.

Each procedure in the package has a clear responsibility too. For instance, fclose() closes a single referenced file whereas fclose_all() closes all open files. Now, the package designers could have implemented that functionality as a single procedure, with different behaviours depending on whether the file parameter was populated or unpopulated. This might seem a simpler implementation, because it would be one fewer procedure. But the interface has actually become more complicated: essentially we have a flag parameter, which means we need to know a little bit more about the internal processing of fclose(). It would have made the package just a little bit harder to work with without saving any actual code.
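To make the contrast concrete, here is a minimal PL/SQL sketch of the two calls at the call site (the directory object and file name are made up for illustration):

declare
    l_file utl_file.file_type;
begin
    -- open a file for writing in a pre-created directory object (hypothetical names)
    l_file := utl_file.fopen('APP_DIR', 'report.txt', 'w');
    utl_file.put_line(l_file, 'some output');
    utl_file.fclose(l_file);   -- closes just this one file
    utl_file.fclose_all;       -- closes every file the session still has open
end;
/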

Of course, it's pretty easy to define the Single Responsibility of a low level feature like file handling. We might think there are some superficial similarities with displaying information to the screen, but it's fairly obvious that these are unrelated and so we need two packages, UTL_FILE and DBMS_OUTPUT. When it comes to our own code, especially higher level packages, it can be harder to define the boundaries. At the broadest level we can define domains - SALES, HR, etc. But we need more than one package per domain: how do we decide the responsibilities of individual packages?

Robert C Martin defines the Single Responsibility principle as: "A class should have only one reason to change." Reasons for change can be many and various. In database applications dependence on tables is a primary one. So procedures which work on a common set of tables may well belong together. But there are at least two sets of privileges for data: reading and manipulating. So it's likely we will need a package which gathers together reporting type queries which can be granted to read-only users and a package which executes DML statements which can be granted to more privileged users. Maybe our domain requires special processing, such as handling sensitive data; procedures for implementing that business logic will belong in separate packages.
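A sketch of how that split might look, using hypothetical names in a SALES domain:

-- reporting API: grant execute to read-only users
create or replace package sales_reporting as
    function order_count(p_customer_id in number) return number;
end sales_reporting;
/

-- manipulation API: grant execute to privileged users only
create or replace package sales_maintenance as
    procedure create_order(p_customer_id in number, p_amount in number);
end sales_maintenance;
/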

Single responsibility becomes a matrix, with dependencies along one axis and audience of users along another.

The advantages of Single Responsibility should be obvious. It allows us to define a cohesive package, collecting together all the related functionality, which makes it easy for others to reuse it. It also allows us to define private routines in a package body, which reduces the amount of code we have to maintain while giving us a mechanism for preventing other developers from using it. Restricting the features to a single responsibility means unrelated functions are not coupled together. This gives a better granularity for granting the least privileges necessary to users of our code.

Part of the Designing PL/SQL Programs series

Oracle 18c Autonomous Health Framework (AHF) - Part 1

Syed Jaffar - Thu, 2018-05-31 10:03
Recently I had to present at the eProseed AnualTech conference in Luxembourg, and I was asked to present something about Oracle 18c.

Obviously I didn't want to repeat what many experts have already said about the Autonomous Database. I decided instead to pick a topic that really helps DBAs, administrators and, ultimately, the organization. I was really fascinated by the Oracle 18c Autonomous Health Framework concepts and decided to do a presentation on this topic.

Working in a complex and huge Oracle environment, I know where most of our energy and time is spent as DBAs or system administrators: we always focus on avoiding run-time availability and performance issues. In a complex and critical environment, every other day you will face a new challenge, and as a DBA you must be on your toes during business hours.

As DBAs, most importantly, we need to ensure database availability and, at the same time, ensure the database delivers the same performance 24x7. Imagine getting stuck with latches, an instance crash, a node crash, or someone changing the permissions/ownership of the binaries: you will spend hours and hours fixing the issues and finding their root cause.

With the 18c Autonomous Health Framework, it's easier to avoid and auto-fix run-time availability and performance issues. There are 8 components that make up this framework. Some of them are already present in 12.2, but all of them are configured automatically upon 18c configuration and run 24x7 in daemon mode. Also, 3 of the components have machine-learning capabilities to fix issues automatically.

I will start discussing the 8 components in the next part of this series. Stay tuned for more on this topic.



Patching ODA lite to 12.2.1.3.0

Yann Neuhaus - Thu, 2018-05-31 09:57

Here is the latest patch for your ODA, and it seems that the Oracle documentation for this patch is not exactly the procedure you'll have to follow to successfully patch your appliance. I recently updated an X6-2M and an X6-2L to this latest release, and here is how to do it. In this example I was patching from 12.1.2.12.0; no intermediate patch was needed.

1) Download the patch

It seems odd but sometimes finding the corresponding patch is not so easy! With the patch number, it’s more convenient. For 12.2.1.3.0 with the dcs stack the number is 27648057. This patch will update all the components:  dcs (odacli), operating system, bios/firmwares, ilom, GI and dbhomes.

Copy the patch in a temporary folder on the server, for example /opt/patch. You’ll need to be root to apply the patch.

2) Check the current versions and free up space on disk

I recommend checking the current versions first. It's easy:

odacli describe-component

System Version
---------------
12.1.2.12.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.1.2.12.0           up-to-date
GI                                        12.1.0.2.170814       up-to-date
DB {
[ OraDB12102_home1 ]                      12.1.0.2.170814       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.170814       up-to-date
}
ILOM                                      3.2.7.26.a.r112632    up-to-date
BIOS                                      39050100              up-to-date
OS                                        6.8                   up-to-date
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYABR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

Don’t worry about the Available Version column: it’s only valid after you register a new patch. Your ODA doesn’t check for the latest patch online.

For free space, check that the folders /, /u01 and /opt have enough free GB for the process. At least 10GB each, I think.
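For example, a quick sanity check before starting (not part of the official procedure):

df -h / /u01 /opt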

3) Prepare the patch files

It’s slightly different from previous versions: only the first zipfile has to be uncompressed and registered. The 2 other files can be registered directly without unzipping them.

cd /opt/patch
unzip p27648057_122130_Linux-x86-64_1of3.zip

odacli update-repository -f /opt/patch/oda-sm-12.2.1.3.0-180504-server1of3.zip
odacli update-repository -f /opt/patch/p27648057_122130_Linux-x86-64_2of3.zip
odacli update-repository -f /opt/patch/p27648057_122130_Linux-x86-64_3of3.zip

4) Update the dcs-agent

First you’ll have to update the dcs-agent:

odacli update-dcsagent -v 12.2.1.3.0
{
"jobId" : "150b3486-cfb2-4b32-b751-0ed89ce3d7be",
"status" : "Created",
"message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
"reports" : [ ],
"createTimestamp" : "May 22, 2018 10:31:50 AM CEST",
"resourceList" : [ ],
"description" : "DcsAgent patching",
"updatedTime" : "May 22, 2018 10:31:50 AM CEST"
}

As for every kind of operation you do with odacli, you receive a jobId you can monitor:

odacli describe-job -i "150b3486-cfb2-4b32-b751-0ed89ce3d7be"

Job details
----------------------------------------------------------------
                     ID:  150b3486-cfb2-4b32-b751-0ed89ce3d7be
            Description:  DcsAgent patching
                 Status:  Success
                Created:  May 22, 2018 10:31:50 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
dcsagent rpm verification                May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:50 AM CEST       Success
Patch location validation                May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:50 AM CEST       Success
dcs-agent upgrade                        May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:51 AM CEST       Success

This update takes only a minute.

Check the versions again and you will see a new component reporting an 18c version number, which is quite odd:

odacli describe-component
System Version
---------------
12.2.1.3.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.1.2.12.0           12.2.1.3.0
GI                                        12.1.0.2.170814       12.2.0.1.180116
DB {
[ OraDB12102_home1 ]                      12.1.0.2.170814       12.1.0.2.180116
[ OraDB11204_home1 ]                      11.2.0.4.170814       11.2.0.4.180116
}
DCSAGENT                                  18.1.3.0.0            up-to-date
ILOM                                      3.2.7.26.a.r112632    up-to-date
BIOS                                      39050100              39090000
OS                                        6.8                   6.9
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYABR3Q              kpyagr3q
}
FIRMWAREDISK                              0R3Q                  up-to-date

5) Update the server

Updating the server will actually update not only the operating system but also the ILOM, the BIOS, the firmware of the internal disks and the GI. For the OS, Oracle Linux will go from 6.8 to 6.9, and the update process assumes that no additional packages have been installed. If you installed additional packages, please remove them, as they can prevent the patch from applying correctly. Even devel packages should be removed: the upgrade of the normal packages linked to them will not work because of the dependencies. You can easily remove the installed packages with rpm -e, for example:

rpm -e openssl-devel-1.0.1e-48.el6_8.4.x86_64
rpm -e krb5-devel-1.10.3-57.el6.x86_64
rpm -e zlib-devel-1.2.3-29.el6.x86_64
rpm -e keyutils-libs-devel-1.4-5.0.1.el6.x86_64
rpm -e libcom_err-devel-1.42.8-1.0.2.el6.x86_64
rpm -e libselinux-devel-2.0.94-7.el6.x86_64
rpm -e libsepol-devel-2.0.41-4.el6.x86_64
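To double-check that no devel packages are left before launching the patch, something like this helps:

rpm -qa | grep -i devel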

Now you can safely run the patching:

odacli update-server -v 12.2.1.3.0

This update is the longest one (between 30 minutes and 1 hour), during which you should see your server reboot. As usual, check the status of the job with describe-job:

odacli describe-job -i "27d2195f-f16b-44d8-84e0-6af6e48ccad7"

At the end of the process, describe-job will look like that:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
dcs-controller upgrade                   May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
Patch location validation                May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
dcs-cli upgrade                          May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
Creating repositories using yum          May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:10 AM CEST       Success
Applying HMP Patches                     May 23, 2018 11:07:10 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
Patch location validation                May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
oda-hw-mgmt upgrade                      May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
Creating repositories using yum          May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:09 AM CEST       Success
Applying OS Patches                      May 23, 2018 11:09:09 AM CEST       May 23, 2018 11:11:04 AM CEST       Success
OSS Patching                             May 23, 2018 11:11:04 AM CEST       May 23, 2018 11:11:04 AM CEST       Success
Applying Firmware Disk Patches           May 23, 2018 11:11:04 AM CEST       May 23, 2018 11:11:08 AM CEST       Success
Applying Firmware Expander Patches       May 23, 2018 11:11:08 AM CEST       May 23, 2018 11:11:13 AM CEST       Success
Applying Firmware Controller Patches     May 23, 2018 11:11:13 AM CEST       May 23, 2018 11:11:16 AM CEST       Success
Checking Ilom patch Version              May 23, 2018 11:11:17 AM CEST       May 23, 2018 11:11:19 AM CEST       Success
Patch location validation                May 23, 2018 11:11:19 AM CEST       May 23, 2018 11:11:20 AM CEST       Success
Apply Ilom patch                         May 23, 2018 11:11:21 AM CEST       May 23, 2018 11:11:22 AM CEST       Success
Copying Flash Bios to Temp location      May 23, 2018 11:11:22 AM CEST       May 23, 2018 11:11:22 AM CEST       Success
Starting the clusterware                 May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:11:23 AM CEST       Success
Creating GI home directories             May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:11:23 AM CEST       Success
Cloning Gi home                          May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:13:53 AM CEST       Success
Configuring GI                           May 23, 2018 11:13:53 AM CEST       May 23, 2018 11:14:06 AM CEST       Success
Running GI upgrade root scripts          May 23, 2018 11:14:06 AM CEST       May 23, 2018 11:29:04 AM CEST       Success
Resetting DG compatibility               May 23, 2018 11:29:04 AM CEST       May 23, 2018 11:29:09 AM CEST       Success
Running GI config assistants             May 23, 2018 11:29:09 AM CEST       May 23, 2018 11:30:10 AM CEST       Success
restart oakd                             May 23, 2018 11:30:13 AM CEST       May 23, 2018 11:30:23 AM CEST       Success
Updating GiHome version                  May 23, 2018 11:30:23 AM CEST       May 23, 2018 11:30:25 AM CEST       Success
preRebootNode Actions                    May 23, 2018 11:30:39 AM CEST       May 23, 2018 11:31:25 AM CEST       Success
Reboot Ilom                              May 23, 2018 11:31:25 AM CEST       May 23, 2018 11:31:25 AM CEST       Success

Don’t forget that if there is a problem during the patching process (you forgot to remove an additional rpm, for example), you can relaunch the patching and it will skip the already patched components. But you will lose a lot of time! Please check the component versions after the reboot with describe-component.

6) Patch the dbhomes

You now need to patch the dbhomes separately. First of all list them:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
86b6a068-55a8-4171-9f94-48b86f135065     OraDB12102_home1     12.1.0.2.170814 (26680878, 26609798)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
c6e7d556-5785-41d8-a120-ed3ef756848a     OraDB11204_home1     11.2.0.4.170814 (26609929, 26609445)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

And then patch the first one:

odacli update-dbhome -v 12.2.1.3 -i c6e7d556-5785-41d8-a120-ed3ef756848a

It will also apply the datapatch on all the primary databases connected to this home. If you don’t have any database linked to the dbhome, patching is fast:

odacli describe-job -i "9defa52d-2997-42ea-9bd3-aafdbf0a4dc5"
Job details
----------------------------------------------------------------
                     ID:  9defa52d-2997-42ea-9bd3-aafdbf0a4dc5
            Description:  DB Home Patching: Home Id is c6e7d556-5785-41d8-a120-ed3ef756848a
                 Status:  Success
                Created:  May 23, 2018 11:43:55 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           May 23, 2018 11:43:56 AM CEST       May 23, 2018 11:43:58 AM CEST       Success
Patch location validation                May 23, 2018 11:43:58 AM CEST       May 23, 2018 11:43:59 AM CEST       Success
Opatch updation                          May 23, 2018 11:44:19 AM CEST       May 23, 2018 11:44:20 AM CEST       Success
Patch conflict check                     May 23, 2018 11:44:20 AM CEST       May 23, 2018 11:44:27 AM CEST       Success
db upgrade                               May 23, 2018 11:44:27 AM CEST       May 23, 2018 11:46:01 AM CEST       Success

About 2 minutes for an empty dbhome.

And for a dbhome linked to databases:

odacli update-dbhome -v 12.2.1.3 -i 86b6a068-55a8-4171-9f94-48b86f135065
{
"jobId" : "8083dc43-61fe-49da-8081-43b4e5257e95",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "May 23, 2018 12:46:06 PM CEST",
"resourceList" : [ ],
"description" : "DB Home Patching: Home Id is 86b6a068-55a8-4171-9f94-48b86f135065",
"updatedTime" : "May 23, 2018 12:46:06 PM CEST"
}

odacli describe-job -i "e871f741-7dd6-49c6-9b4a-af0d68e647e7"

Job details
----------------------------------------------------------------
ID:  e871f741-7dd6-49c6-9b4a-af0d68e647e7
Description:  DB Home Patching: Home Id is 86b6a068-55a8-4171-9f94-48b86f135065
Status:  Success
Created:  May 23, 2018 1:37:00 PM CEST
Message:  WARNING::Failed to run datapatch on db DB01TST##WARNING::Failed to run datapatch on db DB03TST

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           May 23, 2018 1:37:21 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Patch location validation                May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Opatch updation                          May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Patch conflict check                     May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
db upgrade                               May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:57 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:37:57 PM CEST        May 23, 2018 1:38:26 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:38:26 PM CEST        May 23, 2018 1:38:51 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:38:51 PM CEST        May 23, 2018 1:39:49 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:39:49 PM CEST        May 23, 2018 1:40:28 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:40:28 PM CEST        May 23, 2018 1:40:53 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:40:53 PM CEST        May 23, 2018 1:41:19 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:41:19 PM CEST        May 23, 2018 1:41:44 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:41:44 PM CEST        May 23, 2018 1:42:14 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:42:14 PM CEST        May 23, 2018 1:42:48 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:42:48 PM CEST        May 23, 2018 1:43:22 PM CEST        Success

It’s longer, but less than 10 minutes in this example. You can see the number of databases here (one SqlPatch upgrade line per database). The job is reported as successful, but actually not quite, because 2 warnings are raised. And be careful, because the Message field in the job details is truncated: check /opt/oracle/dcs/log/dcs-agent.log for the extended warning messages.

Always check the DB Version after updating all the dbhomes; this is the new version of the binaries:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
86b6a068-55a8-4171-9f94-48b86f135065     OraDB12102_home1     12.1.0.2.180116 (26925218, 26925263)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
c6e7d556-5785-41d8-a120-ed3ef756848a     OraDB11204_home1     11.2.0.4.180116 (26609929, 26925576)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

If the update job was genuinely successful (no warnings), jump to step 8.

7) Update the databases where datapatch failed

For all the databases where datapatch could not be applied, you’ll have to do it manually. I discovered that if your database is not in the AMERICAN_AMERICA language and territory (FRENCH_FRANCE in my case), odacli will not be able to apply the datapatch on the database. Let’s check the datapatch status on the database, set the NLS_LANG environment variable to AMERICAN_AMERICA and apply the patch on these databases:

su - oracle
. oraenv <<< DB01TST
cd $ORACLE_HOME/OPatch
sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 26609798
Action : APPLY
Action Time : 24-SEP-2017 23:50:14
Description : DATABASE BUNDLE PATCH 12.1.0.2.170814
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_G100316_CD
BROOT_2017Sep24_23_48_22.log
Status : SUCCESS

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:40:27
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_40_13.log
Status : FAILED

PL/SQL procedure successfully completed.

exit
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
./datapatch -verbose
sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 26609798
Action : APPLY
Action Time : 24-SEP-2017 23:50:14
Description : DATABASE BUNDLE PATCH 12.1.0.2.170814
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_G100316_CD
BROOT_2017Sep24_23_48_22.log
Status : SUCCESS

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:40:27
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_40_13.log
Status : FAILED

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:53:43
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_53_01.log
Status : SUCCESS

exit

Repeat this operation on all the faulty databases.

8) Patching the storage

Patching the storage actually means patching the NVMe disks. This is not always mandatory, as your disks may already have the latest firmware. So, if needed, apply the patch:

odacli update-storage -v 12.2.1.3.0
{
"jobId" : "4a221df4-8c85-4f34-aa7f-e014cdb751f7",
"status" : "Created",
"message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
"reports" : [ ],
"createTimestamp" : "May 29, 2018 13:27:41 PM CEST",
"resourceList" : [ ],
"description" : "Storage Firmware Patching",
"updatedTime" : "May 29, 2018 13:27:41 PM CEST"
}

odacli describe-job -i "4a221df4-8c85-4f34-aa7f-e014cdb751f7"

Job details
----------------------------------------------------------------
ID:  4a221df4-8c85-4f34-aa7f-e014cdb751f7
Description:  Storage Firmware Patching
Status:  Success
Created:  May 29, 2018 1:27:41 PM CEST
Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           May 29, 2018 1:27:41 PM CEST        May 29, 2018 1:27:45 PM CEST        Success
Applying Firmware Expander Patches       May 29, 2018 1:27:45 PM CEST        May 29, 2018 1:27:52 PM CEST        Success
Applying Firmware Controller Patches     May 29, 2018 1:27:52 PM CEST        May 29, 2018 1:28:51 PM CEST        Success
preRebootNode Actions                    May 29, 2018 1:28:51 PM CEST        May 29, 2018 1:28:51 PM CEST        Success
Reboot Ilom                              May 29, 2018 1:28:51 PM CEST        May 29, 2018 1:28:51 PM CEST        Success

Beware of the reboot of the server a few minutes after the patching is finished! And check the component versions after the reboot.

9) Optional: deploy the latest db clone files

If you’ll never deploy a new dbhome, this step is not necessary. If you will, or simply if you don’t know whether you’ll have to do that later, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level, for example:

cd /opt/patch
unzip p23494992_122130_Linux-x86-64.zip
Archive:  p23494992_122130_Linux-x86-64.zip
 extracting: odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip
  inflating: README.txt

odacli update-image --image-files odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip
Unpacking will take some time,  Please wait...
Unzipping odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip

10) Control the version of the components

Now the patching is done. It took about 2 hours, not so bad for all these updates. Finally, do a last check of the components to see if everything is fine:

odacli describe-component
System Version
---------------
12.2.1.3.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.3.0            up-to-date
GI                                        12.2.0.1.180116       up-to-date
DB {
[ OraDB12102_home1 ]                      12.1.0.2.180116       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180116       up-to-date
}
DCSAGENT                                  18.1.3.0.0            up-to-date
ILOM                                      4.0.0.24.r121140      up-to-date
BIOS                                      39090000              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYAGR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

Everything should be up-to-date now, until the next patch :-)

 

The article Patching ODA lite to 12.2.1.3.0 appeared first on Blog dbi services.

Dynamic Process, Conditions and Scope

Jan Kettenis - Thu, 2018-05-31 08:56

In Oracle Integration Cloud's Dynamic Processes, activation/termination conditions can be based on case events. These events are related to the scope of the components they belong to, which implies some restrictions. Below I explain how this works, and how to work around these restrictions.

A Dynamic Process or Case (as I will call it in this article) in the Oracle Integration Cloud consists of four component types: the Case itself, Stages (phases), Activities, and Milestones. An Activity or Milestone is either in a particular Stage (in the picture below Activities A to H are), or global (Activities X and Y). Cases, Processes, Stages, Activities and Milestones cannot be nested (but a Case can initiate a sub-Case via an Activity, which I will discuss another time).



Except for the case itself, all other components can explicitly be activated/enabled or terminated/completed based on conditions. For example in the dynamic process above Milestone 1 is activated once Activity A is completed, and Stage 2 is to be activated once Stage 1 is completed.

A Stage implicitly completes when all work in that stage is done (i.e. all Activities), and a Case implicitly completes when all work in the case is done. Currently the status of a Case cannot be explicitly set using conditions, but I would expect this to become possible in some future version. In the meantime there is a REST API that can be used to close or complete a case.

There are two types of conditions for explicit activation/termination:
  • (case) Events, for example completion of an activity
  • (case) Data Driven, for example "status" field gets value "started"
Events and Data Driven can also be used in combination, for example Activity B is only activated when Activity A is completed (event) AND some "status" data field has value "approve" (data driven).

The scope of an Event is its container, meaning:
  • A Stage can only be activated or terminated by a condition based on an Event concerning another Stage or a Global Activity.
  • An Activity can only be activated or terminated by a condition based on an Event concerning another Activity or Milestone in the same Stage.
  • A Milestone can only be completed by a condition based on an Event concerning an Activity or another Milestone in the same Stage.
  • A Global Activity can only be enabled or terminated by a condition based on an Event concerning a Stage, a Global Milestone, or another Global Activity.
  • A Global Milestone can only be enabled or terminated by a condition based on an Event concerning a Stage, a Global Activity, or another Global Milestone.
I expect that in practice most conditions will be based on Events (so far that has been the case for me), and in most situations the scope of these events will impose no limit. However, there are situations where you will need a work-around.

Let's assume that, in the example, Stage 2 is only to be activated when Milestone 1 is completed; otherwise Stage 2 is to be skipped and the case should go directly to Stage 3. Because of the way events are limited by their scope, you cannot create a condition for Stage 2 to be skipped based on the completion of Milestone 1 (which is in Stage 1 and therefore not visible outside it).

The work-around is to use a Data Driven condition instead. You can for example have a "metaData.status" field that you can set to something like "skip phase 2" and use that instead.

In general, it is probably always a good idea to let your case have some complex data element, for example called "metaData", consisting of fields like "dateStarted" and "status", which you fill out via the activities and which, if needed, can be used in conditions everywhere.

Min/Max upgrade

Jonathan Lewis - Thu, 2018-05-31 08:13

Here’s a nice little optimizer enhancement that appeared in 12.2 to make min/max range scans (and full scans) available in more circumstances. Rather than talk through it, here’s a little demonstration:

rem
rem     Script:         122_minmax.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1        Good path
rem             12.1.0.2        Bad path

create table pt1 (
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
)
nologging
partition by hash (object_id) partitions 4
as
select
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
from
        (select * from all_objects),
        (select rownum n1 from dual connect by level <= 10)  -- > comment to avoid format wordpress issue
;

alter table pt1 modify(status not null);

execute dbms_stats.gather_table_stats(null,'pt1',granularity=>'ALL',method_opt=>'for all columns size 1')

create index pt1_i1 on pt1(status, namespace) nologging local;

alter session set statistics_level = all;
set serveroutput off
set linesize 156
set pagesize 60
set trimspool on

select  min(status) from pt1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition'));

select  min(namespace) from pt1 where status = 'INVALID';
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition'));

select  min(namespace) from pt1 where status = (select min(status) from pt1);
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition'));

The basic “min/max” optimisation allows Oracle to avoid a massive sort aggregate – Oracle doesn’t need to acquire a lot of data and sort it when it knows that the “left hand” end of an index holds the low values and the “right hand” end the high values. So, for example, in the first query above the optimizer could simply walk down the index branches to the left hand leaf and look at the single lowest entry in the leaf block to determine the lowest value for status … if the index had been a global index.

Things get a little messy, though, when the index is locally partitioned, your query isn’t about the partition key, and there’s no suitable global index. Once upon a time (IIRC) Oracle would simply have to do an index fast full scan across all index partitions to handle such a query, but some time ago it got a lot cleverer: it was enhanced to do a min/max scan on each partition in turn, getting one value per partition very efficiently, then aggregating across those values to find the global minimum.

Here are the three execution plans (with rowsource execution stats pulled from memory) taken from 12.1.0.2 for the queries above:


-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |      1 |        |     9 (100)|       |       |      1 |00:00:00.01 |      12 |
|   1 |  SORT AGGREGATE             |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   2 |   PARTITION HASH ALL        |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   3 |    INDEX FULL SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
-----------------------------------------------------------------------------------------------------------------------------


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |      1 |        |     9 (100)|       |       |      1 |00:00:00.01 |      12 |
|   1 |  SORT AGGREGATE               |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   2 |   PARTITION HASH ALL          |        |      1 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      12 |
|   3 |    FIRST ROW                  |        |      4 |      1 |     9   (0)|       |       |      1 |00:00:00.01 |      12 |
|*  4 |     INDEX RANGE SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      12 |
-------------------------------------------------------------------------------------------------------------------------------


-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |      1 |        |   337 (100)|       |       |      1 |00:00:00.07 |    2402 |   2242 |
|   1 |  SORT AGGREGATE                |        |      1 |      1 |            |       |       |      1 |00:00:00.07 |    2402 |   2242 |
|   2 |   PARTITION HASH ALL           |        |      1 |    422K|   328  (10)|     1 |     4 |     10 |00:00:00.07 |    2402 |   2242 |
|*  3 |    INDEX FAST FULL SCAN        | PT1_I1 |      4 |    422K|   328  (10)|     1 |     4 |     10 |00:00:00.07 |    2402 |   2242 |
|   4 |     SORT AGGREGATE             |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |      0 |
|   5 |      PARTITION HASH ALL        |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |      0 |
|   6 |       INDEX FULL SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |      0 |
-----------------------------------------------------------------------------------------------------------------------------------------

In the first plan Oracle has done an “index full scan (min/max)” across each of the four partitions in turn to return one row very cheaply from each, then aggregated to find the overall minimum.

In the second plan Oracle has done an “index range scan (min/max)” in exactly the same way, since it was able to find the start point in the index for the status ‘INVALID’ very efficiently.

In the third plan Oracle has been able to find the minimum value for the status (‘INVALID’) very efficiently in the subquery, and has passed that single value up to the main query, which has then used a brute force approach to search the whole of every partition of the index for every occurrence (all 10 of them) of the value ‘INVALID’ and then aggregated them to find the minimum namespace. Despite “knowing”, by the time the main query runs, that there will be a single value to probe for the status, the optimizer has not anticipated the fact that the final query will effectively become the same as the preceding one. As a result we’ve read 2,242 data blocks into the cache.

Turn, then, to the execution plan from 12.2.0.1 for this last query:


---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |        |      1 |        |     9 (100)|       |       |      1 |00:00:00.01 |      24 |
|   1 |  SORT AGGREGATE                 |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      24 |
|   2 |   PARTITION HASH ALL            |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|   3 |    FIRST ROW                    |        |      4 |      1 |     9   (0)|       |       |      4 |00:00:00.01 |      24 |
|*  4 |     INDEX RANGE SCAN (MIN/MAX)  | PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|   5 |      SORT AGGREGATE             |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   6 |       PARTITION HASH ALL        |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   7 |        INDEX FULL SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
---------------------------------------------------------------------------------------------------------------------------------

In 12.2 you can see that the main query is now doing an “index range scan (min/max)” on each index partition in turn, based on the incoming (though unknown at parse time) single value from the subquery. As a result the total work done is a mere 24 buffer visits.

There have been a couple of occasions in the past where I’ve had to write some PL/SQL to work around little details like this. It’s nice to know simple tables and partitioned tables with local indexes can now behave the same way. I also wonder whether there may be sites that could drop (or drop columns from, or make local) some indexes that they’ve previously created to handle queries of the “most recent occurrence” type.

If, for any reason, you need to disable this enhancement, it’s controlled by fix_control (v$system_fix_control) “18915345 Allow MIN/MAX optimization for pred having single row subquery” which can be set in the startup file, at the system level, or in the session.
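As a sketch of checking and disabling it at session level (verify the behaviour on your own version before relying on it):

select bugno, value, description
from   v$session_fix_control
where  bugno = 18915345;

alter session set "_fix_control" = '18915345:0';    -- 0 = off, 1 = on (the default)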

Update

Checking MoS for the bug number, I found that the limitation had been reported for 11.2.0.3, with “Fixed in product version” reported as 12.2; but there are patches for various releases of 11.2.0.4, though none yet for 12.1.0.2 – but if you think you need it you can always try raising an SR.

 

Introducing SQL Server on Kubernetes

Yann Neuhaus - Thu, 2018-05-31 06:50

After spending some time with Docker Swarm, let’s introduce SQL Server on Kubernetes (aka K8s). Why another container orchestrator? Well, because Microsoft gives a strong focus to Kubernetes in their documentation and their events, and because K8s is probably one of the most popular orchestration tools in the IT industry. By the way, I like to refer to the Portworx Annual Container Adoption Survey to get a picture of container trends over the years, and we may notice there is no clear winner yet among orchestration tools between Swarm and K8s. In the 2017 survey, another interesting point was the persistent storage challenge, which topped the list of adoption challenges. I’m looking forward to the next report on this point because, as you probably guessed, database containers rely mainly on it.

Anyway, as an IT services company, it seems justifiable to add K8s to our to-do list of container orchestrators :)


First of all, let’s say this blog post doesn’t aim to compare Docker Swarm and K8s. Each platform has pros and cons, and you can read a lot about them on the internet. I will rather share some thoughts about deploying our dbi services docker image on this platform. Indeed, since last year we have mainly worked on our SQL Server docker image based on Docker and Docker Swarm architectures, and it may be interesting to see if we can go the same way with K8s.

But before deploying our custom image we need to install a K8s infrastructure. From an installation perspective, a K8s cluster is likely harder to set up than Docker Swarm. This time, rather than using my own lab environment, I will use both the Azure container and Azure container registry services to provision an operational K8s service. I just want to focus here on deploying my image and getting some experience interacting with K8s. The Microsoft procedure is well documented, so there is no real added value in duplicating the installation steps. Because we operate on Azure, I will use a lot of az cli and kubectl commands to deploy and manage my K8s service. Here is some important information concerning my infrastructure:

I first installed and configured a private registry through the Azure Container Registry service in order to push my custom Docker image for SQL Server 2017 on Linux. Obviously, this step may be optional depending on your context. My custom image is named dbi_linux_sql2017.

[dab@DBI-LT-DAB:#]> az acr repository list --name dbik8registry --output table
Result
------------------
dbi_linux_sql2017
mssql-server-linux


Then I installed my K8s cluster, which includes 2 nodes. This is likely not a recommended scenario for production, but it fits my needs for the moment. I will probably scale the architecture out for future tests.
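
For completeness, a hedged sketch of the az cli step that can provision such a cluster (the resource group and cluster names are taken from later commands in this post; sizing flags are left to their defaults, an assumption on my side):

az aks create --resource-group k8s-rg --name dbik8scluster --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group k8s-rg --name dbik8scluster

With the credentials merged into the local kubeconfig, kubectl confirms the control plane components and the nodes: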

[dab@DBI-LT-DAB:#]> kubectl cluster-info
Kubernetes master is running at https://dbik8sclus-k8s-rg-913528-...
Heapster is running at https://dbik8sclus-k8s-rg-913528-...
KubeDNS is running at https://dbik8sclus-k8s-rg-913528-...
kubernetes-dashboard is running at https://dbik8sclus-k8s-rg-913528-...
…
[dab@DBI-LT-DAB:#]> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://dbik8sclus-k8s-rg-913528-3eb7146d.hcp.westeurope.azmk8s.io:443
  name: dbik8scluster
contexts:
- context:
    cluster: dbik8scluster
    user: clusterUser_k8s-rg_dbik8scluster
  name: dbik8scluster
current-context: dbik8scluster
…

[dab@DBI-LT-DAB:#]> kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-78763348-0   Ready     agent     6h        v1.9.6
aks-nodepool1-78763348-1   Ready     agent     6h        v1.9.6


From an Azure perspective, my K8s cluster is composed of several resources in a dedicated resource group with virtual machines, disks, network interfaces, availability sets and a K8s load balancer reachable from a public IP address.

[Image: Azure resource group view of the K8s cluster resources]

Finally, I granted my K8s cluster sufficient permissions to access my private Docker registry (Reader role).

[dab@DBI-LT-DAB:#]> $CLIENT_ID=$(az aks show --resource-group k8s-rg --name dbik8scluster --query "servicePrincipalProfile.clientId" --output tsv)
[dab@DBI-LT-DAB:#]> $ACR_ID=$(az acr show --name dbik8registry --resource-group k8s-rg --query "id" --output tsv)

[dab@DBI-LT-DAB:#]>az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID

[dab@DBI-LT-DAB:#]> kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-s94vc   kubernetes.io/service-account-token   3         6h


Similar to Docker Swarm, we may rely on secrets to protect the SQL Server sa password. So let’s take advantage of them!

[dab@DBI-LT-DAB:#]> kubectl create secret generic mssql --from-literal=SA_PASSWORD="xxxxx"
[dab@DBI-LT-DAB:#]> kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-s94vc   kubernetes.io/service-account-token   3         6h
mssql                 Opaque                                1         6h
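
If you want to double-check the stored value, here is a hedged one-liner (it prints the password in clear text, so lab use only; it also assumes a bash-like shell rather than the Windows prompt used elsewhere in this post):

kubectl get secret mssql -o jsonpath='{.data.SA_PASSWORD}' | base64 --decode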


At this stage, before deploying my SQL Server application, let’s introduce some important K8s concepts we should be familiar with as database administrators.

  • Pod

Referring to the K8s documentation, a pod is a logical concept that represents one or more application containers with some shared resources, such as shared storage and networking (including a unique cluster IP address), and metadata about each container image (image version, exposed ports, etc.).

Containers in the same pod are always co-located and co-scheduled, and run in a shared context on the same node. Docker Swarm doesn’t offer such a capability: as far as I know, by default the tasks of a service are spread across the cluster and there is no really easy way to achieve the same concept as a K8s pod.

To simplify, a K8s pod is a group of containers that are deployed together on the same host. In my SQL Server deployment with only one container, “pod” may be read as “container” here, but in a real production scenario SQL Server would likely be only one part of a K8s pod.

[Image: K8s pod concept – containers sharing storage, networking and context on the same node]

We can illustrate what was said previously by using the pod-related kubectl commands. Here is the status of the pod related to my SQL Server deployment.

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
mssql-deployment-5845f974c6-xx9jv   1/1       Running   0          3h

[dab@DBI-LT-DAB:#]> kubectl describe pod mssql-deployment-5845f974c6-xx9jv
Name:           mssql-deployment-5845f974c6-xx9jv
Namespace:      default
Node:           aks-nodepool1-78763348-0/10.240.0.4
Start Time:     Wed, 30 May 2018 19:16:46 +0200
Labels:         app=mssql
                pod-template-hash=1401953072
Annotations:    <none>
Status:         Running
IP:             10.244.1.13
Controlled By:  ReplicaSet/mssql-deployment-5845f974c6
Containers:
  mssql:
    Container ID:   docker://b71ba9ac3c9fa324d8ff9ffa8ec24015a676a940f4d2b64cbb85b9de8ce1e227
    Image:          dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
    Image ID:       docker-pullable://dbik8registry.azurecr.io/dbi_linux_sql2017@sha256:5b9035c51ae2fd4c665f957da2ab89432b255db0d60d5cf63d3405b22a36ebc1
    Port:           1433/TCP
    State:          Running
      Started:      Wed, 30 May 2018 19:17:22 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:        Y
      MSSQL_SA_PASSWORD:  xxxxxx
      DMK:                Y
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s94vc (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
….


  • ReplicaSet

A ReplicaSet is a structure that enables you to easily create multiple pods and then make sure that this number of pods always exists. If a pod crashes, the ReplicaSet replaces it. It also provides scaling capabilities, as we also get from Docker Swarm – see the hedged sketch just below.
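
A hedged illustration of that scaling capability, using the deployment created later in this post (scaling a SQL Server pod that shares a single persistent volume would make no sense in production; this only shows the ReplicaSet mechanics):

kubectl scale deployment mssql-deployment --replicas=3
kubectl get replicasets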

  • Service

From the K8s documentation, a service is an abstraction which defines a logical set of pods and a policy by which to access them – sometimes called a micro-service. The set of pods targeted by a service is (usually) determined by a label selector. While there are some differences under the hood, we find the same concepts in Docker Swarm from a deployment perspective.

  • Virtual IP and service proxies

Referring again to the K8s documentation, every node in a Kubernetes cluster runs a kube-proxy that is responsible for implementing a form of virtual IP for services. This is comparable to the ingress network that is part of the Docker Swarm architecture, with its overlay networks and routing mesh capabilities.

In my case, as described previously, I used an external load balancer with an EXTERNAL-IP configured to access my SQL Server container from the internet (xx.xxx.xxx.xx is my masked public IP, as you have already guessed).

[dab@DBI-LT-DAB:#]> kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes         ClusterIP      10.0.0.1       <none>          443/TCP          7h
mssql-deployment   LoadBalancer   10.0.200.170   xx.xxx.xxx.xx   1433:30980/TCP   4h


From an Azure perspective, the above output corresponds to the Kubernetes load balancer and public IP address resources.

[Image: Azure Kubernetes load balancer resource]

[Image: Azure public IP address resource]

Once again, my intention is not to compare Docker Swarm and K8s at all, but just to highlight that if you’re already comfortable with Docker Swarm, the move to K8s is not as brutal as one may suppose from a high-level point of view.

OK, let’s now start the deployment phase. As said previously, my private container registry already contains my custom SQL Server image. I just had to tag the image on my local machine and push it to the registry, as I would do with any other remote Docker registry (a hedged sketch of those commands follows the output below).

[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
dbik8registry.azurecr.io/dbi_linux_sql2017    CU4                 3c6bafb33a5c        17 hours ago        1.42GB
dbi/dbi_linux_sql2017                         CU4                 3c6bafb33a5c        17 hours ago        1.42GB

[dab@DBI-LT-DAB:#]> az acr repository list --name dbik8registry --output table
Result
------------------
dbi_linux_sql2017
mssql-server-linux

[dab@DBI-LT-DAB:#]> az acr repository show-tags --name dbik8registry --repository mssql-server-linux --output table
Result
--------
2017-CU4
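
For reference, here is a hedged sketch of the tag-and-push commands summarized above (the local image name comes from the docker images output, the target repository from the registry listing):

az acr login --name dbik8registry
docker tag dbi/dbi_linux_sql2017:CU4 dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
docker push dbik8registry.azurecr.io/dbi_linux_sql2017:CU4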


In addition, I used persistent storage based on an Azure managed disk in order to guarantee the persistence of my SQL Server database files.

[dab@DBI-LT-DAB:#]> kubectl describe pvc mssql-data
Name:          mssql-data
Namespace:     default
StorageClass:  azure-disk
Status:        Bound
Volume:        pvc-32a42393-6402-11e8-885d-f2170a386bd7
…
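
The claim above implies a storage class backed by Azure managed disks. Here is a hedged reconstruction of the two manifests (the names match the describe output; the requested size and the storage account type are assumptions on my side):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: azure-disk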


Concerning the image itself, we use some custom parameters both to create a dedicated user for the applications that will run on top of the SQL Server instance and to enable the installation of the DMK maintenance module for SQL Server at container startup. We have other customization topics, but for this blog post these will be sufficient to check what we want to test.

Here is my deployment file. Compared to a Docker Swarm deployment file, I would say the manifest is more complex with K8s (that’s at least my feeling).

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        - name: DMK
          value: "Y"
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer


Let’s deploy and spin up our SQL Server application:

[dab@DBI-LT-DAB:#]> kubectl apply -f T:\dbi_dbaas_azure\sqlk8sazuredeployment.yaml
deployment "mssql-deployment" created
service "mssql-deployment" created


The pod and the service are created. Let’s take a look at some information about them. Deployment and pod are OK. The last commands show the internal IP used to connect to the SQL Server pod, as well as the external/public IP address that corresponds to the ingress load balancer used to connect from outside the Azure internal network. We also get a picture of the exposed ports.

[dab@DBI-LT-DAB:#]> kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mssql-deployment   1         1         1            1           7m

[dab@DBI-LT-DAB:#]> kubectl get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
mssql-deployment-8c67fdccc-pbg6d   1/1       Running   0          12h       10.244.1.16   aks-nodepool1-78763348-0

[dab@DBI-LT-DAB:#]> kubectl get replicasets
NAME                         DESIRED   CURRENT   READY     AGE
mssql-deployment-8c67fdccc   1         1         1         12h

[dab@DBI-LT-DAB:#]> kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes         ClusterIP      10.0.0.1       <none>          443/TCP          9h
mssql-deployment   LoadBalancer   10.0.134.101   xxx.xxx.xxx.xxx   1433:31569/TCP   7m

[dab@DBI-LT-DAB:#]> kubectl describe service mssql-deployment
Name:                     mssql-deployment
Namespace:                default
Labels:                   <none>
…
Selector:                 app=mssql
Type:                     LoadBalancer
IP:                       10.0.134.101
LoadBalancer Ingress:     xxx.xxx.xxx.xxx
Port:                     <unset>  1433/TCP
TargetPort:               1433/TCP
NodePort:                 <unset>  31569/TCP
Endpoints:                10.244.1.16:1433
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
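
Note that right after the deployment the EXTERNAL-IP column may show <pending> while Azure provisions the load balancer; a hedged way to wait for the address is:

kubectl get service mssql-deployment --watch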


Let’s now try to connect to the fresh SQL Server instance from my remote laptop:

[Image: connection to the SQL Server instance running in the container]

Great job! My container includes all my custom stuff, such as the dbi_tools database and the dedicated maintenance jobs related to our DMK maintenance tool. We may also notice the dbi user created during container startup.
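
For readers without the screenshot, here is a hedged command-line equivalent (the public IP stays masked as in the rest of the post; sqlcmd prompts for the sa password when -P is omitted):

sqlcmd -S xx.xxx.xxx.xx,1433 -U sa -Q "SELECT name FROM sys.databases;"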

Just out of curiosity, let’s have a look at the pod log (or rather the container log, because there is only one container in the pod in my case). The log includes the SQL Server startup log; I kept only some interesting samples here that identify the custom actions we implemented during container startup.

[dab@DBI-LT-DAB:#]> kubectl get po -a
NAME                               READY     STATUS    RESTARTS   AGE
mssql-deployment-8c67fdccc-pk6sm   1/1       Running   0          21m

[dab@DBI-LT-DAB:#]> kubectl logs mssql-deployment-8c67fdccc-pk6sm
…
======= 2018-05-30 21:04:59 Creating /u00 folder hierarchy ========
cat: /config.log: No such file or directory
======= 2018-05-30 21:04:59 Creating /u01 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u02 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u03 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u98 folder hierarchy ========
======= 2018-05-30 21:04:59 Linking binaries and configuration files to new FHS ========
======= 2018-05-30 21:04:59 Creating MSFA OK =======
….
2018-05-30 21:05:13.85 spid22s     The default language (LCID 1033) has been set for engine and full-text services.
======= 2018-05-30 21:05:29 MSSQL SERVER STARTED ========
======= 2018-05-30 21:05:29 Configuring tempdb database files placement =======
…
2018-05-30 21:06:05.16 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 1500. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'max server memory (MB)' changed from 2147483647 to 1500. Run the RECONFIGURE statement to install.
======= 2018-05-30 21:06:05 Configuring max server memory OK =======
======= 2018-05-30 21:06:05 Creating login dbi =======
======= 2018-05-30 21:06:05 Creating login dbi OK =======
======= 2018-05-30 21:06:05 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
2018-05-30 21:06:12.47 spid51      Setting database option MULTI_USER to ON for database 'dbi_tools'.
Update complete.
Changed database context to 'dbi_tools'.

(1 rows affected)

(1 rows affected)
======= 2018-05-30 21:06:12 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======
2018-05-30 21:10:09.20 spid51      Using 'dbghelp.dll' version '4.0.5'
2018-05-30 21:10:19.76 spid51      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-05-30 21:10:19.87 spid51      Using 'xplog70.dll' version '2017.140.3022' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.


To finish this blog post properly, let’s simulate a pod failure to check the K8s behavior with our SQL Server container.

[dab@DBI-LT-DAB:#]> kubectl delete pod mssql-deployment-8c67fdccc-pk6sm
pod "mssql-deployment-8c67fdccc-pk6sm" deleted

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                               READY     STATUS        RESTARTS   AGE
mssql-deployment-8c67fdccc-jrdgg   1/1       Running       0          5s
mssql-deployment-8c67fdccc-pk6sm   1/1       Terminating   0          26m

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
mssql-deployment-8c67fdccc-jrdgg   1/1       Running   0          2m


As expected, the ReplicaSet does its job by re-creating the pod to recover my SQL Server instance and by re-attaching the persistent storage. We can check that we can still connect to the instance through the load balancer IP address without running into any corruption issue – a hedged verification sketch follows.
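
A hedged way to verify this from inside the cluster is to list the database files the new pod sees on the re-attached managed disk:

kubectl exec -it mssql-deployment-8c67fdccc-jrdgg -- ls /var/opt/mssql/data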

[Image: successful connection to the recovered SQL Server instance]

To conclude, I would say that moving our custom SQL Server Docker image was not as hard as I expected. Obviously, there are some differences between the two orchestrators, but from an application point of view they don’t make a big difference. From an administration perspective, I agree the story is probably not the same :)

What about K8s from a development perspective? You may say that you don’t own such an Azure environment, but the good news is that you can use Minikube, the single-node version of Kubernetes mainly designed for local development – see the hedged sketch below. I will probably blog about it in the future. Stay tuned!
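
A hedged sketch of that local path (the Azure-specific storage class and the LoadBalancer service type used in the manifest above would need adapting to Minikube):

minikube start
kubectl apply -f sqlk8sazuredeployment.yaml
minikube service mssql-deployment --url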


This article Introducing SQL Server on Kubernetes appeared first on the dbi services blog.
