Yann Neuhaus

dbi services technical blog

SQL Server 2016: New SQL PowerShell CMDLETs for ErrorLog

Thu, 2016-07-14 00:54

With the latest release of SQL Server Management Studio (SSMS) 2016 (13.0.15500.91), downloadable here, new CMDLETs were introduced for Always Encrypted, SQL Agent and the ErrorLog.
SSMS_update01

In this article, I will present the two new CMDLETs for the error logs:

  • Get-SqlErrorLog: Retrieves the SQL Server Logs.
  • Set-SqlErrorLog: Sets or resets the maximum number of error log files before recycling.

My first step is to search for all commands containing “Sql”:

Get-Command | Select-Object Name | Where-Object {$_.Name -like "*Sql*"}

PowerShell_ErrorLog_01

As you can see, there are a lot of commands. I filter on SqlErrorLog to get the details of both commands:

Get-Command | Where-Object {$_.Name -like "*SqlErrorLog*"} | Format-List *

PowerShell_ErrorLog_02

To get the details of each command individually, I use these commands:

Get-Command | Where-Object {$_.Name -eq "Get-SqlErrorLog"} | Format-List *
Get-Command | Where-Object {$_.Name -eq "Set-SqlErrorLog"} | Format-List *

 

CMDLET Get-SqlErrorLog

For example, here is a simple query to retrieve all backup entries:

Get-SqlErrorLog | Where-Object { $_.text -like "*BACKUP*"} | Out-GridView

PowerShell_ErrorLog_03

You can do the same for failed logins:

Get-SqlErrorLog | Where-Object { $_.text -like "*Failed*"} | Out-GridView

PowerShell_ErrorLog_04

Or directly find all errors between two dates with the -Before and -After parameters:

Get-SqlErrorLog -Before "2016/06/30" -After "2016/06/28" | Where-Object { $_.text -like "*Error:*"} | Out-GridView

PowerShell_ErrorLog_05
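For comparison, the same information can be retrieved server-side with the sp_readerrorlog procedure. Here is a minimal T-SQL sketch filtering the current error log on the same “BACKUP” string (the filter value is just an example, not part of the cmdlet):

-- read the current SQL Server error log (log 0, type 1 = error log) and keep only lines containing 'BACKUP'
EXEC sp_readerrorlog 0, 1, N'BACKUP';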

CMDLET Set-SqlErrorLog

It is very easy to configure the number of errorlog files with this command:

Set-SqlErrorLog -MaxLogCount [6-99]

PowerShell_ErrorLog_06

After the command:
PowerShell_ErrorLog_07

For fun, I tried to enter a value of 1 and a value of 100 to see if an error message appears:
PowerShell_ErrorLog_08

It is very nice to have these 2 new CMDLETs in SQL PowerShell ;-)

 

The post SQL Server 2016: New SQL PowerShell CMDLETs for ErrorLog appeared first on Blog dbi services.

Oracle Public Cloud patch conflict

Wed, 2016-07-13 13:20

This morning I wanted to test a patch (18633374) in the Oracle Cloud Service. The DBaaS was created as an ‘Enterprise Edition Extreme Performance’, which comes with all options, including the multitenant option. I applied my patch. My test required creating a new tablespace, but it failed with: ORA-65010: maximum number of pluggable databases created

This is the kind of message we get when we try to use a feature that is not allowed in Standard Edition. But I was in Enterprise Edition here.
The first thing I did was to tweet a screenshot, in case someone had already encountered the issue:

This is not exactly my definition of 'EE Extreme Perf'. WTF did I do? pic.twitter.com/y34JoX9mY3

— Franck Pachot (@FranckPachot) July 13, 2016

And the second thing was to try to reproduce the issue, because it's a test environment where I did things quickly and I don't remember everything that was done.
I create a new service in EE Extreme Performance:

Connected to:
Oracle Database 12c EE Extreme Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options

check that I can create additional pluggable databases

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB READ WRITE NO
4 DEMOS READ WRITE NO
SQL> create pluggable database PDBNEW admin user admin identified by admin;
Pluggable database created.
SQL> create pluggable database PDBNEW1 admin user admin identified by admin;
Pluggable database created.

I tried to do the same as I did before (apply patch 18633374):

[oracle@CDBA 18633374]$ dbshut $ORACLE_HOME
Processing Database instance "CDB": log file /u01/app/oracle/product/12.1.0/dbhome_1/shutdown.log
$ORACLE_HOME/OPatch/opatch apply
[oracle@CDBA 18633374]$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation. All rights reserved.
 
Oracle Home : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version : 12.1.0.1.10
OUI version : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/18633374_Jul_13_2016_11_23_28/apply2016-07-13_11-23-28AM_1.log
 
Verifying environment and performing prerequisite checks...
 
Conflicts/Supersets for each patch are:
 
Patch : 18633374
 
Conflict with 23192060
Conflict details:
/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kpdbc.o
/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:krb.o
/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:krbb.o
/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:krbi.o
/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:krbabr.o
 
Following patches have conflicts: [ 18633374 23192060 ]

Yes, I remember that I had to de-install an interim patch that was there in my newly created DBaaS:


[oracle@CDBA 18633374]$ $ORACLE_HOME/OPatch/opatch lspatches
23192060;
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)

Let’s do it:


[oracle@CDBA 18633374]$ $ORACLE_HOME/OPatch/opatch rollback -id 23192060
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation. All rights reserved.
 
Oracle Home : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version : 12.1.0.1.10
OUI version : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/23192060_Jul_13_2016_11_24_49/rollback2016-07-13_11-24-49AM_1.log
 
RollbackSession rolling back interim patch '23192060' from OH '/u01/app/oracle/product/12.1.0/dbhome_1'
 
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0/dbhome_1')
 
Is the local system ready for patching? [y|n] y
User Responded with: Y
 
Patching component oracle.oracore.rsf, 12.1.0.2.0...
 
Patching component oracle.rdbms.rsf, 12.1.0.2.0...
 
Patching component oracle.rdbms, 12.1.0.2.0...
Deleting "kscs.o" from archive "/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a"
Deleting "kststqae.o" from archive "/u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a"
 
Patching component oracle.rdbms.dbscripts, 12.1.0.2.0...
RollbackSession removing interim patch '23192060' from inventory
Log file location: /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/23192060_Jul_13_2016_11_24_49/rollback2016-07-13_11-24-49AM_1.log
 
OPatch succeeded.

Then I check if I’m still able to create a new PDB:


[oracle@CDBA 18633374]$ dbstart $ORACLE_HOME
Processing Database instance "CDB": log file /u01/app/oracle/product/12.1.0/dbhome_1/startup.log

Ok, I won't detail the following problem for the moment:
ORA-01078: failure in processing system parameters
LRM-00101: unknown parameter name 'encrypt_new_tablespaces'

During my tests I removed that encrypt_new_tablespaces parameter from the spfile to continue.
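Here is a minimal sketch of how such an unknown parameter can be removed when the instance refuses to start with the spfile (the pfile path is an assumption, not the exact one I used):

create pfile='/tmp/init_CDB.ora' from spfile;
-- edit /tmp/init_CDB.ora, remove the encrypt_new_tablespaces line, then:
startup pfile='/tmp/init_CDB.ora'
create spfile from pfile='/tmp/init_CDB.ora';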


[oracle@CDBA 18633374]$ sqlplus / as sysdba
 
SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 13 11:34:57 2016
 
Copyright (c) 1982, 2014, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.1.0.2.0 - 64bit Production
 
SQL> create pluggable database PDBNEW2 admin user admin identified by admin;
create pluggable database PDBNEW2 admin user admin identified by admin
  *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Ok, issue reproduced. Interim patch 23192060 is required for EE Extreme Perf to be able to act as an Enterprise Edition.

There is no patch description, but the alert.log shows the list of bugs bundled in it:
Patch Id: 23192060
Patch Description:
Patch Apply Time:
Bugs Fixed: 19366375,19665921,19770063,21281607,21470120,21923026,23072137

A lookup in MOS gives:

19366375 – CORRECT ELAPSED TIME CALCULATION AND ADD DIAGNOSTIC FOR BUG 18920073
19665921 – ENABLE HCC FOR DBCS REGARDLESS OF EXTREME PERFORMANCE OPTION OR OTHER OPTIONS
19770063 – GET INFO ABOUT CLOUD BUNDLE INTO V$INSTANCE TABLE
21281607 – TRANSPARENTLY ENCRYPT TABLESPACE AT CREATION IN CLOUD
21470120 – CLOUD BACKPORT FOR HCC AND VERSION CHANGES
21923026 – ORA-600 [OLTP COMPRESSION SANITY CHECK]
23072137 – TDE OFFLINE ENCRYPTION SHOULD NOT BE ALLOWED CONCURRENTLY DURING MRP

Several Oracle Public Cloud specifics are here: the ‘encrypt_new_tablespaces’ parameter to apply TDE to all new user tablespaces, HCC which is possible in any EE on the Cloud, and information about the cloud edition in v$instance…

Let's check the edition now that I've de-installed the patch that contains the fix for 19770063:

SQL> select edition from v$instance;
 
EDITION
-------
UNKNOWN

This value comes from x$ksuxsinst.ksuxsedition, which is 0 here, and the view decodes only the value 8 as Enterprise Edition:

SQL> select ksuxsedition from x$ksuxsinst;
 
KSUXSEDITION
------------
0
 
SQL> select view_definition from v$fixed_view_definition where view_name='GV$INSTANCE';
 
VIEW_DEFINITION
---------------
select ks.inst_id,ksuxsins,ksuxssid,ksuxshst,ksuxsver,ksuxstim,decode(ksuxssts,0,'STARTED',1,'MOUNTED',2,'OPEN',3,'OPEN MIGRATE','UNKNOWN'),decode(ksuxsshr,0,'NO',1,'YES',2,NULL),ksuxsthr,decode(ksuxsarc,0,'STOPPED',1,'STARTED','FAILED'),decode(ksuxslsw,0,NULL,2,'ARCHIVE LOG',3,'CLEAR LOG',4,'CHECKPOINT', 5,'REDO GENERATION'),decode(ksuxsdba,0,'ALLOWED','RESTRICTED'),decode(ksuxsshp,0,'NO','YES'),decode(kvitval,0,'ACTIVE',2147483647,'SUSPENDED','INSTANCE RECOVERY'),decode(ksuxsrol,1,'PRIMARY_INSTANCE',2,'SECONDARY_INSTANCE','UNKNOWN'), decode(qui_state,0,'NORMAL',1,'QUIESCING',2,'QUIESCED','UNKNOWN'), decode(bitand(ksuxsdst, 1), 0, 'NO', 1, 'YES', 'NO'), ks.con_id, decode(ksuxsmode,2,'READ MOSTLY','REGULAR'),
decode(ksuxsedition, 2, 'PO', 4, 'SE', 8, 'EE', 16, 'XE', 32, 'CS', 40, 'CE', 'UNKNOWN'), ksuxsfam from x$ksuxsinst ks, x$kvit kv, x$quiesce qu where kvittag = 'kcbwst'

No doubt, this may lead to inaccessible EE features.

When you create a DBaaS, which includes the patch 23192060, you get the following for EE Extreme Performance:


SQL> host $ORACLE_HOME/OPatch/opatch lspatches
23192060;
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)
 
SQL> select edition from v$instance;
 
EDITION
-------
XP
 
SQL> select ksuxsedition from x$ksuxsinst;
 
KSUXSEDITION
------------
256
 
SQL> select view_definition from v$fixed_view_definition where view_name='GV$INSTANCE';
 
VIEW_DEFINITION
---------------
select ks.inst_id,ksuxsins,ksuxssid,ksuxshst,ksuxsver,ksuxstim,decode(ksuxssts,0,'STARTED',1,'MOUNTED',2,'OPEN',3,'OPEN MIGRATE','UNKNOWN'),decode(ksuxsshr,0,'NO',1,'YES',2,NULL),ksuxsthr,decode(ksuxsarc,0,'STOPPED',1,'STARTED','FAILED'),decode(ksuxslsw,0,NULL,2,'ARCHIVE LOG',3,'CLEAR LOG',4,'CHECKPOINT', 5,'REDO GENERATION'),decode(ksuxsdba,0,'ALLOWED','RESTRICTED'),decode(ksuxsshp,0,'NO','YES'),decode(kvitval,0,'ACTIVE',2147483647,'SUSPENDED','INSTANCE RECOVERY'),decode(ksuxsrol,1,'PRIMARY_INSTANCE',2,'SECONDARY_INSTANCE','UNKNOWN'), decode(qui_state,0,'NORMAL',1,'QUIESCING',2,'QUIESCED','UNKNOWN'), decode(bitand(ksuxsdst, 1), 0, 'NO', 1, 'YES', 'NO'), ks.con_id, decode(ksuxsmode,1,'REGULAR',2,'READ MOSTLY','READ ONLY'),
decode(ksuxsedition, 2, 'PO', 4, 'SE', 8, 'EE', 16, 'XE', 32, 'CS', 64, 'CE', 128, 'HP', 256, 'XP', 'UNKNOWN'), ksuxsfam, kjidtv from x$ksuxsinst ks, x$kvit kv, x$quiesce qu, x$kjidt where kvittag = 'kcbwst'

So what?

The Oracle Public Cloud is a strange PaaS: the database is provisioned automatically, but you can break everything you want later: you are DBA, SYSDBA and even root, as in IaaS. But just because you can do everything does not mean you should do everything. The Oracle Database software has been adapted for the Cloud and requires specific patches. After each PSU, those patches are merged to be applied over the PSU. And if you need to apply a new patch which conflicts with one of them, then you should request a merge that includes the Cloud fixes.

Having different patches for Cloud and for on-premises is not very nice. If the goal is to have dev and test in the public cloud and prod on-premises, then we want the same software and the same patching procedures. But don't worry, this is because the cloud arrived after the 12.1.0.2 release. The next generation will be stabilized on the cloud first. Should we complain about ‘cloud-first’?

 

The post Oracle Public Cloud patch conflict appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: Direct seeding and backup considerations

Tue, 2016-07-12 15:27

As promised in my previous blog post, we'll now talk about the direct seeding feature and backups. We found an interesting LIMIT_CONCURRENT_BACKUPS value for the current state of the hadr_automatic_seeding_state_transition event. As I said last time, this value is meaningful by itself.

So let's try to perform a backup while the database is involved in an active seeding session. In fact, my backup was blocked (as I supposed), and here is what I found from the sys.dm_tran_locks and sys.dm_exec_requests DMVs.

SELECT	
	r.command,
	r.wait_type,
	r.wait_resource,
	DB_NAME(tl.resource_database_id) as [database_name],
	tl.resource_type,
	tl.resource_subtype,
	tl.request_mode,
	tl.request_type,
	tl.request_status,
	r.session_id as blocked_session_id,
	r.blocking_session_id
FROM sys.dm_tran_locks as tl
join sys.dm_exec_requests as r
	on tl.request_session_id = r.session_id
WHERE tl.request_session_id = <concerned session>

 

  • The blocked session (my backup)

blog 100 - AG direct seeding - 1 - blocked session

  • Blocking session (my direct seeding session)

blog 100 - AG direct seeding - 2 - blocking session

In short, my backup session is waiting on the database object with an LCK_M_U lock type that is already held by my direct seeding session. But the interesting part is that adding a database in direct seeding mode seems to be similar to performing a backup operation from a locking perspective. We also recognize characteristic wait types that occur during a backup operation (ASYNC_IO_COMPLETION, BACKUPBUFFER). So direct seeding acts as a streaming backup? Well, that sounds good!

Well, let’s continue with this other query

SELECT 
	r.command,
	r.wait_type,
	r.wait_resource,
	r.scheduler_id
from sys.dm_exec_requests as r
join sys.dm_os_schedulers as s
	on s.scheduler_id = r.scheduler_id
where r.session_id = 82
	AND s.status = 'VISIBLE ONLINE'

 

blog 100 - AG direct seeding - 3 - direct seeding internal

Do you notice the command text related to the direct seeding session? VDI_CLIENT_WORKER seems to indicate that SQL Server is using the virtual device interface internally during the seeding operation (the VDI prefix stands for Virtual Device Interface). If SQL Server uses a backup stream and the VDI interface internally during the seeding operation, it may explain why it has to limit concurrent backups in our case.

How can we check whether SQL Server uses VDI during a direct seeding operation? Well, we know that using VDI implies using sqlvdi.dll. So my first idea was to check whether the corresponding DLL is loaded as a module by the sqlservr.exe process, and the tasklist tool is a good candidate to answer our need.

blog 100 - AG direct seeding - 4 - direct seeding sqlvdi dll

The above picture seems to confirm my assumption, but this test is not perfect. Indeed, we may also argue that something else could have triggered the load of sqlvdi.dll. So my second idea was to capture a stack dump with the windbg tool while running the seeding session, and here is what I was able to see:

blog 100 - AG direct seeding - 5 - direct seeding and windbg

We recognize CHadrBuildSourceStateLimitConcurrentBackups (similar to the LIMIT_CONCURRENT_BACKUPS value from the hadr_automatic_seeding_state_transition event), which is a parameter to the function CallFunctAndResumeBuildTask(). Let's also notice the use of the sqlmin!CHadrBuildSourceStateSeeding::CreateVdiSession function, which seems to confirm that SQL Server is using a VDI session during the seeding process.

Ok, after this fun test, let's get back to something more important to keep in mind: we know that direct seeding will limit concurrent backups. This implies that our existing RPO may be impacted, especially if the seeding operation takes a very long time.
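If you want to keep an eye on how long a seeding session is likely to hold up your backups, a hedged sketch like the following, based on the sys.dm_hadr_physical_seeding_stats DMV (column names as I understand them), gives an idea of the transfer rate and the estimated completion time:

-- monitor the progress of active seeding sessions
SELECT
	local_database_name,
	role_desc,
	transferred_size_bytes / 1048576 AS transferred_mb,
	database_size_bytes / 1048576 AS database_size_mb,
	transfer_rate_bytes_per_second / 1048576 AS transfer_rate_mb_per_sec,
	start_time_utc,
	estimate_time_complete_utc
FROM sys.dm_hadr_physical_seeding_stats;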

Happy database seeding!!

 

 

 

The post SQL Server 2016 AlwaysOn: Direct seeding and backup considerations appeared first on Blog dbi services.

Set the SQL Native Client Default Port with PowerShell

Tue, 2016-07-12 08:06

I wrote an article about “SQL Server 2012: Configuring your TCP Port via PowerShell” and I received a question from PaulJ:
“How do you set the port for the SQL Native Client 11.0 Configuration (32bit) – as seen in the SQL Configuration Manager?”

This is a very good question, and I decided to write this blog post as an answer.
The first step is always the same: the initialization of my object:

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null

$wmi = New-Object ("Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer")

The second step is used to find out which client protocol the setting belongs to.
The class “Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer” has a property “ClientProtocols”, as you can see on the MSDN web page:
tcpnativeclient01
I display the name and the protocol Properties with this command:

$wmi.ClientProtocols | Select displayname -ExpandProperty ProtocolProperties

tcpnativeclient02
As you can see, I have 4 client protocols (Named Pipes, default port, KEEPALIVE and KEEPALIVEINTERVAL).
The next step is to select the default port:

$tcp_list = $wmi.ClientProtocols  | Where-Object {$_.displayname -eq "TCP/IP"}
$default_tcp = $tcp_list.ProtocolProperties | Where-Object {$_.Name -eq "Default Port"}
$default_tcp

tcpnativeclient03
As you can see, the default client port is set to 1433. Now I will set another value for this port:

$default_tcp.value=50100

Note: The port has a System.Int32 type
Validate this change with an Alter:

$tcp_list.alter()

To finish, do not forget to restart your services to activate the port change:

$sql_service = ($wmi.Services | Where-Object { $_.Type -eq "SqlServer" })
$sql_service.alter()
$sql_service.stop()
$sql_service.start()

tcpnativeclient04
Et voilà! The default port for the client protocol is changed!

 

The post Set the SQL Native Client Default Port with PowerShell appeared first on Blog dbi services.

Nulls in composite keys

Tue, 2016-07-12 06:36

Comparisons with NULL can be misleading, and it's even worse for unique constraint validation. Having partial nulls in a composite key can be tricky because the ANSI SQL specification is not very easy to understand, and the implementation can depend on the RDBMS. Here is an example with a composite unique key and a foreign key on Oracle.

Unique constraint

I create a table with a composite unique constraint:
SQL> create table TABLE1 (a char, b char, unique(a,b));
Table TABLE1 created.

I can insert a row with a='X' and b='X':
SQL> insert into TABLE1 values ('X','X');
1 row inserted.

I cannot insert the same row:
SQL> insert into TABLE1 values ('X','X');
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C0015464) violated

I insert another row with same value for column a but different value for column b:
SQL> insert into TABLE1 values ('X','Y');
1 row inserted.

And another row with same value for column a but a null for column b:
SQL> insert into TABLE1 values ('X',null);
1 row inserted.

However, I cannot insert the same a second time:
SQL> insert into TABLE1 values ('X',null);
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C0015464) violated

If you look at the documentation, this is documented as:
Because of the search mechanism for unique key constraints on multiple columns, you cannot have identical values in the non-null columns of a partially null composite unique key constraint.

It looks like an implementation reason (the search mechanism is the index that enforces the unique constraint). What is documented in SQL-92?
A unique constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns.

How should we interpret this? We cannot insert two ('X',null) rows because that would be two rows with the same non-null value (a='X'), and the Oracle implementation is compliant with that reading.

Or is it? We can also read the definition as the unique constraint being violated only when we find rows that have non-null values and they are the same. This is what MySQL and PostgreSQL do: they accept duplicates when there is at least one null.
This is also what I find more intuitive: I usually consider NULL as a value that is not known at insert time but that will be assigned later during the lifecycle of the row. Thus, I expect to be able to insert rows containing a null and have the constraint checked only when all columns have a value.

It is probably an implementation choice from Oracle which stores nulls as a zero-length string and then cannot have two identical entries in a unique index.

Now inserting a row where a is null and b is null:
SQL> insert into TABLE1 values (null,null);
1 row inserted.

And because that does not violate the rule whichever way we read it (the non-null values are not the same, as there are no non-null values at all here), I can insert a second one:
SQL> insert into TABLE1 values (null,null);
1 row inserted.

This is documented as
Unless a NOT NULL constraint is also defined, a null always satisfies a unique key constraint

Regarding the implementation, there is no problem because entirely null entries are not stored in the index. They are stored in bitmap indexes, but bitmap indexes cannot be used to enforce a unique constraint.

In summary, here is what can be stored on a table where (a,b) is unique but nullable:

SQL> select rownum,TABLE1.* from TABLE1;
 
ROWNUM A B
---------- - -
1 X X
2 X Y
3 X
4
5

Foreign key

Now that I have a unique key, I can reference it:
SQL> create table TABLE2 (a char, b char, foreign key(a,b) references TABLE1(a,b));
Table TABLE2 created.

Yes. You don’t need to reference the primary key. Any unique key, even with nullable columns, can be referenced.

I can insert a row where parent exists:
SQL> insert into TABLE2 values('X','X');
1 row inserted.

As I have no unique key on the child, it's a many-to-one relationship:
SQL> insert into TABLE2 values('X','X');
1 row inserted.

I also have a parent with a='X' and b='Y':
SQL> insert into TABLE2 values('X','Y');
1 row inserted.

But I have no parent with a='Y':
SQL> insert into TABLE2 values('Y','Y');
*
ERROR at line 1:
ORA-02291: integrity constraint (SYS.SYS_C0015465) violated - parent key not found

So far so good. I said that I have a many-to-one relationship, but it's a many-to-one-or-zero relationship because my columns are nullable:
SQL> insert into TABLE2 values(null,null);
1 row inserted.

So far so good. But I have a composite key with nullable columns here, and I can insert a row where a=’X’ and b is null:
SQL> insert into TABLE2 values('X',null);
1 row inserted.

But do you think that all non null parent values must exist?
SQL> insert into TABLE2 values('Y',null);
1 row inserted.

Once again, this is documented as:
If any column of a composite foreign key is null, then the non-null portions of the key do not have to match any corresponding portion of a parent key.

And this is what is specified in SQL-92:
If no <match type> was specified then, for each row R1 of the referencing table, either at least one of the values of the referencing columns in R1 shall be a null value, or the value of each referencing column in R1 shall be equal to the value of the corresponding referenced column in some row of the referenced table.
More detail about the other match types can be found in the Oracle Development Guide.

That may look strange, but, still thinking about NULLS as unknown values, you can consider that constraints cannot be validated until we know all values.

Here is what I was able to insert into my table even with no a=’Y’ in the parent:

SQL> select rownum,TABLE2.* from TABLE2;
 
ROWNUM A B
---------- - -
1 X X
2 X X
3 X Y
4 X
5
6 Y

So what?

Having nulls in a composite unique key or foreign key can be misleading, so it's better to ensure that what you define fits what you expect. It's probably better to prevent partial nulls in the foreign key (a check constraint, as sketched below, can ensure that if one column is null then all columns must be null) or to have an additional referential integrity constraint which ensures that you can set only the allowed values for a subset of columns (in our case, a table with column a as primary key that we can reference).
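For example, a check constraint like this one (the constraint name is arbitrary) prevents partially null foreign key values in TABLE2:

SQL> alter table TABLE2 add constraint TABLE2_FK_ALL_OR_NOTHING
     check ( (a is null and b is null) or (a is not null and b is not null) );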

 

The post Nulls in composite keys appeared first on Blog dbi services.

SharePlex Compare and Repair commands

Mon, 2016-07-11 13:31

Reading Franck Pachot's blog post about comparing source and target in a Dbvisit replication (http://blog.dbi-services.com/compare-source-and-target-in-a-dbvisit-replication/), I decided to write a small article about how we can do the same thing with SharePlex. Indeed, SharePlex provides built-in commands to compare and repair synchronization.
Below is our configuration file. We are just replicating the table article to article_rep. Both tables contain 2,000,000 rows.

datasource:o.SPLEXDB

#source tables target tables routing map
titi.article titi.article_rep atlasrep2@o.SPLEXSTR2

From the source let’s insert 2 rows to verify that replication is working fine.

SQL> insert into article values (2000001,'2000001_designation',2000000001);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into article values (2000002,'2000002_designation',2000000002);

1 row created.

SQL> commit;

Commit complete.


SQL> select * from article where idart in (2000001,2000002);

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000001 2000001_designation            2000000001
   2000002 2000002_designation            2000000002

From the target let’s verify that the 2 rows are replicated

SQL> select * from article_rep where idart in (2000001,2000002);

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000001 2000001_designation            2000000001
   2000002 2000002_designation            2000000002

To compare source and target in SharePlex, the command compare is used. Let’s see a demonstration.
From the source 

sp_ctrl (atlas:2104)>  compare titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

  comparing 1 of 1 objects

  compare started; job id 7

sp_ctrl (atlas:2104)>

And now let’s see the status of our replication

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 7
PID       : 8644
Host      : atlas.localdomain
Started   : 06-JUL-16 11:34:48
Job Type  : Compare
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     In Sync           N/A    2000002   100       0:31


sp_ctrl (atlas:2104)>

We see that tables are in sync. When running the compare command, SharePlex obtains a brief exclusive lock on the source table to get read consistency for its row selection.
On the target system, SharePlex obtains an exclusive lock on the target table and retains the lock for the duration of the comparison of that table.
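If you want to see that lock for yourself while a compare or repair is running, a hedged Oracle sketch like this one (using the owner and table name of our example) can be run on the concerned system:

select s.sid, s.program, l.type, l.lmode, l.ctime
from v$lock l
join v$session s on s.sid = l.sid
where l.id1 = (select object_id from dba_objects
               where owner = 'TITI' and object_name = 'ARTICLE_REP');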
Now let's make the replication go out of sync.
From the target, let's delete a row (note that we have a one-directional replication, only from source to target).

SQL> delete from article_rep where idart=2000002;

1 row deleted.

SQL> commit;

Commit complete.

SQL>

And from the source let’s insert a row

SQL> insert into article values (2000003,'2000003_designation',2000000003);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from article where idart=2000003;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000003 2000003_designation            2000000003


SQL> select count(*) from article;

  COUNT(*)
----------
   2000003

SQL>

From the target we can verify that the last row with idart=2000003 is present, but the number of rows is now different between the 2 tables. Indeed, we deleted the row with idart=2000002 on the target and this deletion is not replicated to the source. The replication is now out of sync.

SQL> select * from article_rep where idart=2000003;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000003 2000003_designation            2000000003

SQL> select count(*) from article_rep;

  COUNT(*)
----------
   2000002

Let's run the compare command again on the source and see the output now.

sp_ctrl (atlas:2104)> compare titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

We can see that the status is Out Sync (note that the comparison of 2,000,000 rows was very quick).

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 9
PID       : 12583
Host      : atlas.localdomain
Started   : 06-JUL-16 13:24:48
Job Type  : Compare
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     Out Sync          N/A    2000003   100       0:25

To bring the replication back in sync, we just have to use the repair command. This command is magic.

sp_ctrl (atlas:2104)> repair  titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

  repairing 1 of 1 objects

  repair started; job id 10

In the Log file on the target we can see that SharePlex is using ORA_HASH to compare

[oracle@atlasrep2 log]$ less declt_10-1_SPLEXDB_192.168.1.40_p9883.log
declt    2016-07-06 13:36:45.228937 9883 1352263552 NOTE: setting up HASH column info (de_table_setup_ora_hash_compare,L2915)
declt    2016-07-06 13:36:45.228971 9883 1352263552 Key Compare : Off
declt    2016-07-06 13:36:45.229009 9883 1352263552 Job ID      : 10
declt    2016-07-06 13:36:45.229017 9883 1352263552 Source Table: "TITI"."ARTICLE"
declt    2016-07-06 13:36:45.229122 9883 1352263552 Source Node : atlas.localdomain
declt    2016-07-06 13:36:45.229130 9883 1352263552 Target Table: "TITI"."ARTICLE_REP"
declt    2016-07-06 13:36:45.229135 9883 1352263552 Target Route: atlasrep2@SPLEXSTR2
declt    2016-07-06 13:36:45.229140 9883 1352263552 Batch Size  : 100000 rows
declt    2016-07-06 13:36:45.229145 9883 1352263552 Repair      : On (2 Pass)
declt    2016-07-06 13:36:45.229150 9883 1352263552 sp_declt PID: 9883
declt    2016-07-06 13:36:45.229155 9883 1352263552 Read Buffer Size : 1 mbytes
declt    2016-07-06 13:36:45.237727 9883 1352263552 Current status: ready to fetch batch,of rows; including the order by sorting.
declt    2016-07-06 13:36:45.237782 9883 1352263552 running sql statement : (de_select_prepare_to_fetch,L1384)
declt    2016-07-06 13:36:45.237795 9883 1352263552   SELECT /*+ PARALLEL (A,2) */ ROWID,ORA_HASH("IDART"||'~'||"PRIX"), ORA_HASH("DESIGNATION") FROM "TITI"."ARTICLE_REP" A ORDER BY 2, 3
declt    2016-07-06 13:36:54.064711 9883 1352263552 Current status: got batch of rows
declt    2016-07-06 13:37:04.527252 9883 1352263552 NOTE: total number of rows fetched = 2000002 (../src/deqtr/de_select.cpp,L1630)
declt    2016-07-06 13:37:04.533329 9883 1352263552 Notice: starting the second phase in the repair process
declt    2016-07-06 13:37:04.616704 9883 1352263552 Leaving de_msg_clt_preCompareMessaging1_5,L1297
declt    2016-07-06 13:37:06.659513 9883 1352263552 Leaving de_msg_clt_preCompareMessaging2,L614
declt    2016-07-06 13:37:06.659716 9883 1352263552 No new key column order, using default column order for TITI.ARTICLE_REP
declt    2016-07-06 13:37:06.721957 9883 1352263552 Current status: ready to fetch batch,of rows; including the order by sorting.
declt    2016-07-06 13:37:06.721996 9883 1352263552 running sql statement : (de_select_prepare_to_fetch,L1384)

When issuing again the show compare command, we can see that the status is now Repaired.

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 10
PID       : 13008
Host      : atlas.localdomain
Started   : 06-JUL-16 13:36:37
Job Type  : Repair
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     Repaired          N/A    2000003   100       0:29

From the source we can verify the number of rows

SQL> select count(*) from article;

  COUNT(*)
----------
   2000003

From the target we also can verify that we have the same number of rows and that the row deleted (idart=2000002) is now present.

SQL> select count(*) from article_rep;

  COUNT(*)
----------
   2000003


SQL> select * from article_rep  where idart=2000002;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000002 2000002_designation            2000000002
Conclusion

The takeaway is that SharePlex provides powerful compare/repair commands to fix out-of-sync situations. Just note that during the execution of these commands the tables are locked (briefly on the source, longer on the target), meaning that no transactions are possible during this period.
Note that the compare and repair commands can also be used for initial load (not recommended because of locks).


 

The post SharePlex Compare and Repair commands appeared first on Blog dbi services.

Oracle Multitenant feature name

Sun, 2016-07-10 15:46

We all know what the multitenant option is: a container database (CDB) with multiple user pluggable databases (PDBs). This requires Enterprise Edition plus the option. But you can use the same architecture without the option, and even in Standard Edition: a CDB with only one user PDB. It is called “single-tenant” or “lone PDB”. What do you call this new 12c architecture?

I call “multitenant architecture” the following set of features, available in all editions:

  • dictionary separation between system metadata/objects and user metadata/objects
  • unplug / plug features to transport PDBs (see the sketch after this list)
  • create a new PDB as a clone of another existing PDB
  • ability for a session to switch between PDB and CDB$ROOT explicitly with ‘ALTER SESSION’ or implicitly through metadata and object links
  • ability to read data for several containers in one query
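To illustrate the unplug/plug feature mentioned in the list above, here is a minimal sketch (the PDB name and XML manifest path are assumptions):

SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 unplug into '/tmp/PDB1.xml';
SQL> drop pluggable database PDB1 keep datafiles;
-- then, on the destination CDB:
SQL> create pluggable database PDB1 using '/tmp/PDB1.xml' nocopy;
SQL> alter pluggable database PDB1 open;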

But it may look strange to use the “multitenant” term when in Standard Edition or Enterprise Edition without the option.

Container database (CDB)

One idea would be to simply call it ‘CDB architecture’, because it’s the architecture of the container database, and because a database without this architecture is called “non-CDB”:

@franckpachot Oracle docs says „CDB architecture“ and „Multitenant Architecture“ – I prefer CDB, because there’s Non-CDB, too :)

— Markus Flechtner (@markusdba) July 8, 2016

consolidated database (CDB)

However, as very often with Oracle, the names change and the acronyms remain. Old names can be found in the software before the new name is made public to market it. We all know that CDB stands for “container database” because a CDB is a container that contains at least 3 containers (CDB$ROOT, PDB$SEED and your PDB), and more when you have the multitenant option. However, I see no mention of that name in the ORACLE_HOME/rdbms/admin scripts:
[oracle@CDB ~]$ grep -i "container database" $ORACLE_HOME/rdbms/admin/* | wc
0 0 0

It seems that the original meaning of CDB was “consolidated database”:

[oracle@CDB ~]$ grep -i "consolidated database" $ORACLE_HOME/rdbms/admin/* | wc
58 465 7276
 
[oracle@CDB ~]$ grep -i "consolidated database" $ORACLE_HOME/rdbms/admin/*.bsq
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dcore.bsq:REM Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dcore.bsq:REM SEED Pluggable Database in a Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/doptim.bsq:Rem the caching in library cache in a Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dsec.bsq: * Consolidated Database
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dsec.bsq: * Consolidated Database

So, the idea behind CDB is the same as behind multitenant: it's the consolidation of several pluggable databases. And consolidation is not really what single-tenant does, where we have only one user PDB per CDB and where this architecture requires 3 containers instead of one non-CDB.

Pluggable databases

Another idea would be to call it “pluggable databases architecture” because this is the real evolution. User tablespaces have been transportable for a long time, since the 8i introduction of extended rowids and locally managed tablespaces. 12c brought the same for system tablespaces, so that PDBs are fully transportable physically, thanks to the separation of the dictionary.

The real point of this new architecture is the total separation of system data/metadata and user data/metadata, the separation of system DBA role and application DBA role, and this is pluggable databases. Multitenancy is just one thing that is possible with this new architecture. It’s important today because it’s required for the cloud (for consolidation and easy provisioning).

Oracle Multitenant

Actually, the feature was called ‘Oracle Pluggable Database’ until 12.1.0.1 was released publicly with a new name: Oracle Multitenant. And this is the name we can see in Database Feature Usage Statistics. Well, if you look at the first patchset of 12cR1, 12.1.0.2, you will see the old name ‘Oracle Pluggable Databases’ but this is a bug (Patch 20718081 changes back the name).

Here is what you see from Database Feature Usage Statistics after an upgrade from 12.1.0.1 to 12.1.0.2, in EM Express:

CaptureFUMultitenant

And from DBA_FEATURE_USAGE_STATISTICS view:


SQL> select name, version, detected_usages, currently_used, aux_count from dba_feature_usage_statistics where name like '%Multitenant%' or name like '%Pluggable%';
 
NAME VERSION DETECTED_USAGES CURRE AUX_COUNT
------------------------------ ----------------- --------------- ----- ----------
Oracle Multitenant 12.1.0.1.0 98 FALSE 1
Oracle Pluggable Databases 12.1.0.2.0 32 FALSE 1

So, bug aside, the name of the feature is ‘Oracle Multitenant’ and this is true for any container database, because the feature is considered as used as soon as V$DATABASE.CDB=’YES':
select count(*) into feature_boolean from v$database where cdb = 'YES'

The usage of the multitenant option is detected by the AUX_COUNT which is the number of user PDBs:
select count(*) into aux_count from v$pdbs where con_id > 2
CON_ID=0 is for the CDB, CON_ID=1 is for CDB$ROOT, CON_ID=2 is for PDB$SEED and user PDBs start at CON_ID > 2
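To illustrate, a minimal sketch that lists the containers and their CON_ID from the root (output not shown):

SQL> select con_id, name, open_mode from v$containers order by con_id;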

So the name is “multitenant” whatever the number of PDBs.

So what?

I prefer to stick with “multitenant architecture” even when it is used without the multitenant option. It's a good way to keep in mind that, even if it brings a lot of interesting features for single-tenant as well, Oracle has taken the step of dictionary separation with the motivation of Cloud, Consolidation and Multitenancy. Charged options are a good trigger to increase the priority of evolution requests…
However, this “multitenant architecture” brings very interesting features to Standard Edition and Enterprise Edition even without the option. And if you doubt it, I'll try to convince you in San Francisco on September 18th.

 

The post Oracle Multitenant feature name appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: Direct seeding and performance considerations

Fri, 2016-07-08 13:05

This blog post follows the previous one about the new direct seeding feature shipped with SQL Server 2016. As a reminder, I had some doubts about using direct seeding with large databases because the log stream is not compressed by default, but I forgot about the performance improvements described in the Microsoft BOL. I talked about them a couple of months ago in this blog post.

Microsoft did a good job of improving the AlwaysOn log transport layer, and it could be very interesting to compare two methods: adding a 100 GB database by using the usual way, which includes backup and restore operations from the primary to the secondary, or by using the direct seeding feature. Which one is the quickest?

Let’s just have a quick look at my test environment. Two virtual machines with the following configuration:

  • 4x Intel Core i7-3630QM 2.3 GHz
  • 4GB of RAM
  • 2 10Gbps network cards
  • One disk that will host both the database data and log files on my primary replica (Samsung Portable SSD T3 500GB with S.M.A.R.T, NCQ and TRIM)
  • One disk that will host both the database data and log files on my secondary replica (Samsung Portable SSD T1 500GB with S.M.A.R.T, NCQ and TRIM)
  • One disk that will host backups (Samsung SSD 840 EVC, 250GB with S.M.A.R.T, NCQ and TRIM) used by both virtual machines

As an aside, each SSD disk is able to deliver at least 450MB/s and 35000 IOPS.

blog 99 - AG direct seeding - 0 - lab environment

I also used a custom database named GestionCom that contains roughly 100GB of data for my test. 100GB would be sufficient to get relevant results.

 

blog 99 - AG direct seeding - 01 - db GestionCom

 

Let's go ahead and compare both synchronization methods.

 

First test: using the usual way to add a database to an availability group

As said earlier, my first test consists in using the usual way to add a database to an availability group. We may use three methods for data synchronization: FULL, join only, and skip initial synchronization. We will use the first method for this test, which includes all the steps: backing up and restoring the concerned database and then joining it to the availability group. At this point we may easily imagine that most of the time will be spent in the backup and restore step. I also want to point out that I deliberately did not use fine-tuning options like BUFFERCOUNT, MAXTRANSFERSIZE or splitting backups into several media files, in order to stay compliant with the availability group wizard.

 

Step                                                                          Duration
Backup database to local backup disk (primary) WITH CHECKSUM, COMPRESSION    06’55’’
Restore database from network share (secondary) WITH CHECKSUM, NORECOVERY    17’10’’
Join database to availability group + start synchronization                  00’01’’
Total                                                                         24’06’’
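For reference, here is a minimal T-SQL sketch of what this method boils down to (file paths and share names are assumptions, and a log backup/restore may also be needed before the join, depending on the recovery point):

-- on the primary: compressed backup with checksum
BACKUP DATABASE GestionCom
	TO DISK = N'E:\backup\GestionCom.bak'
	WITH CHECKSUM, COMPRESSION;

-- on the secondary: restore from the shared backup, leaving the database in the restoring state
RESTORE DATABASE GestionCom
	FROM DISK = N'\\primary\backup\GestionCom.bak'
	WITH CHECKSUM, NORECOVERY;

-- join the database to the availability group
ALTER AVAILABILITY GROUP [TestGrp] ADD DATABASE [GestionCom];           -- on the primary
ALTER DATABASE [GestionCom] SET HADR AVAILABILITY GROUP = [TestGrp];    -- on the secondary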

 

What about resource consumption?

On the primary …

blog 99 - AG direct seeding - 10 - CPU and disk activity during backup

 

blog 99 - AG direct seeding - 1 - network bandwith during restore from primary

On the secondary …

 

blog 99 - AG direct seeding - 11 - CPU and disk activity during restore

blog 99 - AG direct seeding - 2 - network bandwith during restore from secondary

CPU utilization is 35% on average during the test. Moreover, disk write throughput stagnates at around 130 MB/s on average, including both backup and restore activities. Network throughput also stagnates between 135 Mbps and 174 Mbps during my test.

So it is clear that my environment is under-used regarding resource consumption in this first test.

 

Second test: using the new direct seeding method

This time I will use the new database deployment method: direct seeding. As said in my previous blog post, using this feature greatly simplifies the process of adding a database, but what about synchronization speed and resource consumption in this case?
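As a reminder, adding a database with direct seeding boils down to something like this minimal sketch (the replica names are assumptions; the availability group and database names are the ones used in this test):

-- on the primary: switch both replicas to automatic seeding
ALTER AVAILABILITY GROUP [TestGrp]
	MODIFY REPLICA ON N'SQL16-1' WITH (SEEDING_MODE = AUTOMATIC);
ALTER AVAILABILITY GROUP [TestGrp]
	MODIFY REPLICA ON N'SQL16-2' WITH (SEEDING_MODE = AUTOMATIC);

-- on the secondary: allow the availability group to create the seeded database
ALTER AVAILABILITY GROUP [TestGrp] GRANT CREATE ANY DATABASE;

-- on the primary: add the database, seeding then starts automatically
ALTER AVAILABILITY GROUP [TestGrp] ADD DATABASE [GestionCom];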

Well, to get a good picture of what happens during the seeding process, we will use different tools such as the new sys.dm_hadr_automatic_seeding DMV as well as extended events. Extended events will help us understand what happens under the cover, but to measure only the duration of the operation we don't need them. If you look at the event list and categories, you will probably notice a new dbseed category that corresponds to direct seeding. Events in this category are only available from the debug channel. That's fine because we want to track when the seeding process starts, when it finishes and what happens between these two events (like failures, timeouts and progress). By the way, the hadr_physical_seeding_progress event may be very useful to get a picture of network activity for the concerned seeding session if your network card is shared with other sessions or availability group replication activities. In my case, I'm the only one, and I will get this information directly from the Task Manager panel.

So let’s create the extended event session:

CREATE EVENT SESSION [hadr_direct_seeding] 
ON SERVER 
ADD EVENT sqlserver.hadr_automatic_seeding_start
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
	)
,
ADD EVENT sqlserver.hadr_automatic_seeding_state_transition
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_success
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_timeout
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_physical_seeding_progress
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
	SET filename = N'hadr_direct_seeding',
	max_file_size = (2048), 
	max_rollover_files = (10))
WITH 
(
	MAX_MEMORY=4096 KB,
	EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
	MAX_DISPATCH_LATENCY = 30 SECONDS,
	MAX_EVENT_SIZE = 0 KB,
	MEMORY_PARTITION_MODE = NONE,
	TRACK_CAUSALITY = OFF,
	STARTUP_STATE = OFF
)
GO

 

And after adding the GestionCom database to the TestGrp availability group, the direct seeding feature comes into play. Honestly, it was a very big surprise! Let’s take a look at the network utilization:

blog 99 - AG direct seeding - 3 - network bandwith during seeding

 

A network usage of 2.2 Gbps on average this time! The direct seeding feature makes much better use of the network bandwidth, and we can clearly understand why Microsoft made efforts to improve the synchronization process.

Let's now move on to the CPU and disk utilization, first on the primary and then on the secondary.

blog 99 - AG direct seeding - 5 - disk throughput during seeding from primary

blog 99 - AG direct seeding - 6 - disk throughput during seeding from secondary

~350 to 450 MB/s on each side (a x2 gain), but an increase of CPU utilization up to 70% during my test (a x2 increase). So better resource usage, but at the cost of higher CPU utilization…

Let’s finish by looking at the sys.dm_hadr_automatic_seeding DMV that provides the answer to the question: are we faster in this case?

select
    ag.name as aag_name,
    ar.replica_server_name,
    d.name as database_name,
    has.current_state,
    has.failure_state_desc as failure_state,
    has.error_code,
    has.performed_seeding,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.start_time) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.completion_time) as completion_time,
    has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
    on ag.group_id = has.ag_id
join sys.availability_replicas as ar
    on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
    on d.group_database_id = has.ag_db_id

 

 

blog 99 - AG direct seeding - 7 - DMV stats

 

And the answer is yes, as we might expect! Only 8 minutes (a x3 gain) to replicate and synchronize the GestionCom database between the two highly available replicas, compared to the first method. But that's not all… let's focus on the redo thread activity on the secondary, where you may notice a very interesting rate (~12 MB/s). I don't remember having seen such a value with current availability groups at customer sites. This is the second improvement made by Microsoft, which has introduced a parallel redo capability. As a reminder, before SQL Server 2016 there was only one redo thread per database. In that context, a single redo thread simply could not keep up with applying the changes persisted in the log.

From the secondary, we may see some changes by looking at the sys.dm_exec_requests DMV:

select 
	r.command, 
	r.status,
	r.wait_type,
	r.wait_time,
	r.last_wait_type,
	r.scheduler_id
from sys.dm_exec_requests as r
where r.command like '%REDO%'
order by r.scheduler_id

 

 

blog 99 - AG direct seeding - 70 - parallel redo thread

Using direct seeding is definitely a solution to take into account for our future database deployments, but according to this test I think we have to keep two things in mind: CPU and network consumption from the seeding activity may impact the performance of other applications, and vice versa. In the real world, there is a good chance of being in this situation.

Finally, let's have a look at the extended event output. With respect to what we want to highlight in this blog post, we don't get any other valuable information, but one thing got my attention: the LIMIT_CONCURRENT_BACKUPS value in the current state column (underlined in red). What does it mean exactly? Let's talk about it in a next blog post, because it is a little bit out of scope of the main subject.

blog 99 - AG direct seeding - 12 - Xe with direct seeding

 

Third test: using direct seeding and compression

Let's talk about the last test I performed. I used direct seeding without compression in the previous test, so SQL Server didn't compress the data stream, which is the default behavior. However, we may force SQL Server to use compression by setting the special trace flag 9567. After all, we want to avoid direct seeding flooding the wire and impacting the existing workload of other applications.
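A minimal sketch of enabling it at the instance level (add -T9567 as a startup parameter if you want it to survive a restart):

-- enable the direct seeding compression trace flag for the whole instance
DBCC TRACEON (9567, -1);

-- check that it is active
DBCC TRACESTATUS (9567);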

I have to admit that verifying that compression is enabled with direct seeding is not so obvious. For instance, I didn't see any difference in the DMVs indicating that we're using compression (the is_compression_enabled column of the sys.dm_hadr_physical_seeding_stats DMV is always equal to 0, whether we use compression or not). The only obvious difference comes from the network throughput usage, which is lower with compression (a x2.5 gain). However, I noticed a significant increase in CPU utilization, near 100% on the primary in my case.

blog 99 - AG direct seeding - 8 - resource with seeding and compression from primary

What about the seeding time? Well, I didn't notice any gain in this area. Does compression save network bandwidth? Maybe… hard to say with only this test and one specific environment.

I tried to add 3 vCPUs to each replica and leave one vCPU to the system, for a total of 7 vCPUs dedicated to SQL Server.

blog 99 - AG direct seeding - 12 - cpu usage during seeding with compression 8 vcpus

At this point, I admit to being a little bit surprised, and I wonder whether compression correctly uses all the available processors, given the uneven distribution of CPU usage. The above picture is a good representation of what I saw during the other tests I performed with compression. In addition, I didn't see any obvious performance gain in terms of duration, except that the wire is less used. I'm a little bit disappointed by compression, but once again it is still much too early to draw a conclusion, and I'm looking forward to seeing direct seeding in action at my customers' sites with real production infrastructure.

The bottom line is that direct seeding is a very promising feature, and I love it because it is the directly visible part of the AlwaysOn performance improvements shipped with SQL Server 2016. However, and this is my personal opinion, we should not let it fool us: consider using direct seeding carefully according to your workload and available resources. Fortunately, in most cases it will be suitable.

Stay tuned!

 

 

 

The post SQL Server 2016 AlwaysOn: Direct seeding and performance considerations appeared first on Blog dbi services.

Compare source and target in a Dbvisit replication

Tue, 2016-07-05 13:51

You've set up a logical replication, and you trust it. But before the target goes into production, it is safer to compare source and target, or at least to count the number of rows.
But tables are continuously changing, so how can you compare? It is not so difficult, thanks to the Dbvisit replicate heartbeat table and Oracle flashback query.

Here is the state of the replication, with activity on the source and real-time replication to the target:
| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days
MINE IS running. Currently at plog 368 and SCN 6119128 (07/06/2016 04:15:21).
APPLY IS running. Currently at plog 368 and SCN 6119114 (07/06/2016 04:15:19).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ADDRESSES: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.CARD_DETAILS: 100% Mine:894/894 Unrecov:0/0 Applied:894/894 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDER_ITEMS: 100% Mine:5955/5955 Unrecov:0/0 Applied:5955/5955 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDERS: 99% Mine:4781/4781 Unrecov:0/0 Applied:4780/4780 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.INVENTORIES: 100% Mine:5825/5825 Unrecov:0/0 Applied:5825/5825 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.LOGON: 99% Mine:6175/6175 Unrecov:0/0 Applied:6173/6173 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
--------------------------------------------------------------------------------------------------------------------------------------------
7 tables listed.

If you want to compare the rows from source and target, you will always see a difference because modifications on the source arrive on the target a few seconds later.

Source and target SCN

The first thing to do is to determine a consistent point in time where source and target are the same. This point in time exists because the redo log is sequential by nature, and the commits are done in the same order on the target as on the source. This order is visible with the SCN. The only problem is that in a logical replication the SCNs on source and target are completely different and have their own lives.

The first step is to determine an SCN on the target and an SCN on the source that show the same state of transactions.

But before that, let’s connect to the target and set the environment:

$ sqlplus /nolog @ compare.sql
 
SQL*Plus: Release 11.2.0.2.0 Production on Tue Jul 5 18:15:34 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> define table_owner=REPOE
SQL> define table_name=ORDERS
SQL>
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session set nls_date_format='DD-MON-YYYY HH24:mi:ss';
Session altered.
SQL> alter session set nls_timestamp_format='DD-MON-YYYY HH24:mi:ss';
Session altered.

My example is on the #repattack environment, with Swingbench running on the source, and I'll compare the ORDERS table.

Heartbeat table

Each Dbvisit replicate configuration comes with a heartbeat table created in the Dbvisit schema on the source and replicated to the target. This table is updated every 10 seconds on the source with the timestamp and SCN. This is a great way to check how the replication is working. Here, it will be the way to get the SCN information from the source.

Flashback query

Oracle flashback query offers a nice way to get the commit SCN for the rows updated in the heartbeat table. From the target database, this is the commit SCN for the replication transaction (the APPLY process) and it can be displayed along with the SCN from the source transaction that is recorded in the heartbeat table and replicated to the target.

SQL> column versions_startscn new_value scn_target
SQL> column source_scn new_value scn_source
SQL> column mine_process_name format a12
SQL> column versions_starttime format a21
 
SQL> select mine_process_name,wallclock_date,mine_date,source_scn,mine_scn,versions_startscn,versions_starttime,versions_endscn
from DBVREP.DBRSCOMMON_HEARTBEAT versions between timestamp(sysdate-1/24/60) and sysdate
order by versions_endscn nulls last ;
 
MINE_PROCESS WALLCLOCK_DATE MINE_DATE SOURCE_SCN MINE_SCN VERSIONS_STARTSCN VERSIONS_STARTTIME VERSIONS_ENDSCN
------------ -------------------- -------------------- -------------------- -------------------- -------------------- --------------------- --------------------
MINE 06-JUL-2016 04:14:27 06-JUL-2016 04:14:22 6118717 6118661 4791342
MINE 06-JUL-2016 04:14:37 06-JUL-2016 04:14:31 6118786 6118748 4791342 06-JUL-2016 04:11:29 4791376
MINE 06-JUL-2016 04:14:47 06-JUL-2016 04:14:41 6118855 6118821 4791376 06-JUL-2016 04:11:39 4791410
MINE 06-JUL-2016 04:14:57 06-JUL-2016 04:14:51 6118925 6118888 4791410 06-JUL-2016 04:11:49 4791443
MINE 06-JUL-2016 04:15:07 06-JUL-2016 04:15:01 6119011 6118977 4791443 06-JUL-2016 04:11:59 4791479
MINE 06-JUL-2016 04:15:17 06-JUL-2016 04:15:11 6119091 6119059 4791479 06-JUL-2016 04:12:09 4791515
MINE 06-JUL-2016 04:15:27 06-JUL-2016 04:15:21 6119162 6119128 4791515 06-JUL-2016 04:12:19

This shows that the current version of the heartbeat table on the target was committed at SCN 4791515 and we know that this state matches the SCN 6119162 on the source. You can choose any pair you want but the latest will probably be the fastest to query.

Counting rows on source

I'll use a flashback query to count the rows from the source at SCN 6119162. I'm doing it in parallel query, but be careful: when the table has high modification activity there will be a lot of undo blocks to read.

SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*) from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*)
--------------------
775433

Counting rows on target

I'm doing the same from the target, but with the SCN 4791515:
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*) from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*)
--------------------
775433

Good. Same number of rows. This proves that even with constantly inserted tables we can find a point of comparison, thanks to the Dbvisit heartbeat table and to Oracle flashback query. If you are replicating with another logical replication product, you can simulate the heartbeat table with a job that writes the current SCN to a single-row table, and replicate it. If your target is not Oracle, then there are good chances that you cannot run that kind of 'as of' query, which means that you need to lock the table on the source for the time of the comparison.
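As an illustration, here is a minimal sketch of such a do-it-yourself heartbeat on an Oracle source (the table and job names are hypothetical, and it assumes execute privilege on DBMS_FLASHBACK to read the current SCN):

create table my_heartbeat ( id number primary key, source_scn number, updated timestamp );
insert into my_heartbeat values ( 1, null, null );
commit;
begin
  -- write the current SCN to the single row every 10 seconds; the table is part of the replication
  dbms_scheduler.create_job(
    job_name        => 'MY_HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'update my_heartbeat set source_scn = dbms_flashback.get_system_change_number, updated = systimestamp where id = 1; commit;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    enabled         => true );
end;
/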

ORA_HASH

If you think that counting the rows is not sufficient, you can compare a hash value from the columns. Here is an example.
I get the list of columns, apply the ORA_HASH() function to each of them, and sum the results:

SQL> column columns new_value columns
SQL> select listagg('ORA_HASH('||column_name||')','+') within group (order by column_name) columns
2 from dba_tab_columns where owner='&table_owner.' and table_name='&table_name';
old 2: from dba_tab_columns where owner='&table_owner.' and table_name='&table_name'
new 2: from dba_tab_columns where owner='REPOE' and table_name='ORDERS'
 
COLUMNS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_
HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)

With this list defined in a substitution variable, I can compare the sum of hash values:

SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*) HASH
-------------------- --------------------
775433 317531150805040439
 
SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*) HASH
-------------------- --------------------
775433 17531150805040439

Note that this is only an example. You must adapt it to your needs: precision of the comparison versus performance.

So what?

Comparing source and target is not a bad idea. With Dbvisit Replicate, if you defined the replication properly and did the initial import with the SCN provided by the setup wizard, you should not miss transactions, even when there is a lot of activity on the source, and even without locking the source for the initialization. But it's always good to compare, especially before the 'Go' decision of a migration done with Dbvisit Replicate to lower the downtime (and the stress). Thanks to the heartbeat table and flashback query, a checksum is not too hard to implement.

 

Cet article Compare source and target in a Dbvisit replication est apparu en premier sur Blog dbi services.

Script to suggest FK indexes

Mon, 2016-07-04 10:55

In Oracle, when a referenced key is deleted (by a delete on the parent table, or an update on the referenced columns) the child table(s) are locked to prevent any concurrent insert that may reference the old key. This lock is a big issue on OLTP applications because it's a TM Share lock, usually reserved for DDL only, blocking any modification on the child table and some modifications on tables that have a relationship with that child table. This problem can be overcome with an index structure that allows Oracle to find concurrent inserts that may reference the old value. Here is the script I use to find which index is missing.

The idea is not to suggest to index all foreign keys for three reasons:

  • when there are no deletes or updates on the parent side, you don't have that locking issue
  • when there is minimal write activity on the child side, the lock may not have big consequences
  • you probably have indexes built for performance reasons that can be used to avoid the locking even when they have more columns or a different column order

The idea is not to suggest an index for each potential locking issue but only where blocking locks have been observed. Yes, it is a reactive solution, but proactive ones cannot be automatic. If you know your application well, then you know what you have to index and you don't need this script. If you don't, then a proactive approach will suggest too many indexes.

Here is the kind of output that I get with this script:
-- DELETE on APP1.GCO_GOOD has locked APP1.FAL_TASK in mode 5 for 8 minutes between 14-sep 10:36 and 14-sep 10:44
-- blocked statement: DELETE FAL LOT LOT WHERE C FAB TYPE AND EXISTS SELECT
-- blocked statement: UPDATE DOC POSITION DETAIL SET DOC POSITION DETAIL ID B
-- blocked statement: delete from C AP GCO GOOD where rowid doa rowid
-- blocked statement: DELETE FROM FAL LOT WHERE FAL LOT ID B
-- blocked statement: DELETE FROM FAL TASK LINK PROP WHERE FAL LOT PROP ID B
-- blocked statement: INSERT INTO FAL LOT PROGRESS FAL LOT PROGRESS ID FAL LOT
-- blocked statement: insert into FAL TASK LINK FAL SCHEDULE STEP ID
-- FK chain: APP1.GCO_GOOD referenced by(cascade delete) APP1"."GCO_SERVICE referenced by(cascade set null) APP1"."FAL_TASK (APP1.FAL_TASK_S_GCO_SERV) unindexed
-- FK column GCO_GCO_GOOD_ID
-- Suggested index: CREATE INDEX ON "APP1"."FAL_TASK" ("GCO_GCO_GOOD_ID");
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_DIC_FR_TASK_COD7_FK" ON "APP1"."FAL_TASK" ("DIC_FREE_TASK_CODE7_ID")
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_DIC_FR_TASK_COD9_FK" ON "APP1"."FAL_TASK" ("DIC_FREE_TASK_CODE9_ID")
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" ON "APP1"."FAL_TASK" ("PPS_TOOLS13_ID")

I’ll detail each part.

ASH

Yes, we have to detect blocking issues from the past, and I use ASH for that. If you don't have the Diagnostic Pack, then you have to change the query to use another way to sample V$SESSION.
-- DELETE on APP1.GCO_GOOD has locked APP1.FAL_TASK in mode 5 for 8 minutes between 14-sep 10:36 and 14-sep 10:44
-- blocked statement: DELETE FAL LOT LOT WHERE C FAB TYPE AND EXISTS SELECT
-- blocked statement: UPDATE DOC POSITION DETAIL SET DOC POSITION DETAIL ID B
-- blocked statement: delete from C AP GCO GOOD where rowid doa rowid
-- blocked statement: DELETE FROM FAL LOT WHERE FAL LOT ID B
-- blocked statement: DELETE FROM FAL TASK LINK PROP WHERE FAL LOT PROP ID B
-- blocked statement: INSERT INTO FAL LOT PROGRESS FAL LOT PROGRESS ID FAL LOT
-- blocked statement: insert into FAL TASK LINK FAL SCHEDULE STEP ID

The first part of the output comes from ASH and detects the blocking situations: which statement, how long, and the statements that were blocked.
This part of the script will probably need to be customized: I join with DBA_HIST_SQL_PLAN, supposing that the queries have been captured by AWR as long-running queries, and I check the last 15 days of ASH. You may change both to fit the blocking situation you encountered.

Foreign Key

Then, we have to find the unindexed foreign key which is responsible for those locks.

-- FK chain: APP1.GCO_GOOD referenced by(cascade delete) APP1"."GCO_SERVICE referenced by(cascade set null) APP1"."FAL_TASK (APP1.FAL_TASK_S_GCO_SERV) unindexed
-- FK column GCO_GCO_GOOD_ID

Here you see that it's not easy. Actually, none of the scripts I've seen detect the situation where a CASCADE SET NULL cascades the issue. Here "APP1"."GCO_SERVICE" has its foreign key indexed, but the SET NULL, even when not on the referenced column, locks the child (for no reason as far as I know, but it does).
My script goes up to 10 levels, using a connect by query, to detect this situation.

Suggested Index

The suggested index is an index on the foreign key column:

-- Suggested index: CREATE INDEX ON "APP1"."FAL_TASK" ("GCO_GCO_GOOD_ID");

This is only a suggestion. Any regular index that starts with the foreign key columns, in whatever order, can be used to avoid the lock.
Remember to think about performance first. The index may be used to navigate from parent to child.

Existing Index

Finally, when adding an index it's good to check whether there are other indexes that would not be needed anymore, so my script displays all of them.
If you think that some indexes are not required, remember that in 12c you can make them invisible for a while and make them visible again quickly in case of regression.
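For example, with one of the indexes listed above (just an illustration, check the impact before doing it on a production system):

alter index "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" invisible;
-- and if a regression shows up, bring it back immediately:
alter index "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" visible;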

Script

Here is the script. Sorry, no comments in it yet and a few display things to change. Just try it; it's only a query on AWR (Diagnostic Pack needed) and on table/index/constraint metadata. You can customize it, and don't hesitate to comment if you have ideas to improve it. I used it in several environments and it has always found the chain of foreign keys that is responsible for an 'enq: TM' blocking situation. And believe me, this is not always easy to do just by looking at the data model.


set serveroutput on
declare
procedure print_all(s varchar2) is begin null;
dbms_output.put_line(s);
end;
procedure print_ddl(s varchar2) is begin null;
dbms_output.put_line(s);
end;
begin
dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES',false);
for a in (
select count(*) samples,
event,p1,p2,o.owner c_owner,o.object_name c_object_name,p.object_owner p_owner,p.object_name p_object_name,id,operation,min(p1-1414332420+4) lock_mode,min(sample_time) min_time,max(sample_time) max_time,ceil(10*count(distinct sample_id)/60) minutes
from dba_hist_active_sess_history left outer join dba_hist_sql_plan p using(dbid,sql_id) left outer join dba_objects o on object_id=p2 left outer join dba_objects po on po.object_id=current_obj#
where event like 'enq: TM%' and p1>=1414332420 and sample_time>sysdate-15 and p.id=1 and operation in('DELETE','UPDATE','MERGE')
group by
event,p1,p2,o.owner,o.object_name,p.object_owner,p.object_name,po.owner,po.object_name,id,operation
order by count(*) desc
) loop
print_ddl('-- '||a.operation||' on '||a.p_owner||'.'||a.p_object_name||' has locked '||a.c_owner||'.'||a.c_object_name||' in mode '||a.lock_mode||' for '||a.minutes||' minutes between '||to_char(a.min_time,'dd-mon hh24:mi')||' and '||to_char(a.max_time,'dd-mon hh24:mi'));
for s in (
select distinct regexp_replace(cast(substr(sql_text,1,2000) as varchar2(60)),'[^a-zA-Z ]',' ') sql_text
from dba_hist_active_sess_history join dba_hist_sqltext t using(dbid,sql_id)
where event like 'enq: TM%' and p2=a.p2 and sample_time>sysdate-90
) loop
print_all('-- '||'blocked statement: '||s.sql_text);
end loop;
for c in (
with
c as (
select p.owner p_owner,p.table_name p_table_name,c.owner c_owner,c.table_name c_table_name,c.delete_rule,c.constraint_name
from dba_constraints p
join dba_constraints c on (c.r_owner=p.owner and c.r_constraint_name=p.constraint_name)
where p.constraint_type in ('P','U') and c.constraint_type='R'
)
select c_owner owner,constraint_name,c_table_name,connect_by_root(p_owner||'.'||p_table_name)||sys_connect_by_path(decode(delete_rule,'CASCADE','(cascade delete)','SET NULL','(cascade set null)',' ')||' '||c_owner||'"."'||c_table_name,' referenced by') foreign_keys
from c
where level<=10 and c_owner=a.c_owner and c_table_name=a.c_object_name
connect by nocycle p_owner=prior c_owner and p_table_name=prior c_table_name and ( level=1 or prior delete_rule in ('CASCADE','SET NULL') )
start with p_owner=a.p_owner and p_table_name=a.p_object_name
) loop
print_all('-- '||'FK chain: '||c.foreign_keys||' ('||c.owner||'.'||c.constraint_name||')'||' unindexed');
for l in (select * from dba_cons_columns where owner=c.owner and constraint_name=c.constraint_name) loop
print_all('-- FK column '||l.column_name);
end loop;
print_ddl('-- Suggested index: '||regexp_replace(translate(dbms_metadata.get_ddl('REF_CONSTRAINT',c.constraint_name,c.owner),chr(10)||chr(13),' '),'ALTER TABLE ("[^"]+"[.]"[^"]+") ADD CONSTRAINT ("[^"]+") FOREIGN KEY ([(].*[)]).* REFERENCES ".*','CREATE INDEX ON \1 \3;'));
for x in (
select rtrim(translate(dbms_metadata.get_ddl('INDEX',index_name,index_owner),chr(10)||chr(13),' ')) ddl
from dba_ind_columns where (index_owner,index_name) in (select owner,index_name from dba_indexes where owner=c.owner and table_name=c.c_table_name)
and column_name in (select column_name from dba_cons_columns where owner=c.owner and constraint_name=c.constraint_name)
)
loop
print_ddl('-- Existing candidate indexes '||x.ddl);
end loop;
for x in (
select rtrim(translate(dbms_metadata.get_ddl('INDEX',index_name,index_owner),chr(10)||chr(13),' ')) ddl
from dba_ind_columns where (index_owner,index_name) in (select owner,index_name from dba_indexes where owner=c.owner and table_name=c.c_table_name)
)
loop
print_all('-- Other existing Indexes: '||x.ddl);
end loop;
end loop;
end loop;
end;
/

I didn’t take time to document/comment the script but don’t hesitate to ask what you don’t understand there.

You should not see any 'enq: TM' waits from an OLTP application. If you have them, even short ones, they will become problematic one day. It's the kind of thing that can block the whole database.

 

Cet article Script to suggest FK indexes est apparu en premier sur Blog dbi services.

SQL Saturday 510 – Locks, latches et spinlocks – Les slides

Sun, 2016-07-03 13:40

This year, people once again turned out for the SQLSaturday in Paris, which has become a must-attend event for those who like to exchange around data with the Microsoft products. The weather was rather good this year, too, and the view from the 40th floor of the Montparnasse tower is as impressive as ever.

For my part, I had the pleasure of presenting a session at the end of the day on concurrency, seen from the SQL Server side, with the 3 main mechanisms involved: locks, latches and spinlocks.

blog 98 - lock latches spinlocks

 

The slides can be downloaded directly from the PASS website.

Once again, a big thank you to the organizers of this event (GUSS), to the Supinfo volunteers, to the sponsors and to the attendees!

See you soon

 

Cet article SQL Saturday 510 – Locks, latches et spinlocks – Les slides est apparu en premier sur Blog dbi services.

GoldenGate 12.2 – Installation of the monitoring agent

Thu, 2016-06-30 02:50

As described in my last post, GoldenGate Studio requires the monitor agent on each GoldenGate instance. The goal of this agent is to allow GoldenGate Studio to interact with GoldenGate, for example, to deploy a new solution.

So let's start with the installation of the agent.

Download

The first step is to download the monitor agent (Oracle GoldenGate Monitor). It is available here.

Install_agent_monitor_8

Installation

The second step is to install the product.

After you have transferred the installer to the server, decompress it.

oracle@srvoracle:/tmp/ [DB1] cd monitor/
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 420092
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
oracle@srvoracle:/tmp/monitor/ [DB1] unzip fmw_12.2.1.0.0_ogg_Disk1_1of1.zip 
Archive:  fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
  inflating: fmw_12.2.1.0.0_ogg.jar  
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 840392
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
-r-xr-xr-x 1 oracle oinstall 430387063 Oct 14 08:33 fmw_12.2.1.0.0_ogg.jar

To launch the installer it is mandatory to have at least Java version 8 (1.8). If it is not available it can be downloaded here.

To start the installation, launch the fmw_12.2.1.0.0_ogg.jar.

oracle@srvoracle:/home/oracle/Downloads/jdk1.8.0_73/bin/ [DB1] ./java -jar /tmp/monitor/fmw_12.2.1.0.0_ogg.jar 
Launcher log file is /tmp/OraInstall2016-02-29_01-39-26PM/launcher2016-02-29_01-39-26PM.log.
Extracting files.......
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2494.801 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 28817 MB    Passed


Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-02-29_01-39-26PM
Log: /tmp/OraInstall2016-02-29_01-39-26PM/install2016-02-29_01-39-26PM.log
Logs successfully copied to /u01/app/oraInventory/logs.

The OUI (Oracle Universal Installer) will start. On the first screen just click on the next button.

Install_agent_monitor_1

On the next screen, we can choose the option for the updates. In my case, I leave the option to skip the auto updates.

Install_agent_monitor_2

Fill in the desired software location for the GoldenGate agent.

Install_agent_monitor_3

Select the option to install only the agent monitor.

Install_agent_monitor_4

The OUI will test the system configuration and the java version.

Install_agent_monitor_5

The OUI provides a summary of the configuration. Click on the Next button if all is OK.

Install_agent_monitor_9

The installation is done.

Install_agent_monitor_6

At the end OUI provides a summary of the installation with the location of the logs.

Install_agent_monitor_7

Now the GoldenGate agent is installed.

Configuration

Create instance

To create the instance of the agent, go where the binaries have been installed. In this example, it is /u01/app/oracle/product/jagent/oggmon/ogg_agent.

After that, launch the script createMonitorAgentInstance.sh.

oracle@srvoracle:/u01/app/oracle/product/jagent/oggmon/ogg_agent/ [DB1] ./createMonitorAgentInstance.sh 
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.0/gg_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/12.1.3.0/jagent
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20160229140552.sh) : 2
Sucessfully created OGG Agent instance.
Create password

The agent needs a password to work. All the passwords will be stored in a wallet. For this go to the ORACLE_HOME_AGENT/bin. In my case, /u01/app/oracle/product/12.1.3.0/jagent/bin.

Launch the script pw_agent_util.sh.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
Feb 29, 2016 2:18:55 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Feb 29, 2016 2:18:56 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.
Enable monitoring

To enable the monitoring, launch the ggsci command and edit the GLOBALS parameter file.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ggi 

GGSCI (srvoracle) 2> edit params ./GLOBALS

GGSCI (srvoracle) 4> view params ./GLOBALS

GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint
ENABLEMONITORING

Now restart ggsci and the JAGENT process appears when doing an "info all".

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ggi

GGSCI (srvoracle) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           
JAGENT      STOPPED

Finally, delete the datastore and recreate it.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/ [DB1] ggi

GGSCI (srvoracle) 1> delete datastore
Are you sure you want to delete the datastore? yes

2016-02-29 14:33:30  INFO    OGG-06492  Datastore deleted.

GGSCI (srvoracle) 2> create datastore
Profile 'Trace' added.

2016-02-29 14:33:55  INFO    OGG-06489  Datastore created.

Now you can start the manager process and the jagent.
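From ggsci, something like this should do it (a quick sketch, to be adapted to your environment):

start manager
start jagent
info all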

Conclusion

Now that the agents have been installed on each instance, all the prerequisites for GoldenGate Studio are met.

In the next blog, I will show you how to use GoldenGate Studio to deploy a solution.

 

Cet article GoldenGate 12.2 – Installation of the monitoring agent est apparu en premier sur Blog dbi services.

SQL Server 2016: Always Encrypted – part 2

Thu, 2016-06-30 02:39

In my last blog post about SQL Server 2016 Always Encrypted, here, I showed how to use this new functionality, but also that you have to separate the different execution contexts with an application server, a database server and a security server, to avoid the certificate being available to all users, which would break the segregation.

Let's see how to build those environments.
On my security server named SQL2016-2, I first create a self-signed certificate with PowerShell after having imported the PKI module. The certificate is stored in the local machine with:

  • DnsName, DNS name of the certificate: Demo_AlwaysEncrypted.cert
  • CertStoreLocation, specifies the certificate store in which to store the new certificate: Cert:\LocalMachine\MY
  • Provider, specifies the name of the KSP or CSP that this cmdlet uses to create the certificate: Microsoft Strong Cryptographic Provider
  • KeyAlgorithm, specifies the name of the algorithm that creates the asymmetric keys that are associated with the new certificate: RSA
  • KeyLength, specifies the length, in bits, of the key that is associated with the new certificate: 2048
  • HashAlgorithm, specifies the name of the hash algorithm to use to sign the new certificate: sha256

PowerShell script:

Import-Module PKI
New-SelfSignedCertificate -DnsName "Demo_AlwaysEncrypted.cert" -CertStoreLocation Cert:\LocalMachine\MY -Provider "Microsoft Strong Cryptographic Provider" -KeyAlgorithm RSA -KeyLength 2048 -HashAlgorithm sha256

Blog_AlwaysEncrypted_part2_1

I have now my new certificate.
I open my SQL Server instance and navigate in my SecurityDB database to Security, Always Encrypted Keys, right-click on Column Master Keys and select "New Column Master Key…".
I create a new Column Master Key with the certificate I have just created on my local machine and I name it CMK2:

Blog_AlwaysEncrypted_part2_2

The code to create my Column Master Key is the following; you can get it by clicking on the Script button in the above screen:

CREATE COLUMN MASTER KEY [CMK2]
WITH
(
       KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
       KEY_PATH = N'LocalMachine/My/A3DF88FA53ED5347ABCF709D7A15621CDC358456'
)
GO

I now create a Column Encryption Key named CEK2 with the Column Master Key I created before, named CMK2:

Blog_AlwaysEncrypted_part2_3

The code to create my Encryption Key is the following:

CREATE COLUMN ENCRYPTION KEY [CEK2]
WITH VALUES
(
       COLUMN_MASTER_KEY = [CMK2],
       ALGORITHM = 'RSA_OAEP',
       ENCRYPTED_VALUE = 0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F00610033006400660038003800660061003500330065006400350033003400370061006200630066003700300039006400370061003100350036003200310063006400630033003500380034003500360098721948A1283B0E5375B4E7F0A5F2EED05EDB0A87814BF66EABCBE2B4E75B5A759CF48D7D87D6A2C44A6DB2B587992E412EAF20D3FE826844FE76B1BC764E769C911486150E08EB972DBF406FF51738EFAA7F831A27A540F948CBC673273B1205A2CEF33AE77A0AF01DB3589A4A746151DC8C699DAB7AD1B99910A598EA7258670F1D5C0600704A9BB440F86761F41E8C72E52FFF6A7348B65F62E60D90B60CF1FC00B1B473F40E82B140219A317760CBC8591E07215AD114FAF80F08361C607823DB44037C455514B3789661560613638D058D7965846962B9FD3D9ECF0AC62325043AB40D892683744DC994E868C5F0E3FD14277D8941977426AC59DEA7EEF1394641DB314C2DCB083D3D5A12B97A131986D9834CBF2837908D3EF54C9104D92371C8A9EDC9B22555D8D615440D21B03393303FD4A9C0047282091178F31323A6EF4313576470349F32F106D38ABC3B0A1C4F47E870EA43F7B90D85D4F6C6686368E4E5EFC416EA08241F75F2D7B4F4B700114CA545AA4310829B2691A6D2C553F8F8B7E3983FC99F4E0A28BD79C84F50B7F6ABB17419DEAE06ACA694D68B00B48DBC0D66B46847A389C8A587F138CEF7791845A999EA45849D2FAA45BDA88FE134C70E592937981A7B2A18397CC615BB435A1EDDD66CF58119D89B24190744F4326718CBCDC1182AD20C8023EECB3994569844270DC90020B20D70B5CCC5
)
GO

I will now transfer those two keys to my SQL Server VM, named SQL2016-1, in order to encrypt my table columns with them. To do that, I will execute, on my SQL Server instance, the scripts I generated before on my security VM. I have created a second database named TestDB2 where I will execute those scripts:

Blog_AlwaysEncrypted_part2_4

After execution of the script I have my Column Master Key and my Column Encryption Key in the Security\Always Encrypted Keys path of my TestDB2 database.

I will now create the same table that I have on my TestDB database (see my previous blog post) in my new database TestDB2. For that I will execute the following script with the same encrypted columns (CustAccountNumber, CustAccountBalance, CustCreditCardNumber) but with my new Column Encryption Key CEK2:

Blog_AlwaysEncrypted_part2_5

My table is created with the specified columns encrypted.
I will now export my certificate from my security server to my application server with PowerShell.
First, I retrieve my certificate from the store and export it to a file with a password:

PowerShell script:

# I retrieve the Certificate from the store
$MyCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "Demo_AlwaysEncrypted.cert"
# I create a password to protect my certificate
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
# I export my certificate as PFX Certificate Archive
Export-PfxCertificate -Cert $MyCert -FilePath C:\Temp\MyCert.pfx -Password $pwd

Blog_AlwaysEncrypted_part2_6

I'm now able to copy my certificate to my application server and import it:

PowerShell script:

# I import the certificate from my file
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
Import-PfxCertificate -CertStoreLocation Cert:\LocalMachine\My -FilePath C:\Temp\MyCert.pfx -Password $pwd

Blog_AlwaysEncrypted_part2_7

My certificate is now in the store of my application server and I am now able to insert and read plain text data from it. I will use a PowerShell script to insert data. I will not explain this whole script, but it's good to know that:

  • The connection string needs to enable Always Encrypted for the client application, here PowerShell, with this parameter: Column Encryption Setting=enabled
  • The application passes plain text data in SqlParameter objects to the driver. The driver will encrypt the values and send encrypted values to the SQL Server VM

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlConn.Open()
$sqlcmd = New-Object System.Data.SqlClient.SqlCommand
$sqlcmd.Connection = $sqlConn
$sqlcmd.CommandText = "INSERT INTO dbo.Customer (CustNb, CustName, CustSurname, CustAccountNumber, CustAccountBalance, CustCreditCardNumber, CustCountry) VALUES (@CustNb, @CustName, @CustSurname, @CustAccountNumber, @CustAccountBalance, @CustCreditCardNumber, @CustCountry)"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustNb",[Data.SQLDBType]::uniqueidentifier)))
$sqlcmd.Parameters["@CustNb"].Value = [GUID]::NewGuid()
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustName",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustName"].Value = "Durand"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustSurname",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustSurname"].Value = "Laurent"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustAccountNumber"].Value = "1234-234589"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountBalance",[Data.SQLDBType]::Int)))
$sqlcmd.Parameters["@CustAccountBalance"].Value = 1256890
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCreditCardNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustCreditCardNumber"].Value = "1234-4245-4563-4564"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCountry",[Data.SQLDBType]::VarChar,200)))
$sqlcmd.Parameters["@CustCountry"].Value = "France"
$sqlcmd.ExecuteNonQuery();
$sqlConn.Close()

Blog_AlwaysEncrypted_part2_9

I have now inserted a row in my Customer table. If I try to do the same insert without the parameter Column Encryption Setting=enabled, I receive an error message:

Blog_AlwaysEncrypted_part2_10

Now, if I want to read my table from my application server I can execute a simple select * with this PowerShell script and my parameter Column Encryption Setting=enabled:

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlConn.Open()
$sqlcmd = $sqlConn.CreateCommand()
$query = "SELECT * FROM dbo.customer"
$sqlcmd.CommandText = $query
$adp = New-Object System.Data.SqlClient.SqlDataAdapter $sqlcmd
$data = New-Object System.Data.DataSet
$adp.Fill($data) | Out-Null
$data.Tables

Blog_AlwaysEncrypted_part2_11

I see plain text data, but without the parameter:

Blog_AlwaysEncrypted_part2_12

I'm able to read the table but the encrypted columns appear as BLOBs and cannot be read.
If I go back to my SQL Server VM, even with the SYSADMIN right and with the connection string parameter Column Encryption Setting=enabled, I am not allowed to query the table:

Blog_AlwaysEncrypted_part2_13

The error indicates that, on my SQL Server VM, I don't have access to the certificate used to encrypt the column master key, and so I'm not able to decrypt the encrypted columns.
If I don’t use the parameter Column Encryption Setting=enabled during my connection via SQL Server Management Studio and try to query my table:

Blog_AlwaysEncrypted_part2_14

I can see my table row(s) but the encrypted columns appear as BLOBs and so are protected from unexpected readers, which is exactly the goal of this new feature: preventing SQL Server DBAs or users with certain rights from viewing critical data.
To preserve this security, the certificate must not be saved in the certificate store of the SQL Server VM.
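As a quick sanity check, the same Get-ChildItem call used earlier, run on the SQL Server VM, should return nothing:

# on the SQL Server VM: this should return no certificate
Get-ChildItem Cert:\LocalMachine\My -DnsName "Demo_AlwaysEncrypted.cert"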
That’s all Folks! ;-)

 

Cet article SQL Server 2016: Always Encrypted – part 2 est apparu en premier sur Blog dbi services.

Documentum Multiple ADTS: Switching rendition queue

Wed, 2016-06-29 08:29

As part of my previous posts about having 2 rendition servers for one docbase (see below), I'll show you how to simply switch a rendition queue item to the other server:
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-1/
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-2/

I had an issue at a customer where one of the two rendition servers had been stuck for 2 days. As I explained in my previous posts, each server will reserve a group of items from the queue for it to process. Let's say the threshold is set to 5 items. Each server will reserve 5 items in dmi_queue_item and set the attribute sign_off_user to itself, e.g. RNDSERVER_DOCBASE1.

Then it will process each item one by one; once one is done it will reserve a new one from the queue, and so on.

The problem is: if the rendition server is stuck for whatever reason, all reserved items will NOT go back to the available pool. It means that they will stay reserved by THIS rendition server until you fix the server and it starts processing them again.

You can imagine the situation at the customer: some documents had not been rendered for 2 days!
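To see how many items each rendition server has currently reserved, a quick DQL like the following can help (it assumes the standard dm_autorender_win31 rendition queue; adapt the queue name to your setup):

select sign_off_user, count(*)
from dmi_queue_item
where name = 'dm_autorender_win31'
group by sign_off_user;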

So here is the simplest solution to put the items back in the pool:

update dmi_queue_item objects set sign_off_user ='' where sign_off_user ='RNDSERVER_DOCBASE1';

Hence all items will be set as available again. The other rendition server should now reserve them, as the stuck server can't reserve more items.

If a big file is being processed by the first server and you want the documents to be processed by the other one, you can reserve items yourself manually with:

update dmi_queue_item objects set sign_off_user='RNDSERVER2_DOCBASE1' where item_id in ('09xxxx','09xxxx');

If you have any questions please use the comment section.

 

Cet article Documentum Multiple ADTS: Switching rendition queue est apparu en premier sur Blog dbi services.

PostgreSQL as a central reporting hub? – Yes, of course

Tue, 2016-06-28 15:31

For every client we do projects for, there usually are database systems from various vendors. In most places you at least find some Oracle and MS SQL Server databases, and probably somewhere there are even some MySQL/MariaDB or PostgreSQL instances running. The Oracle and MS SQL Server stuff is almost always there because vendors require it. For MySQL/MariaDB the common use cases are web applications, ticketing systems and home-grown developments. At some point in time there might be a requirement to centralize important data of all these databases into a central reporting hub. The more data you have, the more there is a requirement to analyze it and to create reports that drive some sorts of decisions. PostgreSQL is very well prepared to assist in this. If you have read the last posts about connecting your PostgreSQL instance to either Oracle, MS SQL Server, MariaDB/MySQL or even other PostgreSQL instances, you might already know in what direction this post will go.

This is a sample use case: There is a requirement to centralize all the sales figures of a company into one big reporting database. The (simplified) database landscape in this company looks exactly as described above:

pg_reporting_1

There is one Oracle database, one MS SQL Server instance, one MariaDB/MySQL instance and one PostgreSQL instance. Each of them holds some sales figures which shall be integrated into the reporting database. For the demo setup I use the Dell DVD Store Database sample schema; you can read how to set this up for each of these databases in the posts mentioned above.

As the goal is to have a PostgreSQL instance as a central reporting hub the desired database landscape will approximately look like this:

pg_reporting_2

Let's assume all the foreign data wrappers are already set up in the central PostgreSQL instance:

(postgres@[local]:4445) [postgres] > \dx
                                                 List of installed extensions
     Name     |    Version    |   Schema   |                                    Description                                    
--------------+---------------+------------+-----------------------------------------------------------------------------------
 mysql_fdw    | 1.0           | public     | Foreign data wrapper for querying a MySQL server
 oracle_fdw   | 1.1           | public     | foreign data wrapper for Oracle access
 plpgsql      | 1.0           | pg_catalog | PL/pgSQL procedural language
 postgres_fdw | 1.0           | ds2_oracle | foreign-data wrapper for remote PostgreSQL servers
 tds_fdw      | 2.0.0-alpha.1 | public     | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server)

In addition, let's assume all the foreign servers and all the user mappings are there and working:

(postgres@[local]:4445) [postgres] > select srvname,srvoptions from pg_foreign_server;
   srvname    |                                      srvoptions                                       
--------------+---------------------------------------------------------------------------------------
 mysql_server | {host=192.168.22.41,port=3306}
 mssql_svr    | {servername=192.168.22.102,port=1433,database=ds2,tds_version=7.3,msg_handler=notice}
 oracle_srv   | {dbserver=//192.168.22.42/DELLDVD}
 postgres_srv | {host=192.168.22.40,port=5432,dbname=ds2}
(4 rows)

(postgres@[local]:4445) [postgres] > select * from pg_user_mappings;
 umid  | srvid |   srvname    | umuser | usename  |               umoptions                
-------+-------+--------------+--------+----------+----------------------------------------
 65547 | 65546 | mysql_server |     10 | postgres | {username=web,password=web}
 65587 | 65586 | mssql_svr    |     10 | postgres | {username=ds2user,password=ds2}
 65615 | 65614 | oracle_srv   |     10 | postgres | {user=DS2,password=ds2}
 65676 | 65675 | postgres_srv |     10 | postgres | {user=ds2,password=ds2}

As the central reporting database is already connected to all the other database systems, how could you organize the fetching of the data? One approach is to create a separate schema for each of the foreign databases and one additional schema that combines the data:

(postgres@[local]:4445) [postgres] > create schema ds2_mssql;
(postgres@[local]:4445) [postgres] > create schema ds2_oracle;
(postgres@[local]:4445) [postgres] > create schema ds2_mysql;
(postgres@[local]:4445) [postgres] > create schema ds2_postgresql;
(postgres@[local]:4445) [postgres] > create schema ds2_combined;
(postgres@[local]:4445) [postgres] > \dn
      List of schemas
      Name      |  Owner   
----------------+----------
 ds2_combined   | postgres
 ds2_mssql      | postgres
 ds2_mysql      | postgres
 ds2_oracle     | postgres
 ds2_postgresql | postgres
 public         | postgres
(6 rows)

The next step is to create the foreign tables we want to work with.

For Oracle, MySQL/MariaDB and PostgreSQL we can use the import foreign schema command:

postgres=# import foreign schema "DS2" from server oracle_srv into ds2_oracle;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "DS2" from server mysql_server into ds2_mysql;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "public" from server postgres_srv into ds2_postgresql;
IMPORT FOREIGN SCHEMA

For MS SQL Server we need to specify the foreign tables on our own (there is a feature request to implement import foreign schema):

create foreign table ds2_mssql.orders 
  ( orderid numeric not null
  , orderdate timestamp(0) without time zone not null
  , customerid numeric                                
  , netamount numeric(12,2) not null 
  , tax numeric(12,2) not null 
  , totalamount numeric(12,2) not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orders', row_estimate_method 'showplan_all');

create foreign table ds2_mssql.orderlines
  ( orderlineid numeric not null 
  , orderid numeric not null 
  , prod_id numeric not null 
  , quantity numeric not null 
  , orderdate timestamp(0) without time zone not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orderlines', row_estimate_method 'showplan_all');

Having all the foreign tables available we can start to work with the data either by creating materialized views:

create materialized view ds2_combined.mv_orders as
  select * from ds2_mssql.orders
  union all
  select * from ds2_mysql."ORDERS"
  union all
  select * from ds2_oracle.orders
  union all
  select * from ds2_postgresql.orders
  with no data;
refresh materialized view ds2_combined.mv_orders with data;

… or by importing the data into PostgreSQL and then building reports on top of that:

BEGIN;
  insert into ds2_combined.orders select * from ds2_mssql.orders;
  insert into ds2_combined.orders select * from ds2_mysql."ORDERS";
  insert into ds2_combined.orders select * from ds2_oracle.orders;
  insert into ds2_combined.orders select * from ds2_postgresql.orders;
END;
BEGIN;
  insert into ds2_combined.orderlines 
         select * from ds2_mssql.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_mysql."ORDERLINES";
  insert into ds2_combined.orderlines 
         select * from ds2_oracle.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_postgresql.orderlines;
END;

Having all the data locally available you can do whatever you want with it. Have fun with reporting on your data …
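As a trivial example, a report like the monthly revenue across all four source systems now becomes a single local query (assuming the ds2_combined tables have been filled as above):

select date_trunc('month', orderdate) as month
     , count(*)                       as nb_orders
     , sum(totalamount)               as revenue
  from ds2_combined.orders
 group by date_trunc('month', orderdate)
 order by 1;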

Btw: The source for this was a session at the Swiss PGDAY 2016. You can download the slides there.

 

Cet article PostgreSQL as a central reporting hub? – Yes, of course est apparu en premier sur Blog dbi services.

Statspack Idle Events

Tue, 2016-06-28 11:26

When you don't have Enterprise Edition + Diagnostic Pack, you cannot use AWR, but you can, and should, install Statspack. Statspack is still there, but unfortunately does not evolve a lot. The most important section, the 'Top 5 Timed Events', shows only foreground events, or is supposed to do so. When a user process waits on a background process, this section must count only the foreground wait and not the activity of the background process, or we would have double accounting. The background activity is included in the 'Idle' events in order to be excluded from this section.
But unfortunately, new versions come with new wait events, and the list of Statspack idle events is not up to date anymore.

Here is the ‘Top 5 Timed Events’ I got from a database between 22:00 and 23:00 where there is no application activity:

Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
LGWR worker group idle 6 22,049 ###### 65.2
AQPC idle 120 3,602 30014 10.7
heartbeat redo informer 3,568 3,601 1009 10.7
lreg timer 1,195 3,598 3011 10.6
direct path read 31,221 466 15 1.4
-------------------------------------------------------------

Hmm. What can you do with that? Idle events and timers are at the top. Direct path reads seem to be minimal. And no CPU usage?
Obviously, something is wrong here.

Statspack uses a fixed list of wait events that are considered as 'idle' events, and it is stored at Statspack installation time into STATS$IDLE_EVENT.
This comes from an age where wait classes did not exist. In current versions, a more realistic list of idle wait events is in V$EVENT_NAME where wait_class='Idle'.

Let’s compare them (that’s in 12.1.0.1)

First, are there some non-idle events that are considered as ‘idle’ by Statspack?


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s left outer join V$EVENT_NAME v on s.event=v.name where wait_class<>'Idle';
 
STATSPACK_IDLE_EVENT NAME WAIT_CLASS
--------------------------------------- --------------------------------------- ----------
null event null event Other
SQL*Net message to client SQL*Net message to client Network
SQL*Net more data from client SQL*Net more data from client Network
KSV master wait KSV master wait Other
parallel recovery slave wait for change parallel recovery slave wait for change Other

The goal of this blog post is not to detail the meaning of each of those events (search for them on tanelpoder.com as a good start for that), but if they are now considered as non-idle, Statspack should obey the same rule.

Then we can check which idle events are not in the Statspack list:


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s right outer join V$EVENT_NAME v on s.event=v.name where v.wait_class='Idle' and s.event is null;
 
STATSPACK_IDLE_EVENT NAME WAIT_CLASS
-------------------- ------------------------------------------ ----------
OFS idle Idle
heartbeat redo informer Idle
LGWR worker group idle Idle
Recovery Server waiting for work Idle
Recovery Server waiting restore start Idle
Recovery Server Surrogate wait Idle
Recovery Server Servlet wait Idle
Recovery Server Comm SGA setup wait Idle
parallel recovery coordinator idle wait Idle
recovery sender idle wait Idle
recovery receiver idle wait Idle
recovery merger idle wait Idle
virtual circuit next request Idle
lreg timer Idle
REPL Apply: txns Idle
REPL Capture/Apply: messages Idle
REPL Capture: archive log Idle
PL/SQL lock timer Idle
Emon coordinator main loop Idle
Emon slave main loop Idle
AQ: 12c message cache init wait Idle
AQ Cross Master idle Idle
AQPC idle Idle
Streams AQ: load balancer idle Idle
Sharded Queues : Part Maintenance idle Idle
REPL Capture/Apply: RAC AQ qmn coordinator Idle
iowp msg Idle
iowp file id Idle
netp network Idle
gopp msg Idle

There are a lot of them. We can see many idle events that have been introduced in recent versions.

The Statspack list is an old list. Here is how to refresh it:


delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
commit;

Once I did that and ran a new Statspack report on the same snapshots as above, I got a more realistic 'Top 5 Timed Events':


Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
direct path read 31,221 466 15 48.7
CPU time 310 32.4
db file sequential read 49,137 77 2 8.0
SQL*Net vector data to client 15,189 31 2 3.3
enq: TM - contention 1 24 23937 2.5

Actually, this 22:00 to 23:00 time period is when the maintenance jobs are running. dbms_space.auto_space_advisor_job_proc likes to read your tables in bulk in order to see if there is some free space. And I don't like this 24-second TM lock wait at a time when I thought the database was not in use. This was hidden in the original report.

Patch?

Statspack is still supported and there’s a patch to add the following events as idle:
"virtual circuit next request" "AQ Cross Master idle" "AQ: 12c message cache init wait" "AQPC idle" "Emon coordinator main loop" "Emon slave main loop" "LGWR worker group idle" "OFS idle" "REPL Apply: txns" "REPL Capture/Apply: RAC AQ qmn coordinator" "REPL Capture/Apply: messages" "REPL Capture: archive log" "Recovery Server Comm SGA setup wait" "Recovery Server Servlet wait" "Recovery Server Surrogate wait" "Recovery Server waiting for work" "Recovery Server waiting restore start" "Sharded Queues : Part Maintenance idle" "Streams AQ: load balancer idle" "gopp msggopp msg" "heartbeat redo informer" "iowp file id" "iowp msg" "lreg timer" "netp network" "parallel recovery coordinator idle wait" "recovery merger idle wait" "recovery receiver idle wait" "recovery sender idle wait" "imco timer" "process in prespawned state"
(Nice way to be referenced by google for all those improbable wait events, isn’t it?)

However, I think that filling STATS$IDLE_EVENT from V$EVENT_NAME, or maybe even replacing it with a view, can be a better long-term solution. Each version comes with new wait events and it seems that Statspack evolves only through patches.
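Here is what such a replacement could look like, as a rough sketch only (it assumes the default PERFSTAT owner, and PERFSTAT needs a direct grant on V_$EVENT_NAME for the view to compile):

drop table perfstat.stats$idle_event;
create view perfstat.stats$idle_event as
  select name event from v$event_name where wait_class='Idle';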

 

Cet article Statspack Idle Events est apparu en premier sur Blog dbi services.

SQL Server 2016 – Query Store: retrieve query that doesn’t appear!

Tue, 2016-06-28 07:25

For our event SQL Server 2016 in September, I am studying the new functionality Query Store.
My colleague David Barbarin wrote a few months ago about Query Store and how it works.

Simple SELECT Query

To begin, I execute a simple SELECT on a table with 3 different methods:

SELECT * FROM QS_test;
exec sp_executesql N'SELECT * FROM QS_test'
EXEC sp_GetQS_test;

The "sp_GetQS_test" is a stored procedure containing the select statement.
I created a little query with the Query Store specific DMVs to analyze the query:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id 
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id   
  WHERE query_sql_text='SELECT * FROM QS_test';

QS_01
As you can see in the result, all 3 queries are present with the same query text (‘SELECT * FROM QS_test’) in the query store.

SELECT Query with a Where clause

I continue my test with a select and a where clause:

  SELECT * FROM QS_test WHERE rid=5

I run my query to find the query in the query store:
QS_02
And, Oh surprise, no query found! The query does not appear in the query store….
I rerun my query without the where clause to see if I find something:
QS_03
The result gives me a query written differently:
(@1 tinyint)SELECT * FROM [QS_test] WHERE [rid]=@1

This query went through parameterization, and to retrieve this information we use a new function in SQL Server 2016: fn_stmt_sql_handle_from_sql_stmt

Function sys.fn_stmt_sql_handle_from_sql_stmt

This function gives us the SQL handle for the query
QS_04
Then, I add the function to my query to find it in the Query Store:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,

qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,qsq.last_compile_start_time,

qsq.last_execution_time,qsq.avg_compile_duration,qsp.query_id,qsp.plan_id,qsrs.execution_type_desc

FROM sys.query_store_query_text qsqt

inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id

inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id

inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id

CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5',NULL) fsshfss

WHERE qsqt.statement_sql_handle=fsshfss.statement_sql_handle;

QS_05

It’s done, I retrieve the query thanks to this new function.
You can notice that I use the statement_sql_handle column and not the query_sql_text column in the WHERE clause.

I have tested with the query_sql_text column and you can see here the error that I get…
QS_06
The query_sql_text from the function uses the SQL_Latin1_General_CP1_CI_AS collation, while both my database and my instance use French_CI_AS. But this is not hopeless…
So, if you want to use the query with query_sql_text, you just need to specify the collation with the COLLATE keyword.
QS_07
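As a sketch, the comparison on query_sql_text can look like this (assuming, as above, that the function output exposes a query_sql_text column and that the database collation is French_CI_AS):

SELECT qsqt.query_text_id, qsqt.query_sql_text, qsq.query_id
FROM sys.query_store_query_text qsqt
INNER JOIN sys.query_store_query qsq ON qsq.query_text_id = qsqt.query_text_id
CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5', NULL) fsshfss
WHERE qsqt.query_sql_text = fsshfss.query_sql_text COLLATE French_CI_AS;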

 

Cet article SQL Server 2016 – Query Store: retrieve query that doesn’t appear! est apparu en premier sur Blog dbi services.

Trace Flag 4199 or not in SQL Server 2016?

Tue, 2016-06-28 04:34

Some Trace Flags like T1117 and T1118 are no longer needed, as you can read on David's blog.
But that's not all, there is also T4199…

What does the Trace Flag T4199 do?

This Trace Flag enables all query optimizer fixes.
A lot of DBAs have enabled this Trace Flag globally during the build of a new server.
If you want to take advantage of an enhancement or a fix, the T4199 becomes a necessity…

But now, with SQL Server 2016, this is an old story!

SQL Server 2016 eliminates the need for T4199

In SQL Server 2016, you don't need to enable this Trace Flag anymore, so forget your old habit!
It is automatically included when you change the COMPATIBILITY LEVEL to 130:

USE [master]
GO
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130
GO

Advice: If you find an unexpected/poor plan, use the Query Store to analyze it and force a plan!
More information on the KB974006
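For example, once the Query Store has shown you the query_id and the plan_id you want to keep (the IDs below are just placeholders):

-- force the chosen plan for the query (hypothetical IDs)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 5;
-- and to remove the forcing later if needed
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 5;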

 

Cet article Trace Flag 4199 or not in SQL Server 2016? est apparu en premier sur Blog dbi services.

Swiss PGDAY 2016, Slides are online

Tue, 2016-06-28 03:15

Last Friday the Swiss PGDAY happened in Rapperswil. All the slides as well as some pictures are now available (tab "Programm"):

pgday_logo

 

Cet article Swiss PGDAY 2016, Slides are online est apparu en premier sur Blog dbi services.

Manage DDL inside SharePlex

Fri, 2016-06-24 15:30

In a previous blog post (http://blog.dbi-services.com/discovering-shareplex-for-oracle/) about SharePlex, we presented how to set up a replication between 2 environments. In this article we will see how SharePlex deals with replicating DDL statements.
Before starting, we present below our environment and our configuration file. We are just replicating scott_atlas to scott_atlasrep2. We suppose that SharePlex is already configured.

Server Name            Database Name   Oracle Version   What
atlas.localdomain      SPLEXDB         12.1.0.2         Source
atlasrep2.localdomain  SPLEXSTR2       12.1.0.2         Target
[oracle@atlas config]$ cat ddl_config_atlasrep2.cnf
datasource:o.SPLEXDB

#source tables            target tables            routing map

expand scott_atlas.%       scott_atlasrep2.%       atlasrep2@o.SPLEXSTR2

 

Let’s start SharePlex on both source and target

[oracle@atlas ~]$ $SP_HOME/bin/sp_cop -u $NAME_FOR_ALERTING &
[1] 2617
[oracle@atlas ~]$

*******************************************************
* SharePlex for Oracle Startup
* Copyright 2016 Dell, Inc.
* ALL RIGHTS RESERVED.
* Protected by U.S. Patents: 7,461,103 and 7,065,538
* Version: 8.6.4.66-m64-oracle120
* VarDir : /u01/app/shareplex/vardir/splexdb_864_12_2105
* Port : 2105
*******************************************************
[oracle@atlasrep2 ~]$ $SP_HOME/bin/sp_cop -u $NAME_FOR_ALERTING &
[1] 2437
[oracle@atlasrep2 ~]$

*******************************************************
* SharePlex for Oracle Startup
* Copyright 2016 Dell, Inc.
* ALL RIGHTS RESERVED.
* Protected by U.S. Patents: 7,461,103 and 7,065,538
* Version: 8.6.4.66-m64-oracle120
* VarDir : /u01/app/shareplex/vardir/splexstr2_864_12_2105
* Port   : 2105
*******************************************************

Now let’s verify that SharePlex processes are running on both source and target

[oracle@atlas ~]$ $SP_HOME/bin/sp_ctrl


sp_ctrl (atlas:2105)> show

Process    Source                               Target                 State                   PID
---------- ------------------------------------ ---------------------- -------------------- ------
Capture    o.SPLEXDB                                                   Running                2618
Read       o.SPLEXDB                                                   Running                2620
Export     atlas                                atlasrep2              Running
[oracle@atlasrep2 ~]$  rlwrap $SP_HOME/bin/sp_ctrl

sp_ctrl (atlasrep2:2105)> show

Process    Source                               Target                 State                   PID
---------- ------------------------------------ ---------------------- -------------------- ------
Import     atlas                                atlasrep2              Running                2734
Post       o.SPLEXDB-atlas                      o.SPLEXSTR2            Running                2438

SharePlex provides many parameters to handle DDL replication. We are going to see some of them. To display SharePlex parameters, just execute

sp_ctrl (atlas:2105)> list param

SP_OCT_REPLICATE_DDL:  Possible values 0/1/2/3

Manages the replication of ALTER and TRUNCATE statements. It can have the following values (see the small sp_ctrl sketch after the list):

0: No replication of ALTER or TRUNCATE

1: Replication of ALTER only

2: Replication of TRUNCATE only

3: Replication of both ALTER and TRUNCATE (default value)
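
As announced above, here is a small sp_ctrl sketch to display and change this parameter (the value 1 is only an example, to keep ALTER replication and drop TRUNCATE replication):

sp_ctrl (atlas:2105)> list param
sp_ctrl (atlas:2105)> set param SP_OCT_REPLICATE_DDL 1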

Let’s show an example with the default value 3: ALTER and TRUNCATE DDL performed in the scott_atlas schema should be replicated. On both the source and the target, we can see that the table bonus is not empty.

From the source

SQL> select name from v$database;

NAME
---------
SPLEXDB
SQL> conn scott_atlas/tiger
Connected.
SQL> select * from bonus;

ENAME      JOB              SAL       COMM
---------- --------- ---------- ----------
Moise      IT              2300        200

 

From the target

SQL> select name from v$database;

NAME
---------
SPLEXSTR

SQL> conn scott_atlasrep2/tiger
Connected.
SQL> select * from bonus;

ENAME      JOB              SAL       COMM
---------- --------- ---------- ----------
Moise      IT              2300        200

Let’s truncate table bonus and let’s add 2 new columns on the source

SQL> show user
USER is "SCOTT_ATLAS"
SQL> truncate table bonus;

Table truncated.

SQL> alter table bonus add (col_net number,col_var number);

Table altered.

SQL>  select * from bonus;

no rows selected

We can see that both DDL statements are replicated on the target:

SQL> show user
USER is "SCOTT_ATLASREP2"
SQL> select * from bonus;

no rows selected

SQL> desc bonus
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 SAL                                                NUMBER
 COMM                                               NUMBER
 COL_NET                                            NUMBER
 COL_VAR                                            NUMBER

SP_OCT_AUTOADD_ENABLE:  Possible values 0/1

Manages newly created objects that match the active configuration (objects covered by the configuration file). The default value is 1, meaning that a newly created table in the scott_atlas schema will automatically be replicated. SharePlex will also replicate all the corresponding DML.
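
If this behaviour is not wanted, the parameter can be switched off with the same set param syntax used later in this article (a sketch, using the sp_ctrl prompt of our source):

sp_ctrl (atlas:2105)> set param SP_OCT_AUTOADD_ENABLE 0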
Let’s create a table on the source and let’s insert some data

SQL> show user
USER is "SCOTT_ATLAS"
SQL> create table mytable (id number primary key,message long);

Table created.

SQL> insert into mytable values (1,'Ceci est un message important concernant...');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mytable;

        ID
----------
MESSAGE
------------------------------------------------------------
         1
Ceci est un message important concernant...

We can verify that the new table and corresponding data are replicated on the target.

SQL> show user
USER is "SCOTT_ATLASREP2"
SQL> select * from mytable;

        ID
----------
MESSAGE
--------------------------------------------------------------------------------
         1
Ceci est un message important concernant...

 

SP_OCT_REPLICATE_ALL_DDL: Possible values 0/1

Manages DDL replication for objects that are not listed in the configuration file. The default value is 0, which means that DDL on objects that are not part of the active configuration will not be replicated. Let’s take an example with the toto schema (just remember that in our configuration file only scott_atlas objects are being replicated).
From the source, let’s create a table test1 in the toto schema:

SQL> connect toto/toto
Connected.
SQL> create table test1(id number);

Table created.

As expected, we can verify that the table test1 has not been replicated to the toto schema on the target:

SQL> conn toto/toto
Connected.
SQL> desc test1
ERROR:
ORA-04043: object test1 does not exist

Now let’s set the value of the parameter to 1 on the source

sp_ctrl (atlas:2105)> set param SP_OCT_REPLICATE_ALL_DDL 1

And let’s create a table test2 in the toto schema and insert a row on the source:

SQL> create table test2(id number);

Table created.

SQL> insert into test2 values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test2;

        ID
----------
         1

We can see that the DDL (the table creation) is replicated on the target, but not the corresponding DML:

SQL> select * from test2;

no rows selected

SQL>

That means that if an object is not part of the active configuration and the parameter SP_OCT_REPLICATE_ALL_DDL is set to 1, its DDL will be replicated but its data will not.

 

Cet article Manage DDL inside SharePlex est apparu en premier sur Blog dbi services.
