Yann Neuhaus

dbi services technical blog

Oracle Multitenant feature name

Sun, 2016-07-10 15:46

We all know what the multitenant option is: a container database (CDB) with multiple user pluggable databases (PDB). This requires Enterprise Edition plus the Multitenant option. But you can use the same architecture without the option, and even in Standard Edition: a CDB with only one user PDB. It is called “single-tenant” or “lone PDB”. So what do you call this new 12c architecture?

I call “multitenant architecture” the following set of features, available in all editions:

  • dictionary separation between system metadata/objects and user metadata/objects
  • unplug / plug features to transport PDBs
  • create a new PDB as a clone of another existing PDB
  • ability for a session to switch between PDB and CDB$ROOT, explicitly with ‘ALTER SESSION’ or implicitly through metadata and object links
  • ability to read data from several containers in one query (see the sketch after this list)
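
As a minimal illustration of the last two points, here is a quick sketch (a sketch only, assuming 12.1.0.2 where the CONTAINERS() clause is available; the PDB name is hypothetical):

-- switch the session explicitly between containers (as a common user)
alter session set container = PDB1;
alter session set container = CDB$ROOT;

-- read data from several containers in one query, run from CDB$ROOT
select con_id, count(*) from containers(dba_objects) group by con_id;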

But it may look strange to use the “multitenant” term when you are in Standard Edition, or in Enterprise Edition without the option.

Container database (CDB)

One idea would be to simply call it ‘CDB architecture’, because it’s the architecture of the container database, and because a database without this architecture is called “non-CDB”:

@franckpachot Oracle docs says „CDB architecture“ and „Multitenant Architecture“ – I prefer CDB, because there’s Non-CDB, too :)

— Markus Flechtner (@markusdba) July 8, 2016

Consolidated database (CDB)

However, as very often with Oracle, the names change and the acronyms remain. Old names are found in the software before the new name is made public to market it. We all know that CDB stands for “container database”, because a CDB is a container that contains at least 3 containers (CDB$ROOT, PDB$SEED and your PDB), and more when you have the multitenant option. However, I see no mention of that name in the ORACLE_HOME/rdbms/admin scripts:
[oracle@CDB ~]$ grep -i "container database" $ORACLE_HOME/rdbms/admin/* | wc
0 0 0

It seems that the original meaning of CDB was “consolidated database”:

[oracle@CDB ~]$ grep -i "consolidated database" $ORACLE_HOME/rdbms/admin/* | wc
58 465 7276
 
[oracle@CDB ~]$ grep -i "consolidated database" $ORACLE_HOME/rdbms/admin/*.bsq
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dcore.bsq:REM Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dcore.bsq:REM SEED Pluggable Database in a Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/doptim.bsq:Rem the caching in library cache in a Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dsec.bsq: * Consolidated Database
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dsec.bsq: * Consolidated Database

So, the idea behind CDB is the same as behind multitenant: the consolidation of several pluggable databases. However, consolidation is not really what single-tenant does: there is only one user PDB per CDB, and the architecture requires 3 containers instead of one non-CDB.

Pluggable databases

Another idea would be to call it “pluggable databases architecture”, because this is the real evolution. User tablespaces have been transportable for a long time, since the introduction of extended ROWIDs and locally managed tablespaces in 8i. 12c brought the same for the system tablespaces, so that PDBs are fully transportable physically, thanks to the separation of the dictionary.

The real point of this new architecture is the total separation of system data/metadata and user data/metadata, and the separation of the system DBA role from the application DBA role: this is what pluggable databases are about. Multitenancy is just one thing that is possible with this new architecture. It’s important today because it’s required for the cloud (for consolidation and easy provisioning).

Oracle Multitenant

Actually, the feature was called ‘Oracle Pluggable Database’ until 12.1.0.1 was released publicly with a new name: Oracle Multitenant. And this is the name we can see in Database Feature Usage Statistics. Well, if you look at the first patchset of 12cR1, 12.1.0.2, you will see the old name ‘Oracle Pluggable Databases’, but this is a bug (Patch 20718081 changes the name back).

Here is what you see from Database Feature Usage Statistics after an upgrade from 12.1.0.1 to 12.1.0.2, in EM Express:

[Screenshot: Database Feature Usage Statistics in EM Express]

And from DBA_FEATURE_USAGE_STATISTICS view:


SQL> select name, version, detected_usages, currently_used, aux_count from dba_feature_usage_statistics where name like '%Multitenant%' or name like '%Pluggable%';
 
NAME VERSION DETECTED_USAGES CURRE AUX_COUNT
------------------------------ ----------------- --------------- ----- ----------
Oracle Multitenant 12.1.0.1.0 98 FALSE 1
Oracle Pluggable Databases 12.1.0.2.0 32 FALSE 1

So, bug aside, the name of the feature is ‘Oracle Multitenant’, and this is true for any container database, because the feature is considered as used as soon as V$DATABASE.CDB='YES':
select count(*) into feature_boolean from v$database where cdb = 'YES'

The usage of the multitenant option is detected by AUX_COUNT, which is the number of user PDBs:
select count(*) into aux_count from v$pdbs where con_id > 2
CON_ID=0 is for the CDB itself, CON_ID=1 is for CDB$ROOT, CON_ID=2 is for PDB$SEED, and user PDBs start at CON_ID=3.

So the name is “multitenant” whatever the number of PDBs.

So what?

I prefer to stick with “multitenant architecture” even when it is used without the multitenant option. It’s a good way to keep in mind that, even if it brings a lot of interesting features for single-tenant as well, Oracle has taken the step of dictionary separation with the motivation of Cloud, Consolidation and Multitenancy. Charged options are a good trigger to increase the priority of evolution requests…
However, this “multitenant architecture” brings very interesting features to Standard Edition and to Enterprise Edition even without the option. And if you doubt it, I’ll try to convince you in San Francisco, September 18th.

 


SQL Server 2016 AlwaysOn: Direct seeding and performance considerations

Fri, 2016-07-08 13:05

This blog post follows the previous one about the new direct seeding feature shipped with SQL Server 2016. As a reminder, I had some doubts about using direct seeding with large databases because the log stream is not compressed by default, but I forgot the performance improvements described in the Microsoft BOL. I remember talking about them a couple of months ago in this blog post.

Microsoft did a good job of improving the AlwaysOn log transport layer, and it is very interesting to compare two methods of adding a 100 GB database: the usual way, which includes backup and restore operations from the primary to the secondary, or the direct seeding feature. Which one is the quickest?

Let’s just have a quick look at my test environment. Two virtual machines with the following configuration:

  • 4x Intel Core i7-3630QM 2.3 GHz
  • 4GB of RAM
  • 2x 10 Gbps network cards
  • One disk that will host both the database data and log files on my primary replica (Samsung Portable SSD T3 500GB with S.M.A.R.T, NCQ and TRIM)
  • One disk that will host both the database data and log files on my secondary replica (Samsung Portable SSD T1 500GB with S.M.A.R.T, NCQ and TRIM)
  • One disk that will host backups (Samsung SSD 840 EVO, 250GB with S.M.A.R.T, NCQ and TRIM) used by both virtual machines

As an aside, each SSD disk is able to deliver at least 450MB/s and 35000 IOPS.

[Figure: lab environment]

I also used a custom database named GestionCom that contains roughly 100 GB of data for my test. 100 GB should be sufficient to get relevant results.

 

[Figure: the GestionCom database]

 

Let’s go ahead and compare both synchronization methods.

 

First test by using the usual way to add a database to an availability group

As said earlier, my first test consists of using the usual way to add a database to an availability group. Let’s say that we may use 3 options for data synchronization: FULL, join only, and skip initial synchronization. We will use the first one for this test, which includes all the steps: backup and restore the concerned database and then join it to the availability group. At this point we may easily imagine that most of the time will be consumed by the backup and restore steps. I also want to point out that I deliberately did not use fine-tuning options like BUFFERCOUNT, MAXTRANSFERSIZE or splitting the backup into several media files, in order to stay compliant with the availability group wizard.
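
For reference, this is a sketch of the kind of statements behind those steps (the backup share path is an assumption; the database and group names come from my environment):

-- on the primary: backup to the local backup disk shared on the network
BACKUP DATABASE [GestionCom] TO DISK = N'\\backupserver\backup\GestionCom.bak' WITH CHECKSUM, COMPRESSION;
BACKUP LOG [GestionCom] TO DISK = N'\\backupserver\backup\GestionCom.trn' WITH CHECKSUM;

-- on the secondary: restore from the network share, leaving the database in restoring state
RESTORE DATABASE [GestionCom] FROM DISK = N'\\backupserver\backup\GestionCom.bak' WITH CHECKSUM, NORECOVERY;
RESTORE LOG [GestionCom] FROM DISK = N'\\backupserver\backup\GestionCom.trn' WITH NORECOVERY;

-- join the database to the availability group
ALTER AVAILABILITY GROUP [TestGrp] ADD DATABASE [GestionCom];         -- on the primary
ALTER DATABASE [GestionCom] SET HADR AVAILABILITY GROUP = [TestGrp];  -- on the secondary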

 

Step                                                                          Duration
Backup database to backup local disk (primary) WITH CHECKSUM, COMPRESSION    06’55’’
Restore database from network share (secondary) WITH CHECKSUM, NORECOVERY    17’10’’
Join database to availability group + start synchronization                  00’01’’
Total                                                                         24’06’’

 

What about resource consumption?

On the primary …

[Figure: CPU and disk activity during backup]

 

[Figure: network bandwidth during restore, from the primary]

On the secondary …

 

[Figure: CPU and disk activity during restore]

[Figure: network bandwidth during restore, from the secondary]

CPU utilization is around 35% on average during the test. Moreover, disk write throughput seems to stagnate at around 130 MB/s on average, and this includes both backup and restore activities. The network throughput also seems to stagnate between 135 Mbps and 174 Mbps during my test.

So it is clear that my environment is under-used regarding resource consumption in this first test.

 

Second test by using the new direct seeding method

This time I will use the new database deployment method: direct seeding. As said in my previous blog, this feature greatly simplifies the process of adding a database, but what about the synchronization speed and resource consumption?
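
For context, this is roughly what adding the database with direct seeding looks like (a sketch; the replica names are assumptions, only TestGrp and GestionCom come from my environment):

-- on the primary: switch the replicas to automatic seeding and add the database
ALTER AVAILABILITY GROUP [TestGrp] MODIFY REPLICA ON N'SRV1' WITH (SEEDING_MODE = AUTOMATIC);
ALTER AVAILABILITY GROUP [TestGrp] MODIFY REPLICA ON N'SRV2' WITH (SEEDING_MODE = AUTOMATIC);
ALTER AVAILABILITY GROUP [TestGrp] ADD DATABASE [GestionCom];

-- on the secondary: allow the availability group to create the database
ALTER AVAILABILITY GROUP [TestGrp] GRANT CREATE ANY DATABASE;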

Well, to get a good picture of what happens during the seeding process, we will use different tools, such as the new sys.dm_hadr_automatic_seeding DMV and extended events. Extended events will help us understand what happens under the covers, but to measure only the duration of the operation we don’t need them. If you look at the event list as well as the categories, you will probably notice a new dbseed category that corresponds to direct seeding. Events in this category are only available from the debug channel. That’s fine, because we want to track when the seeding process starts, when it finishes, and what happens between these two events (like failures, timeouts and progress). By the way, the hadr_physical_seeding_progress event may be very useful to get a picture of the network activity of the concerned seeding session if your network card is shared between other sessions or availability group replication activities. In my case, I’m the only one, and I will get this information directly from the task manager panel.

So let’s create the extended event session:

CREATE EVENT SESSION [hadr_direct_seeding] 
ON SERVER 
ADD EVENT sqlserver.hadr_automatic_seeding_start
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_state_transition
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_success
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_timeout
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_physical_seeding_progress
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
	SET filename = N'hadr_direct_seeding',
	max_file_size = (2048), 
	max_rollover_files = (10))
WITH 
(
	MAX_MEMORY=4096 KB,
	EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
	MAX_DISPATCH_LATENCY = 30 SECONDS,
	MAX_EVENT_SIZE = 0 KB,
	MEMORY_PARTITION_MODE = NONE,
	TRACK_CAUSALITY = OFF,
	STARTUP_STATE = OFF
)
GO

 

After adding the GestionCom database to the TestGrp availability group, the direct seeding feature comes into play. Honestly, it was a very big surprise! Let’s take a look at the network utilization:

[Figure: network bandwidth during seeding]

 

A network usage of 2.2 Gbps on average this time! The direct seeding feature makes much better use of the network bandwidth, and we can clearly see why Microsoft made the effort to improve the synchronization process.

Let’s now move on to the CPU and disk utilization, first from the primary and then from the secondary:

[Figure: disk throughput during seeding, from the primary]

[Figure: disk throughput during seeding, from the secondary]

~350 up to 450 MB/s on each side (a x2 gain), but CPU utilization increased up to 70% during my test (also x2). So better resource usage, but at the cost of higher CPU utilization…

Let’s finish by looking at the sys.dm_hadr_automatic_seeding DMV that provides the answer to the question: are we faster in this case?

select
    ag.name as aag_name,
    ar.replica_server_name,
    d.name as database_name,
    has.current_state,
    has.failure_state_desc as failure_state,
    has.error_code,
    has.performed_seeding,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.start_time) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.completion_time) as completion_time,
    has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
    on ag.group_id = has.ag_id
join sys.availability_replicas as ar
    on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
    on d.group_database_id = has.ag_db_id

 

 

[Figure: sys.dm_hadr_automatic_seeding output]

 

And the answer is yes, as we might expect! Only 8 minutes (a x3 gain) to replicate and synchronize the GestionCom database between the two high-availability replicas, compared to the first method. But that’s not all… let’s focus on the redo thread activity on the secondary, where you may notice a very interesting rate (~12 MB/s). I don’t remember having seen such a value with current availability groups at customer sites. This is the second improvement made by Microsoft: SQL Server 2016 introduces parallel redo capability. As a reminder, before SQL Server 2016 there was only one redo thread per database, and in this context a single redo thread simply could not keep up with applying the changes persisted in the log.

From the secondary, we may see some changes by looking at the sys.dm_exec_requests DMV:

select 
	r.command, 
	r.status,
	r.wait_type,
	r.wait_time,
	r.last_wait_type,
	r.scheduler_id
from sys.dm_exec_requests as r
where r.command like '%REDO%'
order by r.scheduler_id

 

 

[Figure: parallel redo threads]

Using direct seeding is definitely a solution to take into account for future database deployments, but I think we have to keep two things in mind from this test: the CPU and network consumption of the seeding activity may impact the performance of other applications, and vice versa. In the real world, there is a good chance of being in this situation.

Finally, let’s have a look at the extended event output. With respect to what we want to highlight in this blog post, we don’t get any other valuable information, but one thing got my attention: the LIMIT_CONCURRENT_BACKUPS value in the current value column (underlined in red). What does it mean exactly? Let’s talk about it in a future blog post, because this is a little bit out of the scope of the main subject.

[Figure: extended events output with direct seeding]

 

Third test by using direct seeding and compression

Let’s talk about the last test I performed. I used direct seeding without compression in the previous test, since SQL Server does not compress the data stream by default. However, we may force SQL Server to use compression by setting the special trace flag 9567. After all, we want to avoid direct seeding flooding the wire and impacting the existing workload of other applications.
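
Enabling it is a one-liner (a sketch; the trace flag could also be added as a startup parameter):

-- enable compression of the direct seeding stream instance-wide
DBCC TRACEON (9567, -1);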

I have to admit that enabling compression with direct seeding is not so obvious. For instance, I didn’t see any difference in the DMVs indicating that compression is in use (the is_compression_enabled column of the sys.dm_hadr_physical_seeding_stats DMV is always equal to 0, regardless of whether we use compression or not). The only obvious difference comes from the network throughput, which is lower with compression (a x2.5 gain). However, I noticed a significant increase in CPU utilization, close to 100% on the primary in my case.

[Figure: resource usage with seeding and compression, from the primary]

What about the seeding time? Well, I didn’t notice any gain in this area. Does compression save network bandwidth? Maybe… hard to say with only this test and one specific environment.

I tried adding 3 vCPUs to each replica and leaving one vCPU to the system, so a total of 7 vCPUs dedicated to SQL Server.

[Figure: CPU usage during seeding with compression, 8 vCPUs]

At this point, I admit to being a little bit surprised, and I wonder whether compression correctly uses all the available processors, given the uneven distribution of CPU resource usage. The above picture is a good representation of what I saw during the other tests I performed with compression. In addition, I didn’t see any obvious performance gain in terms of duration, except that the wire is less used. I’m a little bit disappointed by compression, but once again it is still much too early to draw a conclusion, and I’m looking forward to seeing direct seeding in action at my customers, on real production infrastructure.

The bottom line is that direct seeding is a very promising feature and I love it, because it is the directly visible part of the AlwaysOn performance improvements shipped with SQL Server 2016. However, and this is my personal opinion, we should not let it fool us, and we should consider using direct seeding carefully, according to our workload and available resources. Fortunately, in most cases it will be suitable.

Stay tuned!

 

 

 


Compare source and target in a Dbvisit replication

Tue, 2016-07-05 13:51

You’ve set up a logical replication, and you trust it. But before the target goes into production, it is safer to compare the source and the target, or at least count the number of rows.
But tables are continuously changing, so how can you compare? It’s not so difficult, thanks to the Dbvisit replicate heartbeat table and Oracle flashback query.

Here is the state of the replication, with activity on the source and real-time replication to the target:
| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days
MINE IS running. Currently at plog 368 and SCN 6119128 (07/06/2016 04:15:21).
APPLY IS running. Currently at plog 368 and SCN 6119114 (07/06/2016 04:15:19).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ADDRESSES: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.CARD_DETAILS: 100% Mine:894/894 Unrecov:0/0 Applied:894/894 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDER_ITEMS: 100% Mine:5955/5955 Unrecov:0/0 Applied:5955/5955 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDERS: 99% Mine:4781/4781 Unrecov:0/0 Applied:4780/4780 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.INVENTORIES: 100% Mine:5825/5825 Unrecov:0/0 Applied:5825/5825 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.LOGON: 99% Mine:6175/6175 Unrecov:0/0 Applied:6173/6173 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
--------------------------------------------------------------------------------------------------------------------------------------------
7 tables listed.

If you want to compare the rows from source and target, you will always see a difference, because modifications on the source arrive on the target a few seconds later.

Source and target SCN

The first thing to do is to determine a consistent point in time where source and target are the same. This point in time exists because the redo log is sequential by nature, and the commits are done in the same order on the target as on the source. And this order is visible with the SCN. The only problem is that on a logical replication the SCNs on source and target are completely different and have their own life.

The first step is to determine an SCN from the target and an SCN from the source that show the same state of transactions.

But before that, let’s connect to the target and set the environment:

$ sqlplus /nolog @ compare.sql
 
SQL*Plus: Release 11.2.0.2.0 Production on Tue Jul 5 18:15:34 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> define table_owner=REPOE
SQL> define table_name=ORDERS
SQL>
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session set nls_date_format='DD-MON-YYYY HH24:mi:ss';
Session altered.
SQL> alter session set nls_timestamp_format='DD-MON-YYYY HH24:mi:ss';
Session altered.

My example is on the #repattack environment, with Swingbench running on the source, and I’ll compare the ORDERS table.

Heartbeat table

Each Dbvisit replicate configuration comes with a heartbeat table created in the Dbvisit schema on the source and replicated to the target. This table is updated every 10 seconds on the source with the timestamp and the SCN. This is a great way to check how the replication is working. Here, it will be the way to get the SCN information from the source.

Flashback query

Oracle flashback query offers a nice way to get the commit SCN for the rows updated in the heartbeat table. From the target database, this is the commit SCN of the replication transaction (the APPLY process), and it can be displayed along with the SCN of the source transaction, which is recorded in the heartbeat table and replicated to the target.

SQL> column versions_startscn new_value scn_target
SQL> column source_scn new_value scn_source
SQL> column mine_process_name format a12
SQL> column versions_starttime format a21
 
SQL> select mine_process_name,wallclock_date,mine_date,source_scn,mine_scn,versions_startscn,versions_starttime,versions_endscn
from DBVREP.DBRSCOMMON_HEARTBEAT versions between timestamp(sysdate-1/24/60) and sysdate
order by versions_endscn nulls last ;
 
MINE_PROCESS WALLCLOCK_DATE MINE_DATE SOURCE_SCN MINE_SCN VERSIONS_STARTSCN VERSIONS_STARTTIME VERSIONS_ENDSCN
------------ -------------------- -------------------- -------------------- -------------------- -------------------- --------------------- --------------------
MINE 06-JUL-2016 04:14:27 06-JUL-2016 04:14:22 6118717 6118661 4791342
MINE 06-JUL-2016 04:14:37 06-JUL-2016 04:14:31 6118786 6118748 4791342 06-JUL-2016 04:11:29 4791376
MINE 06-JUL-2016 04:14:47 06-JUL-2016 04:14:41 6118855 6118821 4791376 06-JUL-2016 04:11:39 4791410
MINE 06-JUL-2016 04:14:57 06-JUL-2016 04:14:51 6118925 6118888 4791410 06-JUL-2016 04:11:49 4791443
MINE 06-JUL-2016 04:15:07 06-JUL-2016 04:15:01 6119011 6118977 4791443 06-JUL-2016 04:11:59 4791479
MINE 06-JUL-2016 04:15:17 06-JUL-2016 04:15:11 6119091 6119059 4791479 06-JUL-2016 04:12:09 4791515
MINE 06-JUL-2016 04:15:27 06-JUL-2016 04:15:21 6119162 6119128 4791515 06-JUL-2016 04:12:19

This shows that the current version of the heartbeat table on the target was committed at SCN 4791515, and we know that this state matches SCN 6119162 on the source. You can choose any pair you want, but the latest will probably be the fastest to query.

Counting rows on source

I’ll use a flashback query to count the rows on the source as of SCN 6119162. I’m doing it in parallel query, but be careful: when the table has high modification activity, there will be a lot of undo blocks to read.

SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*) from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*)
--------------------
775433

Counting rows on target

I’m doing the same from the target, but with the SCN 4791515:
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*) from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*)
--------------------
775433

Good. Same number of rows. This proves that even with constantly inserted tables we can find a point of comparison, thanks to the Dbvisit heartbeat table and to Oracle flashback query. If you are replicating with another logical replication product, you can simulate the heartbeat table with a job that updates a single-row table with the current SCN, and replicate it. If your target is not Oracle, then there is a good chance that you cannot do that kind of ‘as of’ query, which means that you need to lock the table on the source for the time of the comparison.
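
Here is a minimal sketch of such a home-made heartbeat (all names are hypothetical): a scheduler job records the current SCN every 10 seconds in a single-row table that is part of the replication.

create table scn_heartbeat ( id number primary key, scn number, ts timestamp );
insert into scn_heartbeat values ( 1, null, null );
commit;

begin
 dbms_scheduler.create_job(
  job_name        => 'SCN_HEARTBEAT_JOB',
  job_type        => 'PLSQL_BLOCK',
  job_action      => 'update scn_heartbeat set scn=dbms_flashback.get_system_change_number, ts=systimestamp where id=1; commit;',
  repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
  enabled         => true);
end;
/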

ORA_HASH

If you think that counting the rows is not sufficient, you can compare a hash value computed from the columns. Here is an example.
I get the list of columns, apply the ORA_HASH() function to each of them, and sum them:

SQL> column columns new_value columns
SQL> select listagg('ORA_HASH('||column_name||')','+') within group (order by column_name) columns
2 from dba_tab_columns where owner='&table_owner.' and table_name='&table_name';
old 2: from dba_tab_columns where owner='&table_owner.' and table_name='&table_name'
new 2: from dba_tab_columns where owner='REPOE' and table_name='ORDERS'
 
COLUMNS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_
HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)

With this list defined in a substitution variable, I can compare the sum of hash values:

SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*) HASH
-------------------- --------------------
775433 317531150805040439
 
SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*) HASH
-------------------- --------------------
775433 17531150805040439

Note that this is only an example. You must adapt it to your needs: the precision of the comparison and the performance.

So what?

Comparing source and target is not a bad idea. With Dbvisit replicate, if you defined the replication properly and did the initial import with the SCN provided by the setup wizard, you should not miss transactions, even when there is a lot of activity on the source, and even without locking the source for the initialisation. But it’s always good to compare, especially before the ‘Go’ decision of a migration done with Dbvisit replicate to lower the downtime (and the stress). Thanks to the heartbeat table and flashback query, a checksum is not too hard to implement.

 


Script to suggest FK indexes

Mon, 2016-07-04 10:55

In Oracle, when a referenced key is deleted (by a delete on the parent table, or an update on the referenced columns), the child table(s) are locked to prevent any concurrent insert that may reference the old key. This lock is a big issue for OLTP applications because it’s a TM Share lock, usually reserved for DDL only, blocking any modification on the child table as well as some modifications on tables that have a relationship with that child table. This problem can be overcome with an index structure that allows Oracle to find the concurrent inserts that may reference the old value. Here is the script I use to find which index is missing.

The idea is not to suggest indexing all foreign keys, for three reasons:

  • when there are no deletes or updates on the parent side, you don’t have that locking issue
  • when there is minimal write activity on the child side, the lock may not have big consequences
  • you probably have indexes built for performance reasons that can be used to avoid the locking, even when they have more columns or a different column order

The idea is to suggest an index only when blocking locks have actually been observed. Yes, it is a reactive solution, but proactive ones cannot be automatic. If you know your application well and you know what you have to index, then you don’t need this script. If you don’t, then a proactive approach will suggest too many indexes.

Here is the kind of output that I get with this script:
-- DELETE on APP1.GCO_GOOD has locked APP1.FAL_TASK in mode 5 for 8 minutes between 14-sep 10:36 and 14-sep 10:44
-- blocked statement: DELETE FAL LOT LOT WHERE C FAB TYPE AND EXISTS SELECT
-- blocked statement: UPDATE DOC POSITION DETAIL SET DOC POSITION DETAIL ID B
-- blocked statement: delete from C AP GCO GOOD where rowid doa rowid
-- blocked statement: DELETE FROM FAL LOT WHERE FAL LOT ID B
-- blocked statement: DELETE FROM FAL TASK LINK PROP WHERE FAL LOT PROP ID B
-- blocked statement: INSERT INTO FAL LOT PROGRESS FAL LOT PROGRESS ID FAL LOT
-- blocked statement: insert into FAL TASK LINK FAL SCHEDULE STEP ID
-- FK chain: APP1.GCO_GOOD referenced by(cascade delete) APP1"."GCO_SERVICE referenced by(cascade set null) APP1"."FAL_TASK (APP1.FAL_TASK_S_GCO_SERV) unindexed
-- FK column GCO_GCO_GOOD_ID
-- Suggested index: CREATE INDEX ON "APP1"."FAL_TASK" ("GCO_GCO_GOOD_ID");
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_DIC_FR_TASK_COD7_FK" ON "APP1"."FAL_TASK" ("DIC_FREE_TASK_CODE7_ID")
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_DIC_FR_TASK_COD9_FK" ON "APP1"."FAL_TASK" ("DIC_FREE_TASK_CODE9_ID")
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" ON "APP1"."FAL_TASK" ("PPS_TOOLS13_ID")

I’ll detail each part.

ASH

Yes, we have to detect blocking issues from the past, and I use ASH for that. If you don’t have the Diagnostic Pack, then you have to change the query to use another way of sampling V$SESSION.
-- DELETE on APP1.GCO_GOOD has locked APP1.FAL_TASK in mode 5 for 8 minutes between 14-sep 10:36 and 14-sep 10:44
-- blocked statement: DELETE FAL LOT LOT WHERE C FAB TYPE AND EXISTS SELECT
-- blocked statement: UPDATE DOC POSITION DETAIL SET DOC POSITION DETAIL ID B
-- blocked statement: delete from C AP GCO GOOD where rowid doa rowid
-- blocked statement: DELETE FROM FAL LOT WHERE FAL LOT ID B
-- blocked statement: DELETE FROM FAL TASK LINK PROP WHERE FAL LOT PROP ID B
-- blocked statement: INSERT INTO FAL LOT PROGRESS FAL LOT PROGRESS ID FAL LOT
-- blocked statement: insert into FAL TASK LINK FAL SCHEDULE STEP ID

The first part of the output comes from ASH and detects the blocking situations: which statement, for how long, and the statements that were blocked.
This part of the script will probably need to be customized: I join with DBA_HIST_SQL_PLAN, supposing that the queries have been captured by AWR as long-running queries. I check the last 15 days of ASH. You may change those choices to fit the blocking situation encountered.

Foreign Key

Then, we have to find the unindexed foreign key which is responsible for those locks.

-- FK chain: APP1.GCO_GOOD referenced by(cascade delete) APP1"."GCO_SERVICE referenced by(cascade set null) APP1"."FAL_TASK (APP1.FAL_TASK_S_GCO_SERV) unindexed
-- FK column GCO_GCO_GOOD_ID

Here you see that it’s not easy. Actually, none of the scripts I’ve seen detect the situation where a CASCADE SET NULL cascades the issue. Here "APP1"."GCO_SERVICE" has its foreign key indexed, but the SET NULL, even when not on the referenced column, locks the child (for no reason as far as I know, but it does).
My script goes up to 10 levels, using a connect by query, to detect this situation.

Suggested Index

The suggested index is an index on the foreign key column:

-- Suggested index: CREATE INDEX ON "APP1"."FAL_TASK" ("GCO_GCO_GOOD_ID");

This is only a suggestion. Any regular index that starts with the foreign key columns, in whatever order, can be used to avoid the lock.
Remember to think about performance first. The index may also be used to navigate from parent to child.

Existing Index

Finally, when adding an index it’s good to check whether there are other indexes that would not be needed anymore, so my script displays all of them.
If you think that some indexes are no longer required, remember that in 12c you can make them invisible for a while, and you will be able to bring them back to visible quickly in case of regression.
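
For example, with one of the indexes from the sample output above:

alter index "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" invisible;
-- and, in case of regression:
alter index "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" visible;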

Script

Here is the script. Sorry, there are no comments in it yet and a few display things to change. Just try it: it’s only a query on AWR (Diagnostic Pack needed) and on table/index/constraint metadata. You can customize it, and don’t hesitate to comment if you have ideas to improve it. I used it in several environments and it has always found the chain of foreign keys that is responsible for an ‘enq: TM’ blocking situation. And believe me, this is not always easy to do just by looking at the data model.


set serveroutput on
declare
procedure print_all(s varchar2) is begin null;
dbms_output.put_line(s);
end;
procedure print_ddl(s varchar2) is begin null;
dbms_output.put_line(s);
end;
begin
dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES',false);
-- blocking 'enq: TM' situations found in ASH over the last 15 days
for a in (
select count(*) samples,
event,p1,p2,o.owner c_owner,o.object_name c_object_name,p.object_owner p_owner,p.object_name p_object_name,id,operation,min(p1-1414332420+4) lock_mode,min(sample_time) min_time,max(sample_time) max_time,ceil(10*count(distinct sample_id)/60) minutes
from dba_hist_active_sess_history left outer join dba_hist_sql_plan p using(dbid,sql_id) left outer join dba_objects o on object_id=p2 left outer join dba_objects po on po.object_id=current_obj#
where event like 'enq: TM%' and p1>=1414332420 and sample_time>sysdate-15 and p.id=1 and operation in('DELETE','UPDATE','MERGE')
group by
event,p1,p2,o.owner,o.object_name,p.object_owner,p.object_name,po.owner,po.object_name,id,operation
order by count(*) desc
) loop
print_ddl('-- '||a.operation||' on '||a.p_owner||'.'||a.p_object_name||' has locked '||a.c_owner||'.'||a.c_object_name||' in mode '||a.lock_mode||' for '||a.minutes||' minutes between '||to_char(a.min_time,'dd-mon hh24:mi')||' and '||to_char(a.max_time,'dd-mon hh24:mi'));
for s in (
select distinct regexp_replace(cast(substr(sql_text,1,2000) as varchar2(60)),'[^a-zA-Z ]',' ') sql_text
from dba_hist_active_sess_history join dba_hist_sqltext t using(dbid,sql_id)
where event like 'enq: TM%' and p2=a.p2 and sample_time>sysdate-90
) loop
print_all('-- '||'blocked statement: '||s.sql_text);
end loop;
-- follow the foreign key chain (up to 10 levels) through CASCADE and SET NULL rules
for c in (
with
c as (
select p.owner p_owner,p.table_name p_table_name,c.owner c_owner,c.table_name c_table_name,c.delete_rule,c.constraint_name
from dba_constraints p
join dba_constraints c on (c.r_owner=p.owner and c.r_constraint_name=p.constraint_name)
where p.constraint_type in ('P','U') and c.constraint_type='R'
)
select c_owner owner,constraint_name,c_table_name,connect_by_root(p_owner||'.'||p_table_name)||sys_connect_by_path(decode(delete_rule,'CASCADE','(cascade delete)','SET NULL','(cascade set null)',' ')||' '||c_owner||'"."'||c_table_name,' referenced by') foreign_keys
from c
where level<=10 and c_owner=a.c_owner and c_table_name=a.c_object_name
connect by nocycle p_owner=prior c_owner and p_table_name=prior c_table_name and ( level=1 or prior delete_rule in ('CASCADE','SET NULL') )
start with p_owner=a.p_owner and p_table_name=a.p_object_name
) loop
print_all('-- '||'FK chain: '||c.foreign_keys||' ('||c.owner||'.'||c.constraint_name||')'||' unindexed');
for l in (select * from dba_cons_columns where owner=c.owner and constraint_name=c.constraint_name) loop
print_all('-- FK column '||l.column_name);
end loop;
print_ddl('-- Suggested index: '||regexp_replace(translate(dbms_metadata.get_ddl('REF_CONSTRAINT',c.constraint_name,c.owner),chr(10)||chr(13),' '),'ALTER TABLE ("[^"]+"[.]"[^"]+") ADD CONSTRAINT ("[^"]+") FOREIGN KEY ([(].*[)]).* REFERENCES ".*','CREATE INDEX ON \1 \3;'));
-- existing indexes that start with the foreign key columns (candidates to avoid the lock)
for x in (
select rtrim(translate(dbms_metadata.get_ddl('INDEX',index_name,index_owner),chr(10)||chr(13),' ')) ddl
from dba_ind_columns where (index_owner,index_name) in (select owner,index_name from dba_indexes where owner=c.owner and table_name=c.c_table_name)
and column_name in (select column_name from dba_cons_columns where owner=c.owner and constraint_name=c.constraint_name)
)
loop
print_ddl('-- Existing candidate indexes '||x.ddl);
end loop;
-- all other indexes on the child table
for x in (
select rtrim(translate(dbms_metadata.get_ddl('INDEX',index_name,index_owner),chr(10)||chr(13),' ')) ddl
from dba_ind_columns where (index_owner,index_name) in (select owner,index_name from dba_indexes where owner=c.owner and table_name=c.c_table_name)
)
loop
print_all('-- Other existing Indexes: '||x.ddl);
end loop;
end loop;
end loop;
end;
/

I didn’t take the time to document/comment the script, but don’t hesitate to ask about anything you don’t understand in it.

You should not see any ‘enq: TM’ waits from an OLTP application. If you have them, even short ones, they will become problematic one day. It’s the kind of thing that can block the whole database.

 


SQL Saturday 510 – Locks, latches and spinlocks – The slides

Sun, 2016-07-03 13:40

This year, people once again turned out for the SQLSaturday in Paris, which has become a must-attend event for those who like to exchange about data with the Microsoft products. The weather was also rather nice this year, and the view from the 40th floor of the Montparnasse tower is as impressive as ever.

For my part, I had the pleasure of presenting a session at the end of the day about concurrency, seen from the SQL Server side, through the 3 main mechanisms used: locks, latches and spinlocks.

[Figure: locks, latches and spinlocks]

 

The slides can be downloaded directly from the PASS website.

Once again, a big thank you to the organizers of this event (GUSS), to the Supinfo volunteers, to the sponsors and to the attendees!

See you soon

 


GoldenGate 12.2 – Installation of the monitoring agent

Thu, 2016-06-30 02:50

As described in my last post, GoldenGate Studio requires the monitor agent on each GoldenGate instance. The goal of this agent is to allow GoldenGate Studio to interact with GoldenGate, for example, to deploy a new solution.

So let’s start with the installation of the agent.

Download

The first step is to download the monitor agent (Oracle GoldenGate Monitor). It is available here.

[Screenshot: Oracle GoldenGate Monitor download page]

Installation

The second step is to install the product.

After you have transferred the installer to the server, decompress it.

oracle@srvoracle:/tmp/ [DB1] cd monitor/
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 420092
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
oracle@srvoracle:/tmp/monitor/ [DB1] unzip fmw_12.2.1.0.0_ogg_Disk1_1of1.zip 
Archive:  fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
  inflating: fmw_12.2.1.0.0_ogg.jar  
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 840392
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
-r-xr-xr-x 1 oracle oinstall 430387063 Oct 14 08:33 fmw_12.2.1.0.0_ogg.jar

Launching the installer requires at least Java version 8 (1.8). If it is not available, it can be downloaded here.

To start the installation, launch the fmw_12.2.1.0.0_ogg.jar.

oracle@srvoracle:/home/oracle/Downloads/jdk1.8.0_73/bin/ [DB1] ./java -jar /tmp/monitor/fmw_12.2.1.0.0_ogg.jar 
Launcher log file is /tmp/OraInstall2016-02-29_01-39-26PM/launcher2016-02-29_01-39-26PM.log.
Extracting files.......
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2494.801 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 28817 MB    Passed


Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-02-29_01-39-26PM
Log: /tmp/OraInstall2016-02-29_01-39-26PM/install2016-02-29_01-39-26PM.log
Logs successfully copied to /u01/app/oraInventory/logs.

The OUI (Oracle Universal Installer) will start. On the first screen, just click on the Next button.

[Screenshot: OUI welcome screen]

On the next screen, we can choose the option for the updates. In my case, I leave the option to skip the auto updates.

[Screenshot: auto updates option]

Fill in the desired software location for the GoldenGate agent.

[Screenshot: software location]

Select the option to install only the agent monitor.

[Screenshot: installation type]

The OUI will test the system configuration and the Java version.

[Screenshot: prerequisite checks]

The OUI provides a summary of the configuration. Click on the Next button if everything is OK.

[Screenshot: installation summary]

The installation is done.

[Screenshot: installation progress]

At the end, the OUI provides a summary of the installation with the location of the logs.

[Screenshot: installation complete]

Now the GoldenGate agent is installed.

Configuration

Create instance

To create the instance of the agent, go to the directory where the binaries have been installed. In this example, it is /u01/app/oracle/product/jagent/oggmon/ogg_agent.

After that, launch the script createMonitorAgentInstance.sh.

oracle@srvoracle:/u01/app/oracle/product/jagent/oggmon/ogg_agent/ [DB1] ./createMonitorAgentInstance.sh 
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.0/gg_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/12.1.3.0/jagent
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20160229140552.sh) : 2
Sucessfully created OGG Agent instance.

Create password

The agent needs a password to work. All the passwords will be stored in a wallet. For this, go to ORACLE_HOME_AGENT/bin; in my case, /u01/app/oracle/product/12.1.3.0/jagent/bin.

Launch the script pw_agent_util.sh.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
Feb 29, 2016 2:18:55 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Feb 29, 2016 2:18:56 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.

Enable monitoring

To enable the monitoring, launch ggsci and edit the GLOBALS parameter file.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ggi 

GGSCI (srvoracle) 2> edit params ./GLOBALS

GGSCI (srvoracle) 4> view params ./GLOBALS

GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint
ENABLEMONITORING

Now restart ggsci, and the jagent appears when running an “info all”.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ggi

GGSCI (srvoracle) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           
JAGENT      STOPPED

To finalize, delete the datastore and recreate it.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/ [DB1] ggi

GGSCI (srvoracle) 1> delete datastore
Are you sure you want to delete the datastore? yes

2016-02-29 14:33:30  INFO    OGG-06492  Datastore deleted.

GGSCI (srvoracle) 2> create datastore
Profile 'Trace' added.

2016-02-29 14:33:55  INFO    OGG-06489  Datastore created.

Now you can start the manager process and the jagent.
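
A sketch of those final steps from ggsci (output omitted):

GGSCI (srvoracle) 1> start mgr
GGSCI (srvoracle) 2> start jagent
GGSCI (srvoracle) 3> info all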

Conclusion

Now that the agents have been installed on each instance, all the prerequisites for GoldenGate Studio are met.

In the next blog, I will show you how to use GoldenGate Studio to deploy a solution.

 


SQL Server 2016: Always Encrypted – part 2

Thu, 2016-06-30 02:39

In my last blog post about SQL Server 2016 Always Encrypted, here, I showed how to use this new functionality, but also that you have to separate the different execution contexts, with an application server, a database server and a security server, to avoid the certificate being available to all users, which would break the segregation.

Let’s see how to build this environment.
On my security server, named SQL2016-2, I first create a self-signed certificate with PowerShell, after having imported the PKI module. The certificate is stored in the local machine store with:

  • DnsName, DNS name of the certificate: Demo_AlwaysEncrypted.cert
  • CertStoreLocation, specifies the certificate store in which to store the new certificate: Cert:\LocalMachine\MY
  • Provider, specifies the name of the KSP or CSP that this cmdlet uses to create the certificate: Microsoft Strong Cryptographic Provider
  • KeyAlgorithm, specifies the name of the algorithm that creates the asymmetric keys that are associated with the new certificate: RSA
  • KeyLength, specifies the length, in bits, of the key that is associated with the new certificate: 2048
  • HashAlgorithm, specifies the name of the hash algorithm to use to sign the new certificate: sha256

PowerShell script:

Import-Module PKI
New-SelfSignedCertificate -DnsName "Demo_AlwaysEncrypted.cert" -CertStoreLocation Cert:\LocalMachine\MY -Provider "Microsoft Strong Cryptographic Provider" -KeyAlgorithm RSA -KeyLength 2048 -HashAlgorithm sha256

[Screenshot: New-SelfSignedCertificate output]

I now have my new certificate.
I open my SQL Server instance and navigate in my SecurityDB database to Security > Always Encrypted Keys, then right-click on Column Master Keys and select “New Column Master Key…”.
I create a new Column Master Key with the certificate I have just created on my local machine, and I name it CMK2:

[Screenshot: New Column Master Key dialog]

The code to create my master key is the following; you can get it by clicking on the Script button in the above screen:

CREATE COLUMN MASTER KEY [CMK2]
WITH
(
       KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
       KEY_PATH = N'LocalMachine/My/A3DF88FA53ED5347ABCF709D7A15621CDC358456'
)
GO

I now create a Column Encryption Key named CEK2 with the Column Master Key I created before, named CMK2:

[Screenshot: New Column Encryption Key dialog]

The code to create my Encryption Key is the following:

CREATE COLUMN ENCRYPTION KEY [CEK2]
WITH VALUES
(
       COLUMN_MASTER_KEY = [CMK2],
       ALGORITHM = 'RSA_OAEP',
       ENCRYPTED_VALUE = 0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F00610033006400660038003800660061003500330065006400350033003400370061006200630066003700300039006400370061003100350036003200310063006400630033003500380034003500360098721948A1283B0E5375B4E7F0A5F2EED05EDB0A87814BF66EABCBE2B4E75B5A759CF48D7D87D6A2C44A6DB2B587992E412EAF20D3FE826844FE76B1BC764E769C911486150E08EB972DBF406FF51738EFAA7F831A27A540F948CBC673273B1205A2CEF33AE77A0AF01DB3589A4A746151DC8C699DAB7AD1B99910A598EA7258670F1D5C0600704A9BB440F86761F41E8C72E52FFF6A7348B65F62E60D90B60CF1FC00B1B473F40E82B140219A317760CBC8591E07215AD114FAF80F08361C607823DB44037C455514B3789661560613638D058D7965846962B9FD3D9ECF0AC62325043AB40D892683744DC994E868C5F0E3FD14277D8941977426AC59DEA7EEF1394641DB314C2DCB083D3D5A12B97A131986D9834CBF2837908D3EF54C9104D92371C8A9EDC9B22555D8D615440D21B03393303FD4A9C0047282091178F31323A6EF4313576470349F32F106D38ABC3B0A1C4F47E870EA43F7B90D85D4F6C6686368E4E5EFC416EA08241F75F2D7B4F4B700114CA545AA4310829B2691A6D2C553F8F8B7E3983FC99F4E0A28BD79C84F50B7F6ABB17419DEAE06ACA694D68B00B48DBC0D66B46847A389C8A587F138CEF7791845A999EA45849D2FAA45BDA88FE134C70E592937981A7B2A18397CC615BB435A1EDDD66CF58119D89B24190744F4326718CBCDC1182AD20C8023EECB3994569844270DC90020B20D70B5CCC5
)
GO

I will now transfer those two key definitions to my SQL Server VM, named SQL2016-1, in order to encrypt my table columns with them. To do that, I will execute, on my SQL Server instance, the scripts I generated before on my security VM. I have created a second database named TestDB2 where I will execute those scripts:

[Screenshot: scripts executed in TestDB2]

After executing the scripts, I have my Column Master Key and my Column Encryption Key in the Security\Always Encrypted Keys path of my TestDB2 database.

I will now create, in my new database TestDB2, the same table that I have in my TestDB database (see my previous blog post). For that, I execute the following script with the same encrypted columns (CustAccountNumber, CustAccountBalance, CustCreditCardNumber), but with my new Column Encryption Key CEK2:

[Screenshot: CREATE TABLE script with encrypted columns]
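
The script itself is only shown as a screenshot; as a reference, here is a minimal sketch of what it looks like (the column types and encryption types are assumptions; the column names and CEK2 come from the post):

CREATE TABLE dbo.Customer
(
    CustNb uniqueidentifier,
    CustName varchar(50),
    CustSurname varchar(50),
    CustAccountNumber varchar(50) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2, ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustAccountBalance int
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2, ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustCreditCardNumber varchar(50) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2, ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustCountry varchar(200)
);
GO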

My table is created with the specified columns encrypted.
I will now export my certificate from my security server to my application server with PowerShell.
First, I retrieve the certificate from the store and export it to a file protected by a password:

PowerShell script:

# I retrieve the Certificate from the store
$MyCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "Demo_AlwaysEncrypted.cert"
# I create a password to protect my certificate
$pwd = ConvertTo-SecureString -String "Demo" -Force –AsPlainText
# I export my certificate as PFX Certificate Archive
Export-PfxCertificate -Cert $MyCert -FilePath C:\Temp\MyCert.pfx -Password $pwd

[Screenshot: certificate export with PowerShell]

I can now copy my certificate to my application server and import it:

PowerShell script:

# I import the certificate from my file
$pwd = ConvertTo-SecureString -String "Demo" -Force –AsPlainText
Import-PfxCertificate -CertStoreLocation Cert:\LocalMachine\My -FilePath C:\Temp\MyCert.pfx -Password $pwd

[Screenshot: certificate import with PowerShell]

My certificate is now in the store of my application server, and at this point I am able to insert and read plain text data from my application server. I will use a PowerShell script to insert data. I will not explain the whole script, but it’s good to know that:

  • The connection string needs to enable Always Encrypted for the client application, here PowerShell, with this parameter: Column Encryption Setting=enabled
  • The application passes plain text data in SqlParameter objects to the driver. The driver will encrypt the values and send encrypted values to the SQL Server VM

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlConn.Open()
$sqlcmd = New-Object System.Data.SqlClient.SqlCommand
$sqlcmd.Connection = $sqlConn
$sqlcmd.CommandText = "INSERT INTO dbo.Customer (CustNb, CustName, CustSurname, CustAccountNumber, CustAccountBalance, CustCreditCardNumber, CustCountry) VALUES (@CustNb, @CustName, @CustSurname, @CustAccountNumber, @CustAccountBalance, @CustCreditCardNumber, @CustCountry)"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustNb",[Data.SQLDBType]::uniqueidentifier)))
$sqlcmd.Parameters["@CustNb"].Value = [GUID]::NewGuid()
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustName",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustName"].Value = "Durand"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustSurname",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustSurname"].Value = "Laurent"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustAccountNumber"].Value = "1234-234589"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountBalance",[Data.SQLDBType]::Int)))
$sqlcmd.Parameters["@CustAccountBalance"].Value = 1256890
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCreditCardNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustCreditCardNumber"].Value = "1234-4245-4563-4564"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCountry",[Data.SQLDBType]::VarChar,200)))
$sqlcmd.Parameters["@CustCountry"].Value = "France"
$sqlcmd.ExecuteNonQuery();
$sqlConn.Close()

[Screenshot: insert with PowerShell]

I have now inserted a row in my Customer table. If I try to do the same insert without the parameter Column Encryption Setting=enabled, I receive an error message:

[Screenshot: error without Column Encryption Setting=enabled]

Now, if I want to read my table from my application server, I can execute a simple SELECT * with this PowerShell script and the parameter Column Encryption Setting=enabled:

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlConn.Open()
$sqlcmd = $sqlConn.CreateCommand()
$query = "SELECT * FROM dbo.customer"
$sqlcmd.CommandText = $query
$adp = New-Object System.Data.SqlClient.SqlDataAdapter $sqlcmd
$data = New-Object System.Data.DataSet
$adp.Fill($data) | Out-Null
$data.Tables

[Screenshot: query results with Column Encryption Setting=enabled]

I see plain text data. But without the parameter:

[Screenshot: query results without Column Encryption Setting=enabled]

I’m able to read the table, but the encrypted columns appear as binary blobs and cannot be read.
If I go back to my SQL Server VM, even with SYSADMIN rights and with the connection string parameter Column Encryption Setting=enabled, I am not allowed to query the table:

[Screenshot: error on the SQL Server VM]

The error indicates that, on my SQL Server VM, I don’t have access to the certificate behind the column master key, so I’m not able to decrypt the column encryption key and therefore the encrypted columns.
If I don’t use the parameter Column Encryption Setting=enabled during my connection via SQL Server Management Studio and try to query my table:

[Screenshot: query from SSMS without Column Encryption Setting=enabled]

I can see my table row(s), but the encrypted columns appear as BLOBs and so are protected from unintended readers, which is exactly the goal of this new feature: preventing SQL Server DBAs or users with certain rights from being able to view critical data.
For this security to hold, the certificate must not be saved in the certificate store of the SQL Server VM.
That’s all Folks! ;-)

 


Documentum Multiple ADTS: Switching rendition queue

Wed, 2016-06-29 08:29

As part of my previous posts about having 2 rendition servers for one docbase (see below), I’ll show you how to simply switch a rendition queue item to the other server:
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-1/
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-2/

I had an issue at a customer where one of the two rendition servers had been stuck for 2 days. As I explained in my previous posts, each server reserves a group of items from the queue for it to process. Let's say we set the threshold to 5 items. Each server will reserve 5 items in dmi_queue_item and set the attribute sign_off_user to itself, e.g. RNDSERVER_DOCBASE1.

Then it will process each item one by one; once one is done it will reserve a new one from the queue, and so on.

The problem is: if the rendition server is stuck for whatever reason, the reserved items will NOT go back to the available pool. It means that they will stay reserved by THIS rendition server until you fix the server and it starts processing them again.

You can imagine what I found at the customer: some documents had not been rendered for 2 days!
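Before touching anything, you can check how many queue items each rendition server has currently reserved; here is a minimal DQL sketch using only the sign_off_user attribute mentioned above:

select sign_off_user, count(*) from dmi_queue_item group by sign_off_user;

Items with an empty sign_off_user are the ones still available in the pool.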

So here is the simplest solution to put the items back in the pool:

update dmi_queue_item objects set sign_off_user ='' where sign_off_user ='RNDSERVER_DOCBASE1';

Hence all items will be set back to available. The other rendition server should now reserve them, as the stuck server cannot reserve more items.

In case a big file is being processed by the first server and you want the documents to be processed by the other one, you can also reserve items yourself manually with:

update dmi_queue_item objects set sign_off_user='RNDSERVER2_DOCBASE1' where item_id in ('09xxxx','09xxxx');

If you have any questions please use the comment section.

 

Cet article Documentum Multiple ADTS: Switching rendition queue est apparu en premier sur Blog dbi services.

PostgreSQL as a central reporting hub? – Yes, of course

Tue, 2016-06-28 15:31

For every client we do projects for, there usually are database systems from various vendors. In most places you at least find some Oracle and MS SQL Server databases. Probably somewhere there are even some MySQL/MariaDB or PostgreSQL instances running. The Oracle and MS SQL Server stuff is almost always there because vendors require it. For MySQL/MariaDB the common use cases are web applications, ticketing systems and home-grown developments. At some point in time there might be a requirement to centralize the important data of all these databases into a central reporting hub. The more data you have, the more there is a requirement to analyze it and to create reports that drive some sort of decisions. PostgreSQL is very well prepared to assist in this. If you have read the last posts about connecting your PostgreSQL instance to either Oracle, MS SQL Server, MariaDB/MySQL or even other PostgreSQL instances you might already know in which direction this post will go.

This is a sample use case: There is a requirement to centralize all the sales figures of a company into one big reporting database. The (simplified) database landscape in this company looks exactly as described above:

pg_reporting_1

There is one Oracle database, one MS SQL Server instance, one MariaDB/MySQL instance and one PostgreSQL instance. Each of them holds some sales figures which shall be integrated into the reporting database. For the demo setup I use the Dell DVD Store Database sample schema; how to set it up for each of these systems is described in the posts mentioned above.

As the goal is to have a PostgreSQL instance as a central reporting hub the desired database landscape will approximately look like this:

pg_reporting_2

Lets assume all the foreign data wrappers are already setup in the central PostgreSQL instance:

(postgres@[local]:4445) [postgres] > \dx
                                                 List of installed extensions
     Name     |    Version    |   Schema   |                                    Description                                    
--------------+---------------+------------+-----------------------------------------------------------------------------------
 mysql_fdw    | 1.0           | public     | Foreign data wrapper for querying a MySQL server
 oracle_fdw   | 1.1           | public     | foreign data wrapper for Oracle access
 plpgsql      | 1.0           | pg_catalog | PL/pgSQL procedural language
 postgres_fdw | 1.0           | ds2_oracle | foreign-data wrapper for remote PostgreSQL servers
 tds_fdw      | 2.0.0-alpha.1 | public     | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server)

In addition lets assume all the foreign servers and all the user mappings are there and are working:

(postgres@[local]:4445) [postgres] > select srvname,srvoptions from pg_foreign_server;
   srvname    |                                      srvoptions                                       
--------------+---------------------------------------------------------------------------------------
 mysql_server | {host=192.168.22.41,port=3306}
 mssql_svr    | {servername=192.168.22.102,port=1433,database=ds2,tds_version=7.3,msg_handler=notice}
 oracle_srv   | {dbserver=//192.168.22.42/DELLDVD}
 postgres_srv | {host=192.168.22.40,port=5432,dbname=ds2}
(4 rows)

(postgres@[local]:4445) [postgres] > select * from pg_user_mappings;
 umid  | srvid |   srvname    | umuser | usename  |               umoptions                
-------+-------+--------------+--------+----------+----------------------------------------
 65547 | 65546 | mysql_server |     10 | postgres | {username=web,password=web}
 65587 | 65586 | mssql_svr    |     10 | postgres | {username=ds2user,password=ds2}
 65615 | 65614 | oracle_srv   |     10 | postgres | {user=DS2,password=ds2}
 65676 | 65675 | postgres_srv |     10 | postgres | {user=ds2,password=ds2}

As the central reporting database is already connected to all the other database systems how could you organize the fetching of the data? One approach is to create a separate schema for each of the foreign databases and one additional schema that combines the data:

(postgres@[local]:4445) [postgres] > create schema ds2_mssql;
(postgres@[local]:4445) [postgres] > create schema ds2_oracle;
(postgres@[local]:4445) [postgres] > create schema ds2_mysql;
(postgres@[local]:4445) [postgres] > create schema ds2_postgresql;
(postgres@[local]:4445) [postgres] > create schema ds2_combined;
(postgres@[local]:4445) [postgres] > \dn
      List of schemas
      Name      |  Owner   
----------------+----------
 ds2_combined   | postgres
 ds2_mssql      | postgres
 ds2_mysql      | postgres
 ds2_oracle     | postgres
 ds2_postgresql | postgres
 public         | postgres
(6 rows)

The next step is to create the foreign tables we want to work with.

For Oracle, MySQL/MariaDB and PostgreSQL we can use the import foreign schema command:

postgres=# import foreign schema "DS2" from server oracle_srv into ds2_oracle;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "DS2" from server mysql_server into ds2_mysql;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "public" from server postgres_srv into ds2_postgresql;
IMPORT FOREIGN SCHEMA

For MS SQL Server we need to specify the foreign tables on our own (there is a feature request to implement import foreign schema):

create foreign table ds2_mssql.orders 
  ( orderid numeric not null
  , orderdate timestamp(0) without time zone not null
  , customerid numeric                                
  , netamount numeric(12,2) not null 
  , tax numeric(12,2) not null 
  , totalamount numeric(12,2) not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orders', row_estimate_method 'showplan_all');

create foreign table ds2_mssql.orderlines
  ( orderlineid numeric not null 
  , orderid numeric not null 
  , prod_id numeric not null 
  , quantity numeric not null 
  , orderdate timestamp(0) without time zone not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orderlines', row_estimate_method 'showplan_all');

Having all the foreign tables available we can start to work with the data either by creating materialized views:

create materialized view ds2_combined.mv_orders as
  select * from ds2_mssql.orders
  union all
  select * from ds2_mysql."ORDERS"
  union all
  select * from ds2_oracle.orders
  union all
  select * from ds2_postgresql.orders
  with no data;
refresh materialized view ds2_combined.mv_orders with data;

… or by importing the data into PostgreSQL and then building reports on top of that:

BEGIN;
  insert into ds2_combined.orders select * from ds2_mssql.orders;
  insert into ds2_combined.orders select * from ds2_mysql."ORDERS";
  insert into ds2_combined.orders select * from ds2_oracle.orders;
  insert into ds2_combined.orders select * from ds2_postgresql.orders;
END;
BEGIN;
  insert into ds2_combined.orderlines 
         select * from ds2_mssql.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_mysql."ORDERLINES";
  insert into ds2_combined.orderlines 
         select * from ds2_oracle.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_postgresql.orderlines;
END;

Having all the data locally available you can do whatever you want with it. Have fun with reporting on your data …
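As a small illustration (just a sketch reusing the orders columns defined above), a monthly report across all four source systems then becomes a plain local query:

-- total orders and amount per month, over the consolidated data
select date_trunc('month', orderdate) as order_month,
       count(*)                       as nb_orders,
       sum(totalamount)               as total_amount
  from ds2_combined.orders
 group by date_trunc('month', orderdate)
 order by order_month;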

Btw: The source for this was a session at the Swiss PGDAY 2016. You can download the slides there.

 

Cet article PostgreSQL as a central reporting hub? – Yes, of course est apparu en premier sur Blog dbi services.

Statspack Idle Events

Tue, 2016-06-28 11:26

When you don't have Enterprise Edition + Diagnostic Pack, you cannot use AWR but you can, and should, install Statspack. Statspack is still there, but unfortunately it does not evolve a lot. The most important section, the 'Top 5 Timed Events', shows only foreground events, or at least is supposed to do so. When a user process waits on a background process, this section must count only the foreground wait and not the activity of the background process, or we would have double accounting. The background activity is classified under 'Idle' events in order to be excluded from this section.
But unfortunately, new versions come with new wait events, and the list of Statspack idle events is not up to date anymore.

Here is the ‘Top 5 Timed Events’ I got from a database between 22:00 and 23:00 where there is no application activity:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
LGWR worker group idle                               6      22,049 ######   65.2
AQPC idle                                          120       3,602  30014   10.7
heartbeat redo informer                          3,568       3,601   1009   10.7
lreg timer                                       1,195       3,598   3011   10.6
direct path read                                31,221         466     15    1.4
          -------------------------------------------------------------

Hmm. What can you do with that? Idle events and timers are at the top. Direct path read seems to be minimal. And no CPU usage?
Obviously, something is wrong here.

Statspack uses a fixed list of wait events that are considered 'idle' events, stored at Statspack installation time into STATS$IDLE_EVENT.
This comes from an age when wait classes did not exist yet. In current versions, a more realistic list of wait events is in V$EVENT_NAME where wait_class='Idle'.

Let’s compare them (that’s in 12.1.0.1)

First, are there some non-idle events that are considered as ‘idle’ by Statspack?


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s left outer join V$EVENT_NAME v on s.event=v.name where v.wait_class<>'Idle';
 
STATSPACK_IDLE_EVENT                    NAME                                    WAIT_CLASS
--------------------------------------- --------------------------------------- ----------
null event                              null event                              Other
SQL*Net message to client               SQL*Net message to client               Network
SQL*Net more data from client           SQL*Net more data from client           Network
KSV master wait                         KSV master wait                         Other
parallel recovery slave wait for change parallel recovery slave wait for change Other

The goal of this blog post is not to detail the meaning of each of those events (search for them on tanelpoder.com as a good start for that), but if they are now considered as non-idle, Statspack should obey the same rule.

Then we can check which idle events are not in the Statspack list:


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s right outer join V$EVENT_NAME v on s.event=v.name where v.wait_class='Idle' and s.event is null;
 
STATSPACK_IDLE_EVENT NAME                                       WAIT_CLASS
-------------------- ------------------------------------------ ----------
                     OFS idle                                   Idle
                     heartbeat redo informer                    Idle
                     LGWR worker group idle                     Idle
                     Recovery Server waiting for work           Idle
                     Recovery Server waiting restore start      Idle
                     Recovery Server Surrogate wait             Idle
                     Recovery Server Servlet wait               Idle
                     Recovery Server Comm SGA setup wait        Idle
                     parallel recovery coordinator idle wait    Idle
                     recovery sender idle wait                  Idle
                     recovery receiver idle wait                Idle
                     recovery merger idle wait                  Idle
                     virtual circuit next request               Idle
                     lreg timer                                 Idle
                     REPL Apply: txns                           Idle
                     REPL Capture/Apply: messages               Idle
                     REPL Capture: archive log                  Idle
                     PL/SQL lock timer                          Idle
                     Emon coordinator main loop                 Idle
                     Emon slave main loop                       Idle
                     AQ: 12c message cache init wait            Idle
                     AQ Cross Master idle                       Idle
                     AQPC idle                                  Idle
                     Streams AQ: load balancer idle             Idle
                     Sharded Queues : Part Maintenance idle     Idle
                     REPL Capture/Apply: RAC AQ qmn coordinator Idle
                     iowp msg                                   Idle
                     iowp file id                               Idle
                     netp network                               Idle
                     gopp msg                                   Idle

There are a lot of them. We can see a lot of idle events that have been introduced in recent versions.

The Statspack list is an old list. Here is how to refresh it:


delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
commit;

Once I did that and ran a new Statspack report on the same snapshots as above, I got a more realistic 'Top 5 Timed Events':


Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
direct path read                                31,221         466     15   48.7
CPU time                                                       310          32.4
db file sequential read                         49,137          77      2    8.0
SQL*Net vector data to client                   15,189          31      2    3.3
enq: TM - contention                                 1          24  23937    2.5

Actually, this 22:00 to 23:00 time period is when the maintenance jobs are running. dbms_space.auto_space_advisor_job_proc likes to read your tables in bulk in order to see if there is some free space. And I don't like this 24-second TM lock wait at a time when I thought the database was not in use. This was hidden in the original report.

Patch?

Statspack is still supported and there’s a patch to add the following events as idle:
"virtual circuit next request" "AQ Cross Master idle" "AQ: 12c message cache init wait" "AQPC idle" "Emon coordinator main loop" "Emon slave main loop" "LGWR worker group idle" "OFS idle" "REPL Apply: txns" "REPL Capture/Apply: RAC AQ qmn coordinator" "REPL Capture/Apply: messages" "REPL Capture: archive log" "Recovery Server Comm SGA setup wait" "Recovery Server Servlet wait" "Recovery Server Surrogate wait" "Recovery Server waiting for work" "Recovery Server waiting restore start" "Sharded Queues : Part Maintenance idle" "Streams AQ: load balancer idle" "gopp msggopp msg" "heartbeat redo informer" "iowp file id" "iowp msg" "lreg timer" "netp network" "parallel recovery coordinator idle wait" "recovery merger idle wait" "recovery receiver idle wait" "recovery sender idle wait" "imco timer" "process in prespawned state"
(Nice way to be referenced by google for all those improbable wait events, isn’t it?)

However, I think that filling STATS$IDLE_EVENT from V$EVENT_NAME, or maybe even replacing it with a view, can be a better long-term solution. Each version comes with new wait events and it seems that Statspack evolves only through patches.
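As a rough sketch of that last idea (clearly not the supported way, and assuming PERFSTAT has been granted SELECT on V_$EVENT_NAME), the table could be replaced by a view that always reflects the current release:

-- not supported by the Statspack scripts, just an idea to keep the list in sync
drop table STATS$IDLE_EVENT;
create view STATS$IDLE_EVENT as
  select name event from v$event_name where wait_class='Idle';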

 

Cet article Statspack Idle Events est apparu en premier sur Blog dbi services.

SQL Server 2016 – Query Store: retrieve query that doesn’t appear!

Tue, 2016-06-28 07:25

For our SQL Server 2016 event in September, I am studying the new Query Store functionality.
My colleague David Barbarin wrote a few months ago about Query Store and how it works.

Simple SELECT Query

To begin, I execute a simple SELECT on a table with 3 different methods:

SELECT * FROM QS_test;
exec sp_executesql N'SELECT * FROM QS_test'
EXEC sp_GetQS_test;

"sp_GetQS_test" is a stored procedure containing the SELECT statement.
I created a little query with the Query Store specific DMVs to analyze the query:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id 
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id   
  WHERE query_sql_text='SELECT * FROM QS_test';

QS_01
As you can see in the result, all 3 queries are present with the same query text (‘SELECT * FROM QS_test’) in the query store.

SELECT Query with a Where clause

I continue my test with a select and a where clause:

  SELECT * FROM QS_test WHERE rid=5

I run my query to find the query in the query store:
QS_02
And, Oh surprise, no query found! The query does not appear in the query store….
I rerun my query without the where clause to see if I find something:
QS_03
The result gives me a query written differently:
(@1 tinyint)SELECT * FROM [QS_test] WHERE [rid]=@1

This query went through parameterization, and to retrieve this information we use a new function in SQL Server 2016: fn_stmt_sql_handle_from_sql_stmt.

Function sys.fn_stmt_sql_handle_from_sql_stmt

This function gives us the SQL handle for the query:
QS_04
Then, I add the function to my query to find the statement in the Query Store:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id
  CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5',NULL) fsshfss
  WHERE qsqt.statement_sql_handle=fsshfss.statement_sql_handle;

QS_05

It’s done, I retrieve the query thanks to this new function.
You can notice that I use the statement_sql_handle column and not the query_sql_text column in the WHERE clause.

I have tested with query_sql_text column and you can see here the error that I get…
QS_06
The query_sql_text column returned by the function uses the SQL_Latin1_General_CP1_CI_AS collation, while both my database and my instance use French_CI_AS. This is not hopeless…
If you want to use the query with query_sql_text, you just need to specify the collation with the COLLATE keyword:
QS_07
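For reference, the fix amounts to forcing both sides of the comparison to the same collation; here is a sketch of the last line of the query, assuming the French_CI_AS collation mentioned above (screenshot QS_07 shows the actual version):

WHERE qsqt.query_sql_text = fsshfss.query_sql_text COLLATE French_CI_AS;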

 

Cet article SQL Server 2016 – Query Store: retrieve query that doesn’t appear! est apparu en premier sur Blog dbi services.

Trace Flag 4199 or not in SQL Server 2016?

Tue, 2016-06-28 04:34

Some trace flags like T1117 and T1118 are no longer needed, as you can read on David's blog.
But that’s not all, you have also the T4199…

What does the Trace Flag T4199 do?

This Trace Flag enables all query optimizer fixes.
A lot of DBAs have enabled this Trace Flag globally during the build of a new server.
If you want to take advantage of an enhancement or a fix, the T4199 becomes a necessity…

But now, with SQL Server 2016, this is an old story!

SQL Server 2016 eliminates the T4199

In SQL Server 2016, you don’t need to enable this Trace Flag, forget your old habit!
It is automatically included when you change the COMPATIBILITY LEVEL to 130:

USE [master]
GO
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130
GO
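To check which compatibility level your databases are currently running with, a quick query against sys.databases is enough:

-- list the compatibility level of all databases on the instance
SELECT name, compatibility_level
FROM sys.databases;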

Advice: if you find an unexpected/poor plan, use the Query Store to analyze it and force a plan!
More information in KB974006.

 

Cet article Trace Flag 4199 or not in SQL Server 2016? est apparu en premier sur Blog dbi services.

Swiss PGDAY 2016, Slides are online

Tue, 2016-06-28 03:15

Last Friday the Swiss PGDAY happened in Rapperswil. All the slides as well as some pictures are now available (tab "Programm"):

pgday_logo

 

Cet article Swiss PGDAY 2016, Slides are online est apparu en premier sur Blog dbi services.

Manage DDL inside SharePlex

Fri, 2016-06-24 15:30

In a previous blog post (http://blog.dbi-services.com/discovering-shareplex-for-oracle/) about SharePlex, we presented how to set up a replication between 2 environments. In this article we will see how SharePlex deals with replicating DDL statements.
Before starting, we present below our environment and our configuration file. We are just replicating scott_atlas to scott_atlasrep2. We suppose that SharePlex is already configured.

Server Name              Database Name   Oracle Version   What
atlas.localdomain        SPLEXDB         12.1.0.2         Source
atlasrep2.localdomain    SPLEXSTR2       12.1.0.2         Target
[oracle@atlas config]$ cat ddl_config_atlasrep2.cnf
datasource:o.SPLEXDB

#source tables            target tables            routing map

expand scott_atlas.%       scott_atlasrep2.%       atlasrep2@o.SPLEXSTR2

 

Let’s start SharePlex on both source and target

[oracle@atlas ~]$ $SP_HOME/bin/sp_cop -u $NAME_FOR_ALERTING &
[1] 2617
[oracle@atlas ~]$

*******************************************************
* SharePlex for Oracle Startup
* Copyright 2016 Dell, Inc.
* ALL RIGHTS RESERVED.
* Protected by U.S. Patents: 7,461,103 and 7,065,538
* Version: 8.6.4.66-m64-oracle120
* VarDir : /u01/app/shareplex/vardir/splexdb_864_12_2105
* Port : 2105
*******************************************************
[oracle@atlasrep2 ~]$ $SP_HOME/bin/sp_cop -u $NAME_FOR_ALERTING &
[1] 2437
[oracle@atlasrep2 ~]$

*******************************************************
* SharePlex for Oracle Startup
* Copyright 2016 Dell, Inc.
* ALL RIGHTS RESERVED.
* Protected by U.S. Patents: 7,461,103 and 7,065,538
* Version: 8.6.4.66-m64-oracle120
* VarDir : /u01/app/shareplex/vardir/splexstr2_864_12_2105
* Port   : 2105
*******************************************************

Now let’s verify that SharePlex processes are running on both source and target

[oracle@atlas ~]$ $SP_HOME/bin/sp_ctrl


sp_ctrl (atlas:2105)> show

Process    Source                               Target                 State                   PID
---------- ------------------------------------ ---------------------- -------------------- ------
Capture    o.SPLEXDB                                                   Running                2618
Read       o.SPLEXDB                                                   Running                2620
Export     atlas                                atlasrep2              Running
[oracle@atlasrep2 ~]$  rlwrap $SP_HOME/bin/sp_ctrl

sp_ctrl (atlasrep2:2105)> show

Process    Source                               Target                 State                   PID
---------- ------------------------------------ ---------------------- -------------------- ------
Import     atlas                                atlasrep2              Running                2734
Post       o.SPLEXDB-atlas                      o.SPLEXSTR2            Running                2438

SharePlex provides many parameters to handle DDL replication. We are going to see some of them. To display SharePlex parameters, just execute

sp_ctrl (atlas:2105)> list param

SP_OCT_REPLICATE_DDL:  Possible values 0/1/2/3

Manages the replication of ALTER and TRUNCATE statements. It can have the following values

0: No replication of either ALTER or TRUNCATE

1: Only ALTER replication

2: Only TRUNCATE replication

3: Replication of both ALTER and TRUNCATE (default value)
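For example, to replicate only ALTER statements, the parameter can be changed from sp_ctrl on the source, with the same syntax used later in this post for SP_OCT_REPLICATE_ALL_DDL:

sp_ctrl (atlas:2105)> set param SP_OCT_REPLICATE_DDL 1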

Let's show an example with the default value 3: ALTER and TRUNCATE DDL performed in the scott_atlas schema should be replicated. From the source and the target we can see that the table bonus is not empty.

From the source

SQL> select name from v$database;

NAME
---------
SPLEXDB
SQL> conn scott_atlas/tiger
Connected.
SQL> select * from bonus;

ENAME      JOB              SAL       COMM
---------- --------- ---------- ----------
Moise      IT              2300        200

 

From the target

SQL> select name from v$database;

NAME
---------
SPLEXSTR

SQL> conn scott_atlasrep2/tiger
Connected.
SQL> select * from bonus;

ENAME      JOB              SAL       COMM
---------- --------- ---------- ----------
Moise      IT              2300        200

Let’s truncate table bonus and let’s add 2 new columns on the source

SQL> show user
USER is "SCOTT_ATLAS"
SQL> truncate table bonus;

Table truncated.

SQL> alter table bonus add (col_net number,col_var number);

Table altered.

SQL>  select * from bonus;

no rows selected

We can see that both DDL statements are replicated on the target

SQL> show user
USER is "SCOTT_ATLASREP2"
SQL> select * from bonus;

no rows selected

SQL> desc bonus
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 SAL                                                NUMBER
 COMM                                               NUMBER
 COL_NET                                            NUMBER
 COL_VAR                                            NUMBER

SP_OCT_AUTOADD_ENABLE:  Possible values 0/1

Manages newly created objects that match the active configuration (objects in the configuration file). The default value is 1, meaning that a newly created table in the scott_atlas schema will automatically be added to the replication. SharePlex will also replicate all corresponding DML.
Let’s create a table on the source and let’s insert some data

SQL> show user
USER is "SCOTT_ATLAS"
SQL> create table mytable (id number primary key,message long);

Table created.

SQL> insert into mytable values (1,'Ceci est un message important concernant...');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mytable;

        ID
----------  MESSAGE
------------------------------------------------------------
         1
Ceci est un message important concernant...

We can verify that the new table and corresponding data are replicated on the target.

SQL> show user
USER is "SCOTT_ATLASREP2"
SQL> select * from mytable;

        ID
----------
MESSAGE
--------------------------------------------------------------------------------
         1
Ceci est un message important concernant...

 

SP_OCT_REPLICATE_ALL_DDL: Possible values 0/1

Manages DDL replication for objects that are not listed in the configuration file. The default value is 0, meaning that DDL concerning objects which are not in the active configuration will not be replicated. Let's take an example with a toto schema (just remember that in our configuration file only scott_atlas objects are being replicated).
From the source let’s create a table test1 in toto schema

SQL> connect toto/toto

Connected.

SQL> create table test1(id number);

 
Table created.

As expected we can verify that table test1 is not replicated in toto schema on the target

SQL> conn toto/toto
Connected.
SQL> desc test1
ERROR:
ORA-04043: object test1 does not exist

Now let’s set the value of the parameter to 1 on the source

sp_ctrl (atlas:2105)> set param SP_OCT_REPLICATE_ALL_DDL 1

And let's create a table test2 in the toto schema and insert a row on the source

SQL> create table test2(id number);

Table created.

SQL> insert into test2 values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test2;

        ID
----------
         1

We can see that DDL is replicated (table creation) on target but not the corresponding DML.

SQL> select * from test2;

no rows selected

SQL>

That means that if an object is not in the active replication and if the parameter SP_OCT_REPLICATE_ALL_DDL is set to 1, DDL will be replicated but data will not.

 

Cet article Manage DDL inside SharePlex est apparu en premier sur Blog dbi services.

Upgrading to 7.2 created a new ACS. How to remove it and test it?

Fri, 2016-06-24 03:46

I had this strange behavior where, once upgraded from 6.7 to 7.2, a new ACS was created. I think it's because the current ACS name didn't fit the new ACS name pattern. Well, it's not a big issue to have 2 ACS configured, but in my case they both pointed to the same port and servlet so… I had to remove one.

Hence, how can we know which one is used?

That’s easy, just find the acs.properties file located in:

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties

 

In this file you should find the line:

repository.acsconfig=YOUR_ACS_NAME.comACS1

 

In fact my previous ACS was named YOUR_ACS_NAME.cACS1, which is why I think a new one was created. So now you know which ACS is used and you just have to remove the other one:

delete dm_acs_config objects where object_name = 'YOUR_OLD_ACS_NAME';
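If you want to double-check the configured ACS objects before or after the delete, a simple DQL (standard attributes only) lists them:

select r_object_id, object_name from dm_acs_config;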

Fine, now how can we check that the ACS is working properly?

First you can paste the ACS URL in your browser to check if it's running; it should look like this:

http://your-content-server-host:9080/ACS/servlet/ACS

 

If you installed your method server on another port than 9080, use it.

You should see the following result (maybe with a different version):

ACS Server Is Running - Version : 7.2.012.0.0114

 

If you can’t find the ACS url, login to Documentum Administrator and navigate to:
Administration -> Distributed Content Configuration -> ACS Server
If you right click on it you will see the url at the bottom of the page.

At this point the ACS is running but is documentum using it properly?

In order to verify this point, a bit of configuration is needed. Log in to the server on which you have DA installed, search the DA application for a log4j.properties file and add the following lines:

log4j.logger.com.documentum.acs=DEBUG, ACS_LOG
log4j.logger.com.documentum.fc.client.impl.acs=DEBUG, ACS_LOG
log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.ACS_LOG.File=${catalina.home}/logs/AcsServer.log
log4j.appender.ACS_LOG.MaxFileSize=10MB
log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.ACS_LOG.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n

You may have to update the line log4j.appender.ACS_LOG.File.

Restart the tomcat or whatever webapp server you have. In order to generate logs you’ll have to open a document from DA. Let’s say we have a document called TESTDOC.doc.
Once you open it you’ll have around 3 to 4 lines in AcsServer.log. In order to verify that everything went fine, you should NOT see the following line:
INFO [Timer-161] com.documentum.acs.dfc - [DFC_ACS_LOG_UNAVAILABLE] "userName="test", objectId="0903d0908010000", objectName="TESTDOC.doc"", skip unavailable "ACS" serverName="YOUR_ACS_NAME_HERE" protocol "http"

Instead you must have a kind of ticket/key formed by a lot of letters/numbers. This step will validate that you have been served by the ACS.

 

Cet article Upgrading to 7.2 created a new ACS. How to remove it and test it? est apparu en premier sur Blog dbi services.

SQL Server 2016: Always Encrypted

Fri, 2016-06-24 01:58

One of the top new features of SQL Server 2016 is the Always Encrypted functionality. Always Encrypted ensures that data stored in a database remains encrypted at all times while it is in the database. There is a complete separation between the persons who own the data and the persons who manage it. Only the persons who own the data can see it in plain text; persons like DBAs, sysadmins or privileged logins cannot access it.

Data is encrypted and decrypted in flight, between the database and the client application, inside the client driver on the client side.
The client manages the encryption keys, which are stored outside of SQL Server.
Let's explain with a concrete example.

I have a table in my database with sensitive data and I want to encrypt those data to avoid that Database Administrator can see plain text data for Credit Card number, Account Number and Account Balance:

Blog_AlwaysEncrypted_1

To enable encryption columns right click on our Customer table and select “Encrypt Columns…”:

Blog_AlwaysEncrypted_2

An introduction screen appears explaining how Always Encrypted works, click Next:

Blog_AlwaysEncrypted_3

The next screen shows all the columns of our table and we have to select which ones we want to encrypt. Here the Credit card number, the account balance and the account number:

Blog_AlwaysEncrypted_4

We need to choose the encryption type between two options, which are described if we click on the "Encryption Type" text:

Blog_AlwaysEncrypted_5

I will choose Randomized for Credit Card Number and Account Number as I don't want to query on those columns and it is more secure. But I choose Deterministic for Account Balance as I want to filter by equality on this field.
Please note that Deterministic encryption uses a column collation with a binary2 sort order for character columns, so the collation of our char columns will be changed from French_CI_AS to French_BIN2 in my example.

For the column encryption key, which is the key that will encrypt the data in each column, I will let the wizard generate one for me. I will also use the same column encryption key for all my encrypted columns:

Blog_AlwaysEncrypted_6

The next screen is used for the master key configuration. The column encryption key is used to encrypt the data in the columns, and the column master key protects (encrypts) the column encryption key. Here as well, I will use an auto-generated column master key, which will be a self-signed certificate saved in the Windows certificate store:

Blog_AlwaysEncrypted_7

In the Run Settings screen, first of all, a warning points out that if insert statements are executed while the encryption/decryption is running, there could be a risk of data loss.
You can normally choose whether you want to run the encryption immediately or generate a PowerShell script to do it later. For the time being the PowerShell generation cannot be chosen… so I will run it now:

Blog_AlwaysEncrypted_8

A summary explains the operations that will be performed: a column master key will be generated and saved in the Windows certificate store, and the column encryption key will also be generated and used to encrypt my three columns:

Blog_AlwaysEncrypted_9

My columns have been encrypted:

Blog_AlwaysEncrypted_10
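Behind the screenshots, the keys created by the wizard can also be verified with the Always Encrypted catalog views; here is a quick sketch:

-- where does the column master key live? (here: the Windows certificate store)
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

-- and the column encryption key(s) it protects
SELECT name
FROM sys.column_encryption_keys;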

Now, I go back to my query, refresh it, and I see that I can no longer read plain text for my three columns; instead I have varbinary encrypted blobs:

Blog_AlwaysEncrypted_11

There is just one problem in this demo… In fact, I have created my column master key with a self-signed certificate in the context of the current user.
So this user has access to my certificate and can decrypt the encrypted columns if we add "Column Encryption Setting=enabled" to the connection string, which is the change required to use Always Encrypted.

Blog_AlwaysEncrypted_12

Now, as the certificate used to create the column master key is available, the encrypted columns appear in plain text…

Blog_AlwaysEncrypted_13

We will have to physically separate the certificate behind the column master key from the SQL Server machine that was used to create the column master key and the column encryption key.
I will show you how to do that in a future blog post.

 

Cet article SQL Server 2016: Always Encrypted est apparu en premier sur Blog dbi services.

Question is: upgrade now to 12.1.0.2 or wait for 12.2 ?

Thu, 2016-06-23 04:27

Let’s look at Release Schedule of Current Database Releases (Doc ID 742060.1)
12.2.0.1 is planned for 2HCY2016 on platforms Linux x86-64, Oracle Solaris SPARC (64-bit), Oracle Solaris x86-64 (64-bit).
2HCY2016 starts next week, but we can imagine that it will not be released immediately, and anyway we will have to wait a few months to download the on-premises version. Add another couple of months to get at least one Proactive Bundle Patch to stabilize that new release. So maybe we can plan a production upgrade for Jan. 2017 on the Linux platform, and Apr. or Jul. 2017 on the Windows platform, right? How does that cope with the 11.2.0.4 and 12.1.0.1 end of support?

Is the delay for 12.2 a problem?

My opinion is that a long wait for a new release is not a problem. Most customers want a stable, supported release, not new features that are available only with options and may introduce bugs. As long as we have support, PSUs and Proactive Bundle patches, everything is ok. We can't complain about software regressions after an upgrade and, at the same time, look forward to getting new releases in a short period of time.
So in my opinion, waiting 6 months or 1 year to get 12.2 is not a problem, except for book authors who wait for the general availability of 12.2 to release their book https://www.amazon.com/Oracle-Database-Release-Multitenant-Press/dp/1259836096 ;)

Is ‘cloud first’ a problem?

I don't think that 'cloud first' is a problem by itself. We will have to learn 12.2 features and test them before upgrading our databases, and the Oracle Public Cloud is good for that. But I fear that customers will feel forced to go to the cloud, which is wrong. It was the same when 12.1.0.2 was released for Enterprise Edition only: customers felt forced to quit Standard Edition, but that was probably not the goal, especially when those who quit Standard Edition One did it to go to an open-source RDBMS.

Is ‘multitenant first’ a problem?

Yes, 'cloud first' may mean 'multitenant first' because that's the only architecture available for 12c on the Oracle DBaaS. That said, you can install a non-CDB if you choose 'virtual image', and anyway an OPC trial is a good occasion to test 12.2 and multitenant at the same time. Let me repeat that the multitenant architecture has lots of features available without the multitenant option.

Upgrade planning

Back to the ground, the problem in my opinion is the uncertainty.
Free extended support for 11.2.0.4 ends on 31-May-2017 and we don't know yet if we will have a stable (i.e. with a few PSUs) 12.2 release at that time for on-premises, especially for Windows, which will come later than Linux.
Remember that 12.1.0.2 on Windows came two months after the Linux one. And another two months for AIX.

12.1.0.1 support ends on 31-Aug-2016 and 12.2 will not be out at that time, at least for on-premises.

So what?

Customers that expected to get 12.2 before the end of 12.1.0.1 or 11.2.0.4 support will now (since the announcement of 2HCY2016 last month and the recent 'cloud first' announcement) have to plan an intermediate upgrade to 12.1.0.2 before going to 12.2. And because of the 'Release 1' myth, they are afraid of that. Our mission, as consultants and Oracle partners, is to explain that this myth has no reason behind it. Look at Mike Dietrich's blog about that. I hope you will be convinced that versions, releases and patchsets can all bring regressions and should be carefully tested, whether it's the 1st, 2nd or 4th number of the version identification that is incremented. A new ORACLE_HOME is new software.

Then, once on 12.1.0.2, you will have the time to plan an upgrade to 12.2 after learning, testing, and adapting administration scripts/procedures/habits to the era of multitenant. And you will be ready for the future.

The customers in 11.2.0.4 that do not want to plan that intermediate upgrade will have the option to pay for extended support which ends on 31-DEC-2019.

 

Cet article Question is: upgrade now to 12.1.0.2 or wait for 12.2 ? est apparu en premier sur Blog dbi services.

ODA X6-2S and ODA X6-2M for EE and SE2

Tue, 2016-06-21 13:07

After the announcement of the death of SE1 and SE, we wondered what Oracle would do for small and medium enterprises and entry-level products. The answer was postgres the Oracle Cloud Services, but that's for dev/test only, because the Public Cloud is not for production and SMEs will not build a Private Cloud for their few Oracle databases.

Find out how the @Oracle #Database Appliance can simplify your DB administration: #OracleODA https://t.co/iJT0roXrgF pic.twitter.com/mOBPQfu8Tg

— Oracle Hardware (@oraclehardware) June 16, 2016

The answer was announced to partners a few weeks ago and is now official: the 5th generation of ODA, the X6, now has an entry-level version for Standard Edition 2.

ODA, until today, was only for Enterprise Edition. You could install Standard Edition on a Guest VM (described here) but with poor I/O performance and no advantages of ODA.

The smallest ODA was a 2-node cluster. The hardware is not expensive for what it is, but it is still expensive when you want only one small server.
Lots of our customers asked for a small ODA with Standard Edition, even big customers with only a small database. They consolidate everything on VMware, but for licensing reasons need to isolate Oracle on physical machines. Two ODAs (as they want a DR site) are too large for their needs.

Good news: there are now smaller, one-node ODAs that can run Standard Edition and are fully automated, even more automated than the previous ODAs.
New products but same values:

  • simplicity with automated deployment,
  • automated patching,
  • zero-admin storage,
  • integrated VM management,
  • performance (new NVMe to access flash storage)

The new products are:

What is new in the #OracleODA: pic.twitter.com/np3sKjcFXq

— Oracle Hardware (@oraclehardware) June 21, 2016

ODA X6-2S: the entry level for SE2

'Only one single database', but this is not a hard limit, just a recommendation related to the capacity.
Virtualized (can run application server on it)
1 socket with 10 cores Xeon, 126GB RAM, up to 384 GB, 10Gbase-T for public network and/or 10 GbE SFP+ Public Network
All Flash Storage. 6.4 TB NVMe Flash Storage, up to 12.3 -> usable for database is 2.4 TB up to 4.8
Cost: 18000$ which is the same as a Dell equivalent server (Oracle says it’s 2x cheaper but we may not have the same price list)

ODA X6-2M: similar but ‘multiple databases’

More resources (2x sockets, 2x memory)
Same storage as the X6-2S, but NFS is also supported for additional storage
Cost: 24000$

Both can be used for Standard Edition 2 (licensed in socket or NUP) or Enterprise Edition (Licences with Capacity On Demand activated cores or NUP)

ODA HA

This is the X5-2 we already know with 2 servers (X6 is planned for end of year)
2 nodes there, for RAC, RON, or single instance consolidation. It can be Bare Metal or Virtualized

Just my guess, but there are good chances that only multitenant is supported for them in 12.2 (single-tenant when in SE2, of course) so that it is easy to move databases to and from the public cloud services. Actually, the goal is:

Oracle’s new #database appliances provide a bridge between on-premise systems and the #cloud #OracleODA pic.twitter.com/D86QzYLsUD

— Oracle Hardware (@oraclehardware) June 21, 2016

You run dev and test on Oracle Public Cloud and your production on-premises. That’s a full Oracle solution where you can move your (pluggable) databases between the two platforms.

The ODA automation has evolved. It used to be about easy configuration and patching. It now also brings easy provisioning, with the same interface that you can find in Enterprise Manager or the Cloud. The Appliance Manager is accessible through a web console or the command line and helps to automate deployment, management, support and monitoring of the ODA.

 

Cet article ODA X6-2S and ODA X6-2M for EE and SE2 est apparu en premier sur Blog dbi services.

Adaptive Plans and SQL Baselines

Tue, 2016-06-21 12:44

I recently encountered an issue with Adaptive Plans and SPM. The documentation says they work perfectly together, but I remembered a presentation from Nelson Calero at UKOUG TECH15 mentioning strange behavior. I reproduced the issue and share the test case here, as you may encounter it in 12.1 and it can lead to regressions when you capture SQL Plan Baselines.

Cleanup

With all those adaptive features, you need to start clean if you want a reproducible test case:
SQL> -- drop tables
SQL> drop table DEMO1;
Table dropped.
SQL> drop table DEMO2;
Table dropped.
SQL>
SQL> whenever sqlerror exit failure
SQL>
SQL> -- drop all sql plan baselines
SQL> set serveroutput on long 100000 longc 100000
SQL> exec for i in (select sql_handle, plan_name, accepted, origin, created from dba_sql_plan_baselines) loop dbms_output.put_line(''||dbms_spm.drop_sql_plan_baseline(sql_handle=>i.sql_handle,plan_name=>i.plan_name)); end loop;
1
 
PL/SQL procedure successfully completed.
 
SQL> set serveroutput off
SQL> select 0/0 from dba_sql_plan_baselines;
no rows selected
SQL>
SQL> -- flush shared pool
SQL> alter system flush shared_pool;
System altered.
SQL> select 0/0 from v$sql where sql_id='agw7bn072730a';
no rows selected

Create the tables

SQL> -- create two tables with few rows for L=1 and lot of rows for L=15
SQL> create table DEMO2 (id constraint PK1 primary key,l) as select rownum,floor(log(2,rownum)) from xmltable('1 to 100000');
Table created.
 
SQL> create table DEMO1 as select * from DEMO2;
Table created.

Run the query


SQL> -- run a join for the few rows case
SQL> alter session set statistics_level=all;
Session altered.
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3

And here is the adaptive plan:

SQL> select * from table(dbms_xplan.display_cursor(null,null,format=>'allstats last +adaptive +outline'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------
SQL_ID agw7bn072730a, child number 0
-------------------------------------
select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
 
Plan hash value: 2870612662
 
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 203 | 183 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 203 | 183 |
|- * 2 | HASH JOIN | | 1 | 5882 | 3 |00:00:00.01 | 203 | 183 |
| 3 | NESTED LOOPS | | 1 | 5882 | 3 |00:00:00.01 | 203 | 183 |
| 4 | NESTED LOOPS | | 1 | | 3 |00:00:00.01 | 200 | 183 |
|- 5 | STATISTICS COLLECTOR | | 1 | | 3 |00:00:00.01 | 195 | 179 |
| * 6 | TABLE ACCESS FULL | DEMO1 | 1 | 5882 | 3 |00:00:00.01 | 195 | 179 |
| * 7 | INDEX UNIQUE SCAN | PK1 | 3 | | 3 |00:00:00.01 | 5 | 4 |
| 8 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 3 | 1 | 3 |00:00:00.01 | 3 | 0 |
|- 9 | TABLE ACCESS FULL | DEMO2 | 0 | 100K| 0 |00:00:00.01 | 0 | 0 |
-------------------------------------------------------------------------------------------------------------
 
Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
INDEX(@"SEL$58A6D7F6" "DEMO2"@"SEL$1" ("DEMO2"."ID"))
NLJ_BATCHING(@"SEL$58A6D7F6" "DEMO2"@"SEL$1")
USE_NL(@"SEL$58A6D7F6" "DEMO2"@"SEL$1")
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$58A6D7F6")
MERGE(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEMO1"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "DEMO1"@"SEL$1" "DEMO2"@"SEL$1")
END_OUTLINE_DATA
*/
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
6 - filter("DEMO1"."L"=1)
7 - access("DEMO1"."ID"="DEMO2"."ID")
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)

It's an adaptive plan: HASH JOIN was the initial choice, but the first execution activated the NESTED LOOPS join.

SQL Baseline Capture

SQL> alter session set optimizer_capture_sql_plan_baselines=true;
Session altered.
 
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3
 
SQL> alter session set optimizer_capture_sql_plan_baselines=false;
Session altered.

Here is the SQL Baseline:

SQL> select sql_handle, plan_name, accepted, origin, created from dba_sql_plan_baselines;
 
SQL_HANDLE                     PLAN_NAME                                ACC ORIGIN                        CREATED
------------------------------ ---------------------------------------- --- ----------------------------- ---------------------------------------------------------------------------
SQL_4c1b404640b73a81           SQL_PLAN_4s6u08t0bffn1e47b6a4d           YES AUTO-CAPTURE                  28-MAY-16 09.13.04.000000 PM

and its plan:

SQL> select plan_table_output from dba_sql_plan_baselines,table(dbms_xplan.display_sql_plan_baseline(sql_handle, plan_name, format=>'+adaptive'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
SQL handle: SQL_4c1b404640b73a81
SQL text: select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_4s6u08t0bffn1e47b6a4d Plan id: 3833293389
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
 
Plan hash value: 740165205
 
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 108 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 16 | | |
| * 2 | HASH JOIN | | 5882 | 94112 | 108 (2)| 00:00:01 |
| * 3 | TABLE ACCESS FULL| DEMO1 | 5882 | 47056 | 54 (2)| 00:00:01 |
| 4 | TABLE ACCESS FULL| DEMO2 | 100K| 781K| 54 (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
3 - filter("DEMO1"."L"=1)
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)

Unfortunately, the baseline captured only the ‘initial’ plan with the HASH JOIN.
This is not what is documented in Maria Colgan's paper:
SPM plan capture and Adaptive Plans: When automatic plan capture is enabled and a SQL statement that has an adaptive plan is executed, only the final plan used will be captured in the SQL plan baseline.

Run with baseline

SQL> alter session set statistics_level=all;
Session altered.
 
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3
 
SQL> select * from table(dbms_xplan.display_cursor(null,null,format=>'allstats last +adaptive'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID agw7bn072730a, child number 1
-------------------------------------
select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
 
Plan hash value: 740165205
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 390 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 390 | | | |
|* 2 | HASH JOIN | | 1 | 5882 | 3 |00:00:00.01 | 390 | 2545K| 2545K| 826K (0)|
|* 3 | TABLE ACCESS FULL| DEMO1 | 1 | 5882 | 3 |00:00:00.01 | 195 | | | |
| 4 | TABLE ACCESS FULL| DEMO2 | 1 | 100K| 100K|00:00:00.01 | 195 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
3 - filter("DEMO1"."L"=1)
 
Note
-----
- SQL plan baseline SQL_PLAN_4s6u08t0bffn1e47b6a4d used for this statement

This confirms that the SQL baseline forces the initial HASH JOIN plan. It's a bug that should be fixed in 12.2, so for the moment be very careful when you want to fix an adaptive plan with SQL Baselines: your goal is to stabilize once you have the optimal plan, but the result may be a regression to a bad plan.
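An alternative to automatic capture is to load the resolved plan explicitly from the cursor cache with DBMS_SPM, as in the following sketch (plan hash value 2870612662 is the final adaptive plan displayed above). Verify afterwards which plan the baseline really reproduces, because the same 12.1 behavior may apply:

set serveroutput on
declare
  n pls_integer;
begin
  -- load the plan of our statement from the cursor cache into a SQL Plan Baseline
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'agw7bn072730a',
         plan_hash_value => 2870612662);
  dbms_output.put_line(n||' plan(s) loaded');
end;
/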

 

Cet article Adaptive Plans and SQL Baselines est apparu en premier sur Blog dbi services.

ORA-01775: looping chain of synonyms

Tue, 2016-06-21 07:13

This error message is misleading. You may encounter it when you expect ORA-00942: table or view does not exist. Let's explain why.

I'm connected as SCOTT and create a PUBLIC SYNONYM for an object that does not exist:

SQL> create public synonym MONEY for NOTHING;
Synonym created.

No error message.
Only when I read from it do I get an error message telling me that there is no table or view behind it:

SQL> select * from NOTHING;
select * from NOTHING
*
ERROR at line 1:
ORA-00942: table or view does not exist

Let’s do the same but call it BONUS instead of MONEY:

SQL> create public synonym BONUS for NOTHING;
Synonym created.
 
SQL> select * from BONUS;
no rows selected

No error here. Why? Because I have a table called BONUS, so the name is resolved to the table and the synonym is not even tried.

I’ll now drop that synonym and create it for the table BONUS. Same name for the public synonym and for the table.

SQL> drop public synonym BONUS;
Synonym dropped.
 
SQL> create public synonym BONUS for BONUS;
Synonym created.

As user SCOTT, when I query BONUS the name is resolved as the table:

SQL> show user
USER is "SCOTT"
SQL> select * from BONUS;
no rows selected

As another user, when I query BONUS the name is resolved as the synonym, which finally reads SCOTT.BONUS:

SQL> show user
USER is "SCOTT"
SQL> select * from BONUS;
no rows selected

In 12c it is easy to see the final query:

SQL> variable c clob
SQL> exec dbms_utility.expand_sql_text('select * from BONUS',:c);
PL/SQL procedure successfully completed.
 
SQL> print c
 
C
----------------------------------------------------------------------------------------------------------
SELECT "A1"."ENAME" "ENAME","A1"."JOB" "JOB","A1"."SAL" "SAL","A1"."COMM" "COMM" FROM "SCOTT"."BONUS" "A1"

But now, what happens when we drop the table?

SQL> drop table SCOTT.BONUS;
Table dropped.

Do you expect an ORA-00942: table or view does not exist?

SQL> select * from BONUS;
select * from BONUS
*
ERROR at line 1:
ORA-01775: looping chain of synonyms

Here is the 'looping chain of synonyms'. I ask for BONUS. Name resolution first checks for an object in my schema, but there is none:

SQL> select object_type from user_objects where object_name='BONUS';
no rows selected

Then it looks for public synonym and there is one:

SQL> select object_type from all_objects where owner='PUBLIC' and object_name='BONUS';
 
OBJECT_TYPE
-----------------------
SYNONYM

So we check what it is a synonym for:

SQL> select table_owner,table_name from all_synonyms where owner='PUBLIC' and synonym_name='BONUS';
 
TABLE_OWNER TABLE_NAME
------------ ------------
SCOTT BONUS

And here it gets interesting. Despite the column names that include 'TABLE', a synonym can reference any object. So it's not just a matter of replacing the synonym with 'SCOTT.BONUS', which would raise an ORA-00942. It is doing name resolution of BONUS in the context of the user SCOTT, something similar to:

SQL> alter session set current_schema=SCOTT;
Session altered.
SQL> select * from BONUS;

And then, what do you expect from that? There is no table named BONUS but there is a public synonym… and you're back to the beginning:

select * from BONUS
*
ERROR at line 1:
ORA-01775: looping chain of synonyms

Most of the time, you don't have synonyms that reference other synonyms, so you don't really have a 'chain' of synonyms, except when there is only a synonym in the namespace and it's a self-referencing loop. So if you see ORA-01775, check whether the referenced object is missing.
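To track those down, a query like this sketch lists synonyms whose target object no longer exists (database links are excluded because their targets cannot be checked locally):

select s.owner, s.synonym_name, s.table_owner, s.table_name
  from dba_synonyms s
 where s.db_link is null
   and not exists (select 1 from dba_objects o
                   where o.owner=s.table_owner and o.object_name=s.table_name);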

 

Cet article ORA-01775: looping chain of synonyms est apparu en premier sur Blog dbi services.
