Yann Neuhaus

dbi services technical blog

GoldenGate 12.2 – Installation of the monitoring agent

20 hours 37 min ago

As described in my last post, GoldenGate Studio requires the monitor agent on each GoldenGate instance. The goal of this agent is to allow GoldenGate Studio to interact with GoldenGate, for example, to deploy a new solution.

So let's start with the installation of the agent.

Download

The first step is to download the monitor agent (Oracle GoldenGate Monitor). It is available here.

Install_agent_monitor_8

Installation

The second step is to install the product.

After you have transferred the installer to the server, decompress it.

oracle@srvoracle:/tmp/ [DB1] cd monitor/
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 420092
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
oracle@srvoracle:/tmp/monitor/ [DB1] unzip fmw_12.2.1.0.0_ogg_Disk1_1of1.zip 
Archive:  fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
  inflating: fmw_12.2.1.0.0_ogg.jar  
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 840392
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.2.1.0.0_ogg_Disk1_1of1.zip
-r-xr-xr-x 1 oracle oinstall 430387063 Oct 14 08:33 fmw_12.2.1.0.0_ogg.jar

To launch the installer, Java version 8 (1.8) or higher is required. If it is not available, it can be downloaded here.

To start the installation, launch the fmw_12.2.1.0.0_ogg.jar.

oracle@srvoracle:/home/oracle/Downloads/jdk1.8.0_73/bin/ [DB1] ./java -jar /tmp/monitor/fmw_12.2.1.0.0_ogg.jar 
Launcher log file is /tmp/OraInstall2016-02-29_01-39-26PM/launcher2016-02-29_01-39-26PM.log.
Extracting files.......
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2494.801 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 28817 MB    Passed


Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-02-29_01-39-26PM
Log: /tmp/OraInstall2016-02-29_01-39-26PM/install2016-02-29_01-39-26PM.log
Logs successfully copied to /u01/app/oraInventory/logs.

The OUI (Oracle Universal Installer) will start. On the first screen just click on the next button.

Install_agent_monitor_1

On the next screen, we can choose the option for the updates. In my case, I leave the option to skip the auto updates.

Install_agent_monitor_2

Fill in the desired software location for the GoldenGate agent.

Install_agent_monitor_3

Select the option to install only the agent monitor.

Install_agent_monitor_4

The OUI will test the system configuration and the java version.

Install_agent_monitor_5

The OUI provides a summary of the configuration. Click the Next button if everything is OK.

Install_agent_monitor_9

The installation is done.

Install_agent_monitor_6

At the end OUI provides a summary of the installation with the location of the logs.

Install_agent_monitor_7

Now GoldenGate agent is installed.

Configuration

Create instance

To create the instance of the agent, go where the binaries have been installed. In this example, it is /u01/app/oracle/product/jagent/oggmon/ogg_agent.

After that, launch the script createMonitorAgentInstance.sh.

oracle@srvoracle:/u01/app/oracle/product/jagent/oggmon/ogg_agent/ [DB1] ./createMonitorAgentInstance.sh 
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.0/gg_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/12.1.3.0/jagent
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20160229140552.sh) : 2
Sucessfully created OGG Agent instance.
Create password

The agent needs a password to work. All the passwords will be stored in a wallet. For this go to the ORACLE_HOME_AGENT/bin. In my case, /u01/app/oracle/product/12.1.3.0/jagent/bin.

Launch the script pw_agent_util.sh.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
Feb 29, 2016 2:18:55 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Feb 29, 2016 2:18:56 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.
Enable monitoring

To enable the monitoring, launch the ggsci command and edit the GLOBALS parameter file.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ggi 

GGSCI (srvoracle) 2> edit params ./GLOBALS

GGSCI (srvoracle) 4> view params ./GLOBALS

GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint
ENABLEMONITORING

Now restart the ggsci command and the jagent appears when doing an “info all”.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/bin/ [DB1] ggi

GGSCI (srvoracle) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           
JAGENT      STOPPED

To finalize, delete the datastore and recreate one.

oracle@srvoracle:/u01/app/oracle/product/12.1.3.0/jagent/ [DB1] ggi

GGSCI (srvoracle) 1> delete datastore
Are you sure you want to delete the datastore? yes

2016-02-29 14:33:30  INFO    OGG-06492  Datastore deleted.

GGSCI (srvoracle) 2> create datastore
Profile 'Trace' added.

2016-02-29 14:33:55  INFO    OGG-06489  Datastore created.

Now you can start the manager process and the jagent.
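
As a reminder, both are started from GGSCI (a minimal sketch; the prompt numbering will differ on your system):

GGSCI (srvoracle) 1> start manager
GGSCI (srvoracle) 2> start jagent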

Conclusion

Now that the agents have been installed on each instance, all the prerequisites for GoldenGate Studio are met.

In the next blog, I will show you how to use GoldenGate Studio to deploy a solution.

 


SQL Server 2016: Always Encrypted – part 2

20 hours 48 min ago

In my last blog post about SQL Server 2016 Always Encrypted, here, I showed how to use this new functionality, but also that you have to separate the different execution contexts with an application server, a database server and a security server, so that the certificate is not available to all users, which would break the segregation.

Let's see how to build those environments.
In my security server named SQL2016-2, I first create a self-signed certificate with PowerShell after having imported the PKI module. The certificate is stored in the local machine with:

  • DnsName, DNS name of the certificate: Demo_AlwaysEncrypted.cert
  • CertStoreLocation, specifies the certificate store in which to store the new certificate: Cert:\LocalMachine\MY
  • Provider, specifies the name of the KSP or CSP that this cmdlet uses to create the certificate: Microsoft Strong Cryptographic Provider
  • KeyAlgorithm, specifies the name of the algorithm that creates the asymmetric keys that are associated with the new certificate: RSA
  • KeyLength, specifies the length, in bits, of the key that is associated with the new certificate: 2048
  • HashAlgorithm, specifies the name of the hash algorithm to use to sign the new certificate: sha256

PowerShell script:

Import-Module PKI
New-SelfSignedCertificate -DnsName "Demo_AlwaysEncrypted.cert" -CertStoreLocation Cert:\LocalMachine\MY -Provider "Microsoft Strong Cryptographic Provider" -KeyAlgorithm RSA -KeyLength 2048 -HashAlgorithm sha256

Blog_AlwaysEncrypted_part2_1

I now have my new certificate.
I open my SQL Server instance and navigate in my database SecurityDB to Security, Always Encrypted Keys, then right click on Column Master Keys and select "New Column Master Key…".
I create a new Column Master Key with the certificate I have just created on my local machine and I name it CMK2:

Blog_AlwaysEncrypted_part2_2

The code to create my Master Key is the following; you can get it by clicking on the Script button in the screen above:

CREATE COLUMN MASTER KEY [CMK2]
WITH
(
       KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
       KEY_PATH = N'LocalMachine/My/A3DF88FA53ED5347ABCF709D7A15621CDC358456'
)
GO

I now create a Column Encryption Key named CEK2 with the Column Master Key I created before, named CMK2:

Blog_AlwaysEncrypted_part2_3

The code to create my Encryption Key is the following:

CREATE COLUMN ENCRYPTION KEY [CEK2]
WITH VALUES
(
       COLUMN_MASTER_KEY = [CMK2],
       ALGORITHM = 'RSA_OAEP',
       ENCRYPTED_VALUE = 0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F00610033006400660038003800660061003500330065006400350033003400370061006200630066003700300039006400370061003100350036003200310063006400630033003500380034003500360098721948A1283B0E5375B4E7F0A5F2EED05EDB0A87814BF66EABCBE2B4E75B5A759CF48D7D87D6A2C44A6DB2B587992E412EAF20D3FE826844FE76B1BC764E769C911486150E08EB972DBF406FF51738EFAA7F831A27A540F948CBC673273B1205A2CEF33AE77A0AF01DB3589A4A746151DC8C699DAB7AD1B99910A598EA7258670F1D5C0600704A9BB440F86761F41E8C72E52FFF6A7348B65F62E60D90B60CF1FC00B1B473F40E82B140219A317760CBC8591E07215AD114FAF80F08361C607823DB44037C455514B3789661560613638D058D7965846962B9FD3D9ECF0AC62325043AB40D892683744DC994E868C5F0E3FD14277D8941977426AC59DEA7EEF1394641DB314C2DCB083D3D5A12B97A131986D9834CBF2837908D3EF54C9104D92371C8A9EDC9B22555D8D615440D21B03393303FD4A9C0047282091178F31323A6EF4313576470349F32F106D38ABC3B0A1C4F47E870EA43F7B90D85D4F6C6686368E4E5EFC416EA08241F75F2D7B4F4B700114CA545AA4310829B2691A6D2C553F8F8B7E3983FC99F4E0A28BD79C84F50B7F6ABB17419DEAE06ACA694D68B00B48DBC0D66B46847A389C8A587F138CEF7791845A999EA45849D2FAA45BDA88FE134C70E592937981A7B2A18397CC615BB435A1EDDD66CF58119D89B24190744F4326718CBCDC1182AD20C8023EECB3994569844270DC90020B20D70B5CCC5
)
GO

I will now transfer those two certificates to my SQL Server VM, named SQL2016-1, in order to encrypt my table columns with them. To do that, I will execute the scripts I have generated before in my SQL Server instance in my security VM. I have created a second database named TestDB2 where I will execute my scripts:

Blog_AlwaysEncrypted_part2_4

After execution of the script I have my Column Master Key and my Column Encryption Key in the Security\Always Encrypted Keys path of my TestDB2 database.

I will now create the same table that I have on my TestDB database (see my previous blog post) in my new database TestDB2. For that I will execute the following script with the same encrypted columns (CustAccountNumber, CustAccountBalance, CustCreditCardNumber) but with my new Column Encryption Key CEK2:

Blog_AlwaysEncrypted_part2_5
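
The full script is in the screenshot above; as a rough sketch (data types inferred from the PowerShell insert further down, encryption types as chosen in part 1, so double-check it against your own generated script), it looks like this:

CREATE TABLE dbo.Customer
(
    CustNb uniqueidentifier NOT NULL,
    CustName varchar(50),
    CustSurname varchar(50),
    CustAccountNumber varchar(50) COLLATE French_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2,       -- key created above
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustAccountBalance int
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2,
                        ENCRYPTION_TYPE = DETERMINISTIC,     -- allows equality filtering
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustCreditCardNumber varchar(50) COLLATE French_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustCountry varchar(200)
);
GO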

My table is created with the specified columns encrypted.
I will now export my certificate from my security server to my application server with PowerShell.
First, I retrieve my certificate from the store and export it to a file with a password:

PowerShell script:

# I retrieve the Certificate from the store
$MyCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "Demo_AlwaysEncrypted.cert"
# I create a password to protect my certificate
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
# I export my certificate as PFX Certificate Archive
Export-PfxCertificate -Cert $MyCert -FilePath C:\Temp\MyCert.pfx -Password $pwd

Blog_AlwaysEncrypted_part2_6

I'm now able to copy my certificate to my application server and import it:

PowerShell script:

# I import the certificate from my file
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
Import-PfxCertificate -CertStoreLocation Cert:\LocalMachine\My -FilePath C:\Temp\MyCert.pfx -Password $pwd

Blog_AlwaysEncrypted_part2_7

My certificate is now in the store of my application server and at this point I will be able to insert and read plain text data from my application server. I will use a PowerShell script to insert data. I will not explain this whole script, but it's good to know that:

  • The connection string needs to enable Always Encrypted for the client application, here PowerShell, with this parameter: Column Encryption Setting=enabled
  • The application passes plain text data in SqlParameter objects to the driver. The driver will encrypt the values and send encrypted values to the SQL Server VM

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlConn.Open()
$sqlcmd = New-Object System.Data.SqlClient.SqlCommand
$sqlcmd.Connection = $sqlConn
$sqlcmd.CommandText = "INSERT INTO dbo.Customer (CustNb, CustName, CustSurname, CustAccountNumber, CustAccountBalance, CustCreditCardNumber, CustCountry) VALUES (@CustNb, @CustName, @CustSurname, @CustAccountNumber, @CustAccountBalance, @CustCreditCardNumber, @CustCountry)"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustNb",[Data.SQLDBType]::uniqueidentifier)))
$sqlcmd.Parameters["@CustNb"].Value = [GUID]::NewGuid()
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustName",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustName"].Value = "Durand"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustSurname",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustSurname"].Value = "Laurent"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustAccountNumber"].Value = "1234-234589"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountBalance",[Data.SQLDBType]::Int)))
$sqlcmd.Parameters["@CustAccountBalance"].Value = 1256890
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCreditCardNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustCreditCardNumber"].Value = "1234-4245-4563-4564"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCountry",[Data.SQLDBType]::VarChar,200)))
$sqlcmd.Parameters["@CustCountry"].Value = "France"
$sqlcmd.ExecuteNonQuery();
$sqlConn.Close()

Blog_AlwaysEncrypted_part2_9

I have now inserted a row in my Customer table. If I try to do the same insert without the parameter Column Encryption Setting=enabled, I receive an error message:

Blog_AlwaysEncrypted_part2_10

Now, if I want to read my table from my application server I can execute a simple select * with this PowerShell script and my parameter Column Encryption Setting=enabled:

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlConn.Open()
$sqlcmd = $sqlConn.CreateCommand()
$query = "SELECT * FROM dbo.customer"
$sqlcmd.CommandText = $query
$adp = New-Object System.Data.SqlClient.SqlDataAdapter $sqlcmd
$data = New-Object System.Data.DataSet
$adp.Fill($data) | Out-Null
$data.Tables

Blog_AlwaysEncrypted_part2_11

I see plain text data. But without the parameter:

Blog_AlwaysEncrypted_part2_12

I'm able to read the table, but the encrypted columns appear as blobs and cannot be read.
If I go back to my SQL Server VM, even with SYSADMIN rights and with the connection string parameter Column Encryption Setting=enabled, I am not allowed to query the table:

Blog_AlwaysEncrypted_part2_13

The error indicates that, on my SQL Server VM, I don't have access to the certificate used as the column master key, and so I'm not able to decrypt the encrypted columns.
If I don't use the parameter Column Encryption Setting=enabled during my connection via SQL Server Management Studio and try to query my table:

Blog_AlwaysEncrypted_part2_14

I can see my table row(s), but the encrypted columns appear as BLOBs and so are protected from unintended readers, which is exactly the goal of this new feature: preventing SQL Server DBAs or users with certain rights from being able to view critical data.
In order to keep this security, the certificate must not be saved in the certificate store of the SQL Server VM.
That’s all Folks! ;-)

 


Documentum Multiple ADTS: Switching rendition queue

Wed, 2016-06-29 08:29

As part of my previous posts about having 2 rendition servers for one docbase (see below), I'll show you how to simply switch a rendition queue item to the other server:
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-1/
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-2/

I had an issue at a customer where one of the two rendition servers had been stuck for 2 days. As I explained in my previous posts, each server reserves a group of items from the queue for it to process. Let's say the threshold is set to 5 items. Each server will reserve 5 items in dmi_queue_item and set the attribute sign_off_user to itself, e.g. RNDSERVER_DOCBASE1.

Then it will process the items one by one; once one is done, it will reserve a new one from the queue, and so on.

The problem is: if the rendition server is stuck for whatever reason, all reserved items will NOT go back to the available pool. It means that they will stay reserved by THIS rendition server until you fix the server and it starts processing them again.

You can imagine what I got at the customer: some documents had not been rendered for 2 days!
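
Before touching anything, you can check how many items each rendition server has currently reserved with a quick DQL query (a sketch only; dm_autorender_win31 is the default rendition queue name, adapt it if yours differs):

select sign_off_user, count(*)
from dmi_queue_item
where name = 'dm_autorender_win31'
group by sign_off_user;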

So here is the simplest solution to put the items back in the pool:

update dmi_queue_item objects set sign_off_user ='' where sign_off_user ='RNDSERVER_DOCBASE1';

Hence all items will be set as available again. The other rendition server should now reserve them, as the stuck server can't reserve more items.

If a big file is being processed by the first server and you want the documents to be processed by the other one, you can reserve items yourself manually with:

update dmi_queue_item objects set sign_off_user='RNDSERVER2_DOCBASE1' where item_id in ('09xxxx','09xxxx');

If you have any questions please use the comment section.

 


PostgreSQL as a central reporting hub? – Yes, of course

Tue, 2016-06-28 15:31

For every client we do projects for, there usually are database systems from various vendors. In most places you at least find some Oracle and MS SQL Server databases. Probably somewhere there are even some MySQL/MariaDB or PostgreSQL instances running. The Oracle and MS SQL Server stuff is almost always there because vendors require it. For MySQL/MariaDB the common use cases are web applications, ticketing systems and home-grown developments. At some point in time there might be a requirement to centralize the important data of all these databases into a central reporting hub. The more data you have, the more there is a requirement to analyze it and to create reports that drive some sorts of decisions. PostgreSQL is very well prepared to assist in this. If you have read the last posts about connecting your PostgreSQL instance to either Oracle, MS SQL Server, MariaDB/MySQL or even other PostgreSQL instances, you might already know in which direction this post will go.

This is a sample use case: There is a requirement to centralize all the sales figures of a company into one big reporting database. The (simplified) database landscape in this company looks exactly as described above:

pg_reporting_1

There is one Oracle database, one MS SQL Server instance, one MariaDB/MySQL instance and one PostgreSQL instance. Each of them holds some sales figures which shall be integrated into the reporting database. For the demo setup I use the Dell DVD Store Database sample schema. You can read how to set this up for:

As the goal is to have a PostgreSQL instance as a central reporting hub the desired database landscape will approximately look like this:

pg_reporting_2

Lets assume all the foreign data wrappers are already setup in the central PostgreSQL instance:

(postgres@[local]:4445) [postgres] > \dx
                                                 List of installed extensions
     Name     |    Version    |   Schema   |                                    Description                                    
--------------+---------------+------------+-----------------------------------------------------------------------------------
 mysql_fdw    | 1.0           | public     | Foreign data wrapper for querying a MySQL server
 oracle_fdw   | 1.1           | public     | foreign data wrapper for Oracle access
 plpgsql      | 1.0           | pg_catalog | PL/pgSQL procedural language
 postgres_fdw | 1.0           | ds2_oracle | foreign-data wrapper for remote PostgreSQL servers
 tds_fdw      | 2.0.0-alpha.1 | public     | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server)

In addition lets assume all the foreign servers and all the user mappings are there and are working:

(postgres@[local]:4445) [postgres] > select srvname,srvoptions from pg_foreign_server;
   srvname    |                                      srvoptions                                       
--------------+---------------------------------------------------------------------------------------
 mysql_server | {host=192.168.22.41,port=3306}
 mssql_svr    | {servername=192.168.22.102,port=1433,database=ds2,tds_version=7.3,msg_handler=notice}
 oracle_srv   | {dbserver=//192.168.22.42/DELLDVD}
 postgres_srv | {host=192.168.22.40,port=5432,dbname=ds2}
(4 rows)

(postgres@[local]:4445) [postgres] > select * from pg_user_mappings;
 umid  | srvid |   srvname    | umuser | usename  |               umoptions                
-------+-------+--------------+--------+----------+----------------------------------------
 65547 | 65546 | mysql_server |     10 | postgres | {username=web,password=web}
 65587 | 65586 | mssql_svr    |     10 | postgres | {username=ds2user,password=ds2}
 65615 | 65614 | oracle_srv   |     10 | postgres | {user=DS2,password=ds2}
 65676 | 65675 | postgres_srv |     10 | postgres | {user=ds2,password=ds2}

As the central reporting database is already connected to all the other database systems how could you organize the fetching of the data? One approach is to create a separate schema for each of the foreign databases and one additional schema that combines the data:

(postgres@[local]:4445) [postgres] > create schema ds2_mssql;
(postgres@[local]:4445) [postgres] > create schema ds2_oracle;
(postgres@[local]:4445) [postgres] > create schema ds2_mysql;
(postgres@[local]:4445) [postgres] > create schema ds2_postgresql;
(postgres@[local]:4445) [postgres] > create schema ds2_combined;
(postgres@[local]:4445) [postgres] > \dn
      List of schemas
      Name      |  Owner   
----------------+----------
 ds2_combined   | postgres
 ds2_mssql      | postgres
 ds2_mysql      | postgres
 ds2_oracle     | postgres
 ds2_postgresql | postgres
 public         | postgres
(6 rows)

The next step is to create the foreign tables we want to work with.

For Oracle, MySQL/MariaDB and PostgreSQL we can use the import foreign schema command:

postgres=# import foreign schema "DS2" from server oracle_srv into ds2_oracle;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "DS2" from server mysql_srv into ds2_mysql;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "public" from server postgres_srv into ds2_postgresql;
IMPORT FOREIGN SCHEMA

For MS SQL Server we need to specify the foreign tables on our own (there is a feature request to implement import foreign schema):

create foreign table ds2_mssql.orders 
  ( orderid numeric not null
  , orderdate timestamp(0) without time zone not null
  , customerid numeric                                
  , netamount numeric(12,2) not null 
  , tax numeric(12,2) not null 
  , totalamount numeric(12,2) not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orders', row_estimate_method 'showplan_all');

create foreign table ds2_mssql.orderlines
  ( orderlineid numeric not null 
  , orderid numeric not null 
  , prod_id numeric not null 
  , quantity numeric not null 
  , orderdate timestamp(0) without time zone not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orderlines', row_estimate_method 'showplan_all');

Having all the foreign tables available we can start to work with the data either by creating materialized views:

create materialized view ds2_combined.mv_orders as
  select * from ds2_mssql.orders
  union all
  select * from ds2_mysql."ORDERS"
  union all
  select * from ds2_oracle.orders
  union all
  select * from ds2_postgresql.orders
  with no data;
refresh materialized view ds2_combined.mv_orders with data;

… or by importing the data into PostgreSQL and then build reports on top of that:

BEGIN;
  insert into ds2_combined.orders select * from ds2_mssql.orders;
  insert into ds2_combined.orders select * from ds2_mysql."ORDERS";
  insert into ds2_combined.orders select * from ds2_oracle.orders;
  insert into ds2_combined.orders select * from ds2_postgresql.orders;
END;
BEGIN;
  insert into ds2_combined.orderlines 
         select * from ds2_mssql.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_mysql."ORDERLINES";
  insert into ds2_combined.orderlines 
         select * from ds2_oracle.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_postgresql.orderlines;
END;

Having all the data locally available you can do whatever you want with it. Have fun with reporting on your data …
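
As a small example of what such reporting could look like, here is a monthly sales summary over the combined orders (column names as defined in the foreign tables above):

select date_trunc('month', orderdate) as order_month,
       count(*)                       as nb_orders,
       sum(totalamount)               as total_amount
  from ds2_combined.orders
 group by date_trunc('month', orderdate)
 order by order_month;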

Btw: The source for this was a session at the Swiss PGDAY 2016. You can download the slides there.

 


Statspack Idle Events

Tue, 2016-06-28 11:26

When you don't have Enterprise Edition + Diagnostic Pack, you cannot use AWR but you can, and should, install Statspack. Statspack is still there, but unfortunately it does not evolve a lot. The most important section, the 'Top 5 Timed Events', shows only foreground events, or is supposed to do so. When a user process waits on a background process, this section must count only the foreground wait and not the activity of the background process, or we would have double accounting. The background activity is included in 'Idle' events in order to be excluded from this section.
But unfortunately, new versions come with new wait events, and the list of Statspack idle events is not up to date anymore.

Here is the ‘Top 5 Timed Events’ I got from a database between 22:00 and 23:00 where there is no application activity:

Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
LGWR worker group idle 6 22,049 ###### 65.2
AQPC idle 120 3,602 30014 10.7
heartbeat redo informer 3,568 3,601 1009 10.7
lreg timer 1,195 3,598 3011 10.6
direct path read 31,221 466 15 1.4
-------------------------------------------------------------

Hmm. What can you do with that? Idle events and timers are at the top. Direct path read seems to be minimal. And no CPU usage?
Obviously, something is wrong here.

Statspack uses a fixed list of wait events that are considered as 'idle' events, and this list is stored at Statspack installation time in STATS$IDLE_EVENT.
This comes from an age where wait classes were not there. In current versions, a more realistic list of wait events is in V$EVENT_NAME where wait_class='Idle'.

Let’s compare them (that’s in 12.1.0.1)

First, are there some non-idle events that are considered as ‘idle’ by Statspack?


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s left outer join V$EVENT_NAME v on s.event=v.name where v.wait_class<>'Idle';
 
STATSPACK_IDLE_EVENT NAME WAIT_CLASS
--------------------------------------- --------------------------------------- ----------
null event null event Other
SQL*Net message to client SQL*Net message to client Network
SQL*Net more data from client SQL*Net more data from client Network
KSV master wait KSV master wait Other
parallel recovery slave wait for change parallel recovery slave wait for change Other

The goal of this blog post is not to detail the meaning of each of those events (search for them on tanelpoder.com as a good start for that), but if they are now considered as non-idle, Statspack should obey the same rule.

Then we can check which idle events are not in the Statspack list:


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s right outer join V$EVENT_NAME v on s.event=v.name where v.wait_class='Idle' and s.event is null;
 
STATSPACK_IDLE_EVENT NAME WAIT_CLASS
-------------------- ------------------------------------------ ----------
OFS idle Idle
heartbeat redo informer Idle
LGWR worker group idle Idle
Recovery Server waiting for work Idle
Recovery Server waiting restore start Idle
Recovery Server Surrogate wait Idle
Recovery Server Servlet wait Idle
Recovery Server Comm SGA setup wait Idle
parallel recovery coordinator idle wait Idle
recovery sender idle wait Idle
recovery receiver idle wait Idle
recovery merger idle wait Idle
virtual circuit next request Idle
lreg timer Idle
REPL Apply: txns Idle
REPL Capture/Apply: messages Idle
REPL Capture: archive log Idle
PL/SQL lock timer Idle
Emon coordinator main loop Idle
Emon slave main loop Idle
AQ: 12c message cache init wait Idle
AQ Cross Master idle Idle
AQPC idle Idle
Streams AQ: load balancer idle Idle
Sharded Queues : Part Maintenance idle Idle
REPL Capture/Apply: RAC AQ qmn coordinator Idle
iowp msg Idle
iowp file id Idle
netp network Idle
gopp msg Idle

There are a lot of them. We can see a lot of idle events that have been introduced in recent versions.

The Statspack list is an old list. Here is how to refresh it:


delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
commit;

Once I did that and ran a new Statspack report on the same snapshots as above, I get a more realistic 'Top 5 Timed Events':


Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
direct path read 31,221 466 15 48.7
CPU time 310 32.4
db file sequential read 49,137 77 2 8.0
SQL*Net vector data to client 15,189 31 2 3.3
enq: TM - contention 1 24 23937 2.5

Actually, this 22:00 to 23:00 time period is when the maintenance jobs are running. dbms_space.auto_space_advisor_job_proc likes to read your tables in bulk in order to see if there is some free space. And I don't like this 24 second TM lock wait at a time when I thought the database was not in use. This was hidden in the original report.

Patch?

Statspack is still supported and there’s a patch to add the following events as idle:
"virtual circuit next request" "AQ Cross Master idle" "AQ: 12c message cache init wait" "AQPC idle" "Emon coordinator main loop" "Emon slave main loop" "LGWR worker group idle" "OFS idle" "REPL Apply: txns" "REPL Capture/Apply: RAC AQ qmn coordinator" "REPL Capture/Apply: messages" "REPL Capture: archive log" "Recovery Server Comm SGA setup wait" "Recovery Server Servlet wait" "Recovery Server Surrogate wait" "Recovery Server waiting for work" "Recovery Server waiting restore start" "Sharded Queues : Part Maintenance idle" "Streams AQ: load balancer idle" "gopp msggopp msg" "heartbeat redo informer" "iowp file id" "iowp msg" "lreg timer" "netp network" "parallel recovery coordinator idle wait" "recovery merger idle wait" "recovery receiver idle wait" "recovery sender idle wait" "imco timer" "process in prespawned state"
(Nice way to be referenced by google for all those improbable wait events, isn’t it?)

However, I think that filling STATS$IDLE_EVENT from V$EVENT_NAME, or maybe even replacing it with a view, can be a better long-term solution. Each version comes with new wait events and it seems that Statspack evolves only through patches.
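
A sketch of that idea (run as the Statspack owner, typically PERFSTAT, which needs a direct grant on V_$EVENT_NAME; this is just an idea, not a supported change to the Statspack schema):

-- as SYS: grant select on v_$event_name to perfstat;
drop table STATS$IDLE_EVENT;
create view STATS$IDLE_EVENT as
  select name as event from V$EVENT_NAME where wait_class = 'Idle';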

 


SQL Server 2016 – Query Store: retrieve query that doesn’t appear!

Tue, 2016-06-28 07:25

For our SQL Server 2016 event in September, I am studying the new Query Store functionality.
My colleague David Barbarin wrote a few months ago about Query Store and how it works.

Simple SELECT Query

To begin, I execute a simple SELECT on a table with 3 different methods:

SELECT * FROM QS_test;
exec sp_executesql N'SELECT * FROM QS_test'
EXEC sp_GetQS_test;

The "sp_GetQS_test" is a stored procedure containing the select statement.
I created a little query with the Query Store specific DMVs to analyze the query:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id 
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id   
  WHERE query_sql_text='SELECT * FROM QS_test';

QS_01
As you can see in the result, all 3 queries are present with the same query text (‘SELECT * FROM QS_test’) in the query store.

SELECT Query with a Where clause

I continue my test with a select and a where clause:

  SELECT * FROM QS_test WHERE rid=5

I run my query to find the query in the query store:
QS_02
And, oh surprise, no query found! The query does not appear in the Query Store…
I rerun my query without the where clause to see if I find something:
QS_03
The result gives me a query written differently:
(@1 tinyint)SELECT * FROM [QS_test] WHERE [rid]=@1

This query goes through parameterization, and to retrieve this information we use a new function in SQL Server 2016: fn_stmt_sql_handle_from_sql_stmt

Function sys.fn_stmt_sql_handle_from_sql_stmt

This function gives us the SQL handle for the query.
QS_04
Then, I add the function to my query to find it in the Query Store:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,qsq.last_compile_start_time,
   qsq.last_execution_time,qsq.avg_compile_duration,qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id
  CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5',NULL) fsshfss
  WHERE qsqt.statement_sql_handle=fsshfss.statement_sql_handle;

QS_05

It's done, I retrieve the query thanks to this new function.
You can notice that I use the statement_sql_handle column and not the query_sql_text column in the WHERE clause.

I have tested with the query_sql_text column and you can see here the error that I get…
QS_06
The query_sql_text from the function uses SQL_Latin1_General_CP1_CI_AS, while both my database and my instance use French_CI_AS. This is not hopeless…
Then, if you want to use the query with query_sql_text, you just need to specify the collation with the COLLATE keyword.
QS_07

 


Trace Flag 4199 or not in SQL Server 2016?

Tue, 2016-06-28 04:34

Some trace flags like T1117 and T1118 are no longer needed, as you can read on David's blog.
But that's not all; there is also T4199…

What does the Trace Flag T4199 do?

This Trace Flag enables all query optimizer fixes.
A lot of DBAs have enabled this Trace Flag globally during the build of a new server.
If you want to take advantage of an optimizer enhancement or fix, T4199 becomes a necessity…
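
As a reminder, enabling it globally was typically done with the -T4199 startup parameter or at runtime, for example:

DBCC TRACEON (4199, -1);  -- -1 applies the trace flag globally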

But now, with SQL Server 2016, this is an old story!

SQL Server 2016 eliminates the T4199

In SQL Server 2016, you don't need to enable this Trace Flag anymore, so forget your old habit!
It is automatically included when you change the COMPATIBILITY LEVEL to 130:

USE [master]
GO
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130
GO
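
Note that if a database has to stay on a lower compatibility level, SQL Server 2016 also exposes the optimizer hotfixes as a database scoped configuration (shown here as a hint, check it against your environment):

ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;
GO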

Advice: if you find an unexpected/poor plan, use the Query Store to analyze it and force a plan!
More information in KB974006.

 


Swiss PGDAY 2016, Slides are online

Tue, 2016-06-28 03:15

Last Friday the Swiss PGDAY happened in Rapperswil. All the slides as well as some pictures are now available (tab "Programm"):

pgday_logo

 


Manage DDL inside SharePlex

Fri, 2016-06-24 15:30

In a previous blog (http://blog.dbi-services.com/discovering-shareplex-for-oracle/) about SharePlex, we presented how to set up a replication between 2 environments. In this article we will try to see how SharePlex deals with replicating DDL statements.
Before starting, we present below our environment and our configuration file. We are just replicating scott_atlas to scott_atlasrep2. We suppose that SharePlex is already configured.

Server Name             Database Name   Oracle Version   What
atlas.localdomain       SPLEXDB         12.1.0.2         Source
atlasrep2.localdomain   SPLEXSTR2       12.1.0.2         Target
[oracle@atlas config]$ cat ddl_config_atlasrep2.cnf
datasource:o.SPLEXDB

#source tables            target tables            routing map

expand scott_atlas.%       scott_atlasrep2.%       atlasrep2@o.SPLEXSTR2

 

Let’s start SharePlex on both source and target

[oracle@atlas ~]$ $SP_HOME/bin/sp_cop -u $NAME_FOR_ALERTING &
[1] 2617
[oracle@atlas ~]$

*******************************************************
* SharePlex for Oracle Startup
* Copyright 2016 Dell, Inc.
* ALL RIGHTS RESERVED.
* Protected by U.S. Patents: 7,461,103 and 7,065,538
* Version: 8.6.4.66-m64-oracle120
* VarDir : /u01/app/shareplex/vardir/splexdb_864_12_2105
* Port : 2105
*******************************************************
[oracle@atlasrep2 ~]$ $SP_HOME/bin/sp_cop -u $NAME_FOR_ALERTING &
[1] 2437
[oracle@atlasrep2 ~]$

*******************************************************
* SharePlex for Oracle Startup
* Copyright 2016 Dell, Inc.
* ALL RIGHTS RESERVED.
* Protected by U.S. Patents: 7,461,103 and 7,065,538
* Version: 8.6.4.66-m64-oracle120
* VarDir : /u01/app/shareplex/vardir/splexstr2_864_12_2105
* Port   : 2105
*******************************************************

Now let’s verify that SharePlex processes are running on both source and target

[oracle@atlas ~]$ $SP_HOME/bin/sp_ctrl


sp_ctrl (atlas:2105)> show

Process    Source                               Target                 State                   PID
---------- ------------------------------------ ---------------------- -------------------- ------
Capture    o.SPLEXDB                                                   Running                2618
Read       o.SPLEXDB                                                   Running                2620
Export     atlas                                atlasrep2              Running
[oracle@atlasrep2 ~]$  rlwrap $SP_HOME/bin/sp_ctrl

sp_ctrl (atlasrep2:2105)> show

Process    Source                               Target                 State                   PID
---------- ------------------------------------ ---------------------- -------------------- ------
Import     atlas                                atlasrep2              Running                2734
Post       o.SPLEXDB-atlas                      o.SPLEXSTR2            Running                2438

SharePlex provides many parameters to handle DDL replication. We are going to see some of them. To display SharePlex parameters, just execute

sp_ctrl (atlas:2105)> list param

SP_OCT_REPLICATE_DDL:  Possible values 0/1/2/3

Manages the replication of ALTER and TRUNCATE statements. It can have the following values

0: No replication of both ALTER and TRUNCATE

1: Only ALTER replication

2: Only TRUNCATE replication

3: Both ALTER and TRUNCATE replication (default value)

Let's show an example with the default value 3: ALTER and TRUNCATE DDL performed in the scott_atlas schema should be replicated. From the source and the target we can see that the table bonus is not empty.

From the source

SQL> select name from v$database;

NAME
---------
SPLEXDB
SQL> conn scott_atlas/tiger
Connected.
SQL> select * from bonus;

ENAME      JOB              SAL       COMM
---------- --------- ---------- ----------
Moise      IT              2300        200

 

From the target

SQL> select name from v$database;

NAME
---------
SPLEXSTR

SQL> conn scott_atlasrep2/tiger
Connected.
SQL> select * from bonus;

ENAME      JOB              SAL       COMM
---------- --------- ---------- ----------
Moise      IT              2300        200

Let’s truncate table bonus and let’s add 2 new columns on the source

SQL> show user
USER is "SCOTT_ATLAS"
SQL> truncate table bonus;

Table truncated.

SQL> alter table bonus add (col_net number,col_var number);

Table altered.

SQL>  select * from bonus;

no rows selected

We can see that both DDLs are replicated on the target

SQL> show user
USER is "SCOTT_ATLASREP2"
SQL> select * from bonus;

no rows selected

SQL> desc bonus
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 SAL                                                NUMBER
 COMM                                               NUMBER
 COL_NET                                            NUMBER
 COL_VAR                                            NUMBER

SP_OCT_AUTOADD_ENABLE:  Possible values 0/1

Manages newly created objects that match the active configuration (objects in the configuration file). The default value is 1, meaning that a newly created table in the scott_atlas schema will automatically be added to the replication. SharePlex will also replicate all corresponding DML.
Let’s create a table on the source and let’s insert some data

SQL> show user
USER is "SCOTT_ATLAS"
SQL> create table mytable (id number primary key,message long);

Table created.

SQL> insert into mytable values (1,'Ceci est un message important concernant...');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mytable;

        ID
----------  MESSAGE
------------------------------------------------------------
         1
Ceci est un message important concernant...

We can verify that the new table and corresponding data are replicated on the target.

SQL> show user
USER is "SCOTT_ATLASREP2"
SQL> select * from mytable;

        ID
----------
MESSAGE
--------------------------------------------------------------------------------
         1
Ceci est un message important concernant...

 

SP_OCT_REPLICATE_ALL_DDL: Possible values 0/1

Manages DDL replication for objects that are not listed in the configuration file. The default value is 0 and means that DDL concerning objects which are not in the active configuration will not be replicated. Let's take an example with a toto schema (just remember that in our configuration file only scott_atlas objects are being replicated).
From the source, let's create a table test1 in the toto schema.

SQL> connect toto/toto

Connected.

SQL> create table test1(id number);

 
Table created.

As expected, we can verify that table test1 is not replicated in the toto schema on the target

SQL> conn toto/toto
Connected.
SQL> desc test1
ERROR:
ORA-04043: object test1 does not exist

Now let’s set the value of the parameter to 1 on the source

sp_ctrl (atlas:2105)> set param SP_OCT_REPLICATE_ALL_DDL 1

And let's create a table test2 in the toto schema and insert a row on the source

SQL> create table test2(id number);

Table created.

SQL> insert into test2 values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test2;

        ID
----------
         1

We can see that the DDL (table creation) is replicated on the target but not the corresponding DML.

SQL> select * from test2;

no rows selected

SQL>

That means that if an object is not in the active replication and the parameter SP_OCT_REPLICATE_ALL_DDL is set to 1, its DDL will be replicated but its data will not.

 


Upgrading to 7.2 created a new ACS. How to remove it and test it?

Fri, 2016-06-24 03:46

I had this strange behavior where, once upgraded from 6.7 to 7.2, a new ACS was created. I think it's because the current ACS name didn't fit the new ACS name pattern. Well, it's not a big issue to have 2 ACS configured, but in my case they both pointed to the same port and servlet, so… I had to remove one.

Hence, how can we know which one is used?

That’s easy, just find the acs.properties file located in:

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties

 

In this file you should find the line:

repository.acsconfig=YOUR_ACS_NAME.comACS1

 

In fact my previous ACS was named YOUR_ACS_NAME.cACS1, which is why I think a new one was created. So here you have the ACS in use, and you just have to remove the other one:

delete dm_acs_config objects where object_name = 'YOUR_OLD_ACS_NAME';

Fine, now how can we check that the ACS is working properly?

First you can paste the ACS url in your browser to check if it's running; it should look like this:

http://your-content-server-host:9080/ACS/servlet/ACS

 

If you installed your method server on a port other than 9080, use that one.

You should see the following result (maybe with a different version):

ACS Server Is Running - Version : 7.2.012.0.0114

 

If you can't find the ACS url, log in to Documentum Administrator and navigate to:
Administration -> Distributed Content Configuration -> ACS Server
If you right click on it, you will see the url at the bottom of the page.
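
Another option is a quick DQL query against the ACS configuration objects (a sketch from my notes; attribute names should be double-checked on your version):

select object_name, acs_base_url from dm_acs_config;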

At this point the ACS is running, but is Documentum using it properly?

In order to verify this point, a bit of configuration is needed. Log in to the server on which you have DA installed; in the DA application, search for a log4j.properties file and add the following lines:

log4j.logger.com.documentum.acs=DEBUG, ACS_LOG
log4j.logger.com.documentum.fc.client.impl.acs=DEBUG, ACS_LOG
log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.ACS_LOG.File=${catalina.home}/logs/AcsServer.log
log4j.appender.ACS_LOG.MaxFileSize=10MB
log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.ACS_LOG.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n

You may have to update the line log4j.appender.ACS_LOG.File.

Restart Tomcat or whatever web application server you have. In order to generate logs you'll have to open a document from DA. Let's say we have a document called TESTDOC.doc.
Once you open it you'll have around 3 to 4 lines in AcsServer.log. In order to verify that everything went fine, you should NOT see the following line:
INFO [Timer-161] com.documentum.acs.dfc – [DFC_ACS_LOG_UNAVAILABLE] “userName=”test”, objectId=”0903d0908010000″, objectName=”TESTDOC.doc””, skip unavailable “ACS” serverName=”YOUR_ACS_NAME_HERE” protocol “http”

Instead you must see a kind of ticket/key formed by a lot of letters/numbers. This step validates that you have been served by the ACS.

 


SQL Server 2016: Always Encrypted

Fri, 2016-06-24 01:58

One of the top new features of SQL Server 2016 is the Always Encrypted functionality. Always Encrypted ensures that data stored in a database remains encrypted at all times while it is in the database. There is a complete separation between the persons who own the data and the persons who manage it. Only persons who own the data can see plain text data; persons like DBAs, sysadmins or privileged logins cannot access the data.

Data are encrypted and decrypted in flight between the database and the client application inside a Client Driver on the client side.
The client manages encryption keys which are stored outside of SQL Server.
Let’s start to explain via a concrete example.

I have a table in my database with sensitive data and I want to encrypt this data to prevent the Database Administrator from seeing plain text values for Credit Card Number, Account Number and Account Balance:

Blog_AlwaysEncrypted_1

To enable column encryption, right click on our Customer table and select "Encrypt Columns…":

Blog_AlwaysEncrypted_2

An introduction screen appears explaining how Always Encrypted works, click Next:

Blog_AlwaysEncrypted_3

The next screen shows all the columns of our table and we have to select which ones we want to encrypt. Here the Credit card number, the account balance and the account number:

Blog_AlwaysEncrypted_4

We need to choose the Encryption Type between two options which are described if we click in the “Encryption Type” text:

Blog_AlwaysEncrypted_5

I will choose Randomized for Credit Card Number and Account Number as I don't want to query on those columns and it is more secure. But I choose Deterministic for Account Balance as I want to filter by equality on this field.
Please note that Deterministic encryption uses a column collation with a binary2 sort order for character columns, so the collation for our char columns will be changed from French_CI_AS to French_BIN2 in my example.

For the Column Encryption Key, which is the key that will encrypt the data in each column, I will let the wizard generate one for me. I will also use the same column encryption key for all my encrypted columns:

Blog_AlwaysEncrypted_6

The next screen is used for the master key configuration. The column encryption key is used to encrypt the data in the column, and the column master key protects (encrypts) the column encryption key. Here also, I will use an auto-generated column master key, which will be a self-signed certificate saved in the Windows Certificate Store:

Blog_AlwaysEncrypted_7

In the Run Settings screen, first of all, a warning points out that if insert statements are executed while the encryption/decryption is running, there could be a risk of data loss.
You can normally choose whether you want to run the encryption immediately or generate a PowerShell script to do it later. For the time being the PowerShell generation cannot be chosen… So I will run it now:

Blog_AlwaysEncrypted_8

A summary explains the operations that will be performed: a column master key will be generated and saved in the Windows Certificate store, and the column encryption key will also be generated and used to encrypt my three columns:

Blog_AlwaysEncrypted_9

My columns have been encrypted:

Blog_AlwaysEncrypted_10

Now, I go back to my query, refresh it and see that I can no longer read plain text for my three columns; instead I have varbinary encrypted blobs:

Blog_AlwaysEncrypted_11

There is just a problem in this demo… In fact, I have created my column master key with a self-signed certificate in the context of the current user.
So, this user has access to my certificate and can decrypt the encrypted columns if we add "Column Encryption Setting=enabled" to the connection string, which is the change required to use Always Encrypted.

Blog_AlwaysEncrypted_12

Now, as the certificate used to create the Column Master Key is available, the encrypted columns appear in plain text…

Blog_AlwaysEncrypted_13

We will have to physically separate the certificate used to create my column master key from the SQL Server machine on which the Column Master Key and the Column Encryption Key were created.
I will show you how to do that in a future blog post.

 


Question is: upgrade now to 12.1.0.2 or wait for 12.2 ?

Thu, 2016-06-23 04:27

Let’s look at Release Schedule of Current Database Releases (Doc ID 742060.1)
12.2.0.1 is planned for 2HCY2016 on platforms Linux x86-64, Oracle Solaris SPARC (64-bit), Oracle Solaris x86-64 (64-bit).
2HCY2016 starts next week, but we can imagine that it will not be released immediately, and anyway we will have to wait a few months to download the on-premise(s) version. Add another couple of months to get at least one Proactive Bundle Patch to stabilize that new release. So maybe we can plan a production upgrade for Jan. 2017 on the Linux platform, and Apr. or Jul. 2017 on the Windows platform, right? How does that cope with the 11.2.0.4 and 12.1.0.1 end of support?

Is the delay for 12.2 a problem?

My opinion is that a long time between new releases is not a problem. Most customers want a stable, supported release, not new features that are available only with options and that may introduce bugs. As long as we have support, PSUs and Proactive Bundle patches, everything is ok. We can't complain about software regressions after an upgrade and at the same time look forward to getting new releases in a short period of time.
So in my opinion, waiting 6 months or 1 year to get 12.2 is not a problem, except for book authors that wait for the general availability of 12.2 to release their book https://www.amazon.com/Oracle-Database-Release-Multitenant-Press/dp/1259836096 ;)

Is ‘cloud first’ a problem?

I don't think that 'cloud first' is a problem by itself. We will have to learn the 12.2 features and test them before upgrading our databases, and the Oracle Public Cloud is good for that. But I fear that customers will feel forced to go to the cloud, which is wrong. It was the same when 12.1.0.2 was released for Enterprise Edition: they felt forced to quit Standard Edition, but that was probably not the goal. Especially when those that quit Standard Edition One did it to go to an open-source RDBMS.

Is ‘multitenant first’ a problem?

Yes, 'cloud first' may mean 'multitenant first' because that's the only architecture available for 12c on the Oracle DBaaS. First, you can install a non-CDB if you choose 'virtual image'. And anyway, the OPC trial is a good occasion to test 12.2 and multitenant at the same time. Let me repeat that the multitenant architecture has lots of features available without the multitenant option.

Upgrade planning

Back to the ground, the problem in my opinion is the uncertainty.
Free extended support for 11.2.0.4 ends on 31-May-2017 and we don't know yet if we will have a stable (i.e. with a few PSUs) 12.2 release at that time for on-premises, especially for Windows which will come later than Linux.
Remember that 12.1.0.2 on Windows came two months after the Linux one. And another two months for AIX.

12.1.0.1 support ends on 31-Aug-2016 and 12.2 will not be out at that time, at least for on-premises.

So what?

Customers that expected to get 12.2 before the end of 12.1.0.1 or 11.2.0.4 support will now (since the announcement of 2HCY2016 last month and the recent 'cloud first' announcement) have to plan an intermediate upgrade to 12.1.0.2 before going to 12.2. And because of the 'Release 1' myth, they are afraid of that. Our mission, as consultants and Oracle partners, is to explain that the myth has no reason behind it. Look at Mike Dietrich's blog about that. Hope you will be convinced that versions, releases and patchsets can bring regressions and should be carefully tested, whether it's the 1st, 2nd or 4th number of the version identification that is incremented. A new ORACLE_HOME is new software.

Then, once on 12.1.0.2, you will have the time to plan an upgrade to 12.2 after learning, testing and adapting administration scripts/procedures/habits to the era of multitenant. And you will be ready for the future.

The customers in 11.2.0.4 that do not want to plan that intermediate upgrade will have the option to pay for extended support which ends on 31-DEC-2019.

 


ODA X6-2S and ODA X6-2M for EE and SE2

Tue, 2016-06-21 13:07

After the announcement of the death of SE1 and SE, we wondered what Oracle would do for Small and Medium Enterprises and entry-level products. The answer was the Oracle Cloud Services, but that's for dev/test only because the Public Cloud is not for production, and SMEs will not build a Private Cloud for their few Oracle databases.

Find out how the @Oracle #Database Appliance can simplify your DB administration: #OracleODA https://t.co/iJT0roXrgF pic.twitter.com/mOBPQfu8Tg

— Oracle Hardware (@oraclehardware) June 16, 2016

The answer was announced to partners a few weeks ago and is now official: the 5th generation of ODA, the X6, now has an entry-level version for Standard Edition 2.

Until today, ODA was only for Enterprise Edition. You could install Standard Edition in a guest VM (described here), but with poor I/O performance and without the advantages of ODA.

The smallest ODA was a two-node cluster. The hardware is not expensive for what it is, but it is still expensive when you want only one small server.
Many of our customers asked for a small ODA with Standard Edition, even big customers with only a small database. They consolidate everything on VMware, but for licensing reasons need to isolate Oracle on a physical machine. Two ODAs (as they want a DR site) are too large for their needs.

Good news: there are now smaller, one-node ODAs that can run Standard Edition and are fully automated, even more automated than the previous ODAs.
New products but the same values:

  • simplicity with automated deployment,
  • automated patching,
  • zero-admin storage,
  • integrated VM management,
  • performance (new NVMe to access flash storage)

The new products are:

What is new in the #OracleODA: pic.twitter.com/np3sKjcFXq

— Oracle Hardware (@oraclehardware) June 21, 2016

ODA X6-2S: the entry level for SE2

‘Only one single database’, but this is not a hard limit, just a recommendation related to the capacity.
Virtualized (you can run an application server on it).
1 socket with a 10-core Xeon, 128 GB RAM (up to 384 GB), 10GBase-T and/or 10 GbE SFP+ for the public network.
All-flash storage: 6.4 TB of NVMe flash, expandable to 12.8 TB (usable for the database: 2.4 TB, up to 4.8 TB).
Cost: $18,000, which is about the same as an equivalent Dell server (Oracle says it is 2x cheaper, but we may not have the same price list).

ODA X6-2M: similar but ‘multiple databases’

More resources (2x the sockets, 2x the memory).
Same storage as the X6-2S, but NFS is also supported for additional storage.
Cost: $24,000.

Both can be used for Standard Edition 2 (licensed per socket or NUP) or Enterprise Edition (licensed with Capacity on Demand activated cores or NUP).

ODA HA

This is the X5-2 we already know, with 2 servers (the X6 version is planned for the end of the year).
There are 2 nodes, for RAC, RAC One Node (RON), or single-instance consolidation. It can be Bare Metal or Virtualized.

Just my guess, but there is a good chance that only multitenant will be supported on them in 12.2 (single-tenant when in SE2, of course), so that it is easy to move databases to and from the public cloud services. Actually, the goal is:

Oracle’s new #database appliances provide a bridge between on-premise systems and the #cloud #OracleODA pic.twitter.com/D86QzYLsUD

— Oracle Hardware (@oraclehardware) June 21, 2016

You run dev and test on Oracle Public Cloud and your production on-premises. That’s a full Oracle solution where you can move your (pluggable) databases between the two platforms.

The ODA automation has evolved. It used to be about easy configuration and patching. It now also offers easy provisioning, with the same interface you can find in Enterprise Manager or the Cloud. The Appliance Manager is accessible through a web console or the command line and helps to automate deployment, management, support and monitoring of the ODA.

 

The article ODA X6-2S and ODA X6-2M for EE and SE2 appeared first on Blog dbi services.

Adaptive Plans and SQL Baselines

Tue, 2016-06-21 12:44

I recently encountered an issue with Adaptive Plans and SPM. The documentation says that they work perfectly together, but I remembered a presentation from Nelson Calero at UKOUG TECH15 mentioning strange behavior. I reproduced the issue and share the test case here, as you may encounter it in 12.1, leading to regressions when you capture SQL Plan Baselines.

Cleanup

With all those adaptive features, you need to start clean if you want a reproducible test case:
SQL> -- drop tables
SQL> drop table DEMO1;
Table dropped.
SQL> drop table DEMO2;
Table dropped.
SQL>
SQL> whenever sqlerror exit failure
SQL>
SQL> -- drop all sql plan baselines
SQL> set serveroutput on long 100000 longc 100000
SQL> exec for i in (select sql_handle, plan_name, accepted, origin, created from dba_sql_plan_baselines) loop dbms_output.put_line(''||dbms_spm.drop_sql_plan_baseline(sql_handle=>i.sql_handle,plan_name=>i.plan_name)); end loop;
1
 
PL/SQL procedure successfully completed.
 
SQL> set serveroutput off
SQL> select 0/0 from dba_sql_plan_baselines;
no rows selected
SQL>
SQL> -- flush shared pool
SQL> alter system flush shared_pool;
System altered.
SQL> select 0/0 from v$sql where sql_id='agw7bn072730a';
no rows selected

Create the tables

SQL> -- create two tables with few rows for L=1 and lot of rows for L=15
SQL> create table DEMO2 (id constraint PK1 primary key,l) as select rownum,floor(log(2,rownum)) from xmltable('1 to 100000');
Table created.
 
SQL> create table DEMO1 as select * from DEMO2;
Table created.

Run the query


SQL> -- run a join for the few rows case
SQL> alter session set statistics_level=all;
Session altered.
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3

And here is the adaptive plan:

SQL> select * from table(dbms_xplan.display_cursor(null,null,format=>'allstats last +adaptive +outline'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------
SQL_ID agw7bn072730a, child number 0
-------------------------------------
select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
 
Plan hash value: 2870612662
 
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 203 | 183 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 203 | 183 |
|- * 2 | HASH JOIN | | 1 | 5882 | 3 |00:00:00.01 | 203 | 183 |
| 3 | NESTED LOOPS | | 1 | 5882 | 3 |00:00:00.01 | 203 | 183 |
| 4 | NESTED LOOPS | | 1 | | 3 |00:00:00.01 | 200 | 183 |
|- 5 | STATISTICS COLLECTOR | | 1 | | 3 |00:00:00.01 | 195 | 179 |
| * 6 | TABLE ACCESS FULL | DEMO1 | 1 | 5882 | 3 |00:00:00.01 | 195 | 179 |
| * 7 | INDEX UNIQUE SCAN | PK1 | 3 | | 3 |00:00:00.01 | 5 | 4 |
| 8 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 3 | 1 | 3 |00:00:00.01 | 3 | 0 |
|- 9 | TABLE ACCESS FULL | DEMO2 | 0 | 100K| 0 |00:00:00.01 | 0 | 0 |
-------------------------------------------------------------------------------------------------------------
 
Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
INDEX(@"SEL$58A6D7F6" "DEMO2"@"SEL$1" ("DEMO2"."ID"))
NLJ_BATCHING(@"SEL$58A6D7F6" "DEMO2"@"SEL$1")
USE_NL(@"SEL$58A6D7F6" "DEMO2"@"SEL$1")
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$58A6D7F6")
MERGE(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEMO1"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "DEMO1"@"SEL$1" "DEMO2"@"SEL$1")
END_OUTLINE_DATA
*/
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
6 - filter("DEMO1"."L"=1)
7 - access("DEMO1"."ID"="DEMO2"."ID")
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)

It’s an adaptive plan: the HASH JOIN was the initial choice, but the first execution activated the NESTED LOOPS.

SQL Baseline Capture

SQL> alter session set optimizer_capture_sql_plan_baselines=true;
Session altered.
 
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3
 
SQL> alter session set optimizer_capture_sql_plan_baselines=false;
Session altered.

Here is the SQL Baseline:

SQL> select sql_handle, plan_name, accepted, origin, created from dba_sql_plan_baselines;
 
SQL_HANDLE PLAN_NAME ACC ORIGIN CREATED
------------------------------ ---------------------------------------- --- ----------------------------- ---------------------------------------------------------------------------
SQL_4c1b404640b73a81 SQL_PLAN_4s6u08t0bffn1e47b6a4d YES AUTO-CAPTURE 28-MAY-16 09.13.04.000000 PM

and its plan:

SQL> select plan_table_output from dba_sql_plan_baselines,table(dbms_xplan.display_sql_plan_baseline(sql_handle, plan_name, format=>'+adaptive'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
SQL handle: SQL_4c1b404640b73a81
SQL text: select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_4s6u08t0bffn1e47b6a4d Plan id: 3833293389
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
 
Plan hash value: 740165205
 
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 108 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 16 | | |
| * 2 | HASH JOIN | | 5882 | 94112 | 108 (2)| 00:00:01 |
| * 3 | TABLE ACCESS FULL| DEMO1 | 5882 | 47056 | 54 (2)| 00:00:01 |
| 4 | TABLE ACCESS FULL| DEMO2 | 100K| 781K| 54 (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
3 - filter("DEMO1"."L"=1)
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)

Unfortunately, the baseline captured only the ‘initial’ plan with the HASH JOIN.
This is not what is documented in Maria Colgan’s paper:
SPM plan capture and Adaptive Plans: When automatic plan capture is enabled and a SQL statement that has an adaptive plan is executed, only the final plan used will be captured in the SQL plan baseline.

Run with baseline

SQL> alter session set statistics_level=all;
Session altered.
 
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3
 
SQL> select * from table(dbms_xplan.display_cursor(null,null,format=>'allstats last +adaptive'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID agw7bn072730a, child number 1
-------------------------------------
select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
 
Plan hash value: 740165205
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 390 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 390 | | | |
|* 2 | HASH JOIN | | 1 | 5882 | 3 |00:00:00.01 | 390 | 2545K| 2545K| 826K (0)|
|* 3 | TABLE ACCESS FULL| DEMO1 | 1 | 5882 | 3 |00:00:00.01 | 195 | | | |
| 4 | TABLE ACCESS FULL| DEMO2 | 1 | 100K| 100K|00:00:00.01 | 195 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
3 - filter("DEMO1"."L"=1)
 
Note
-----
- SQL plan baseline SQL_PLAN_4s6u08t0bffn1e47b6a4d used for this statement

This confirms that the SQL Plan Baseline forces the initial HASH JOIN plan. It’s a bug that should be fixed in 12.2, so for the moment be very careful when you want to fix an adaptive plan with SQL Plan Baselines: your goal is to stabilize on the optimal plan, but the result may be a regression to a bad one.
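One possible workaround, sketched here under the assumption that the child cursor which ran with the final nested loops shape is still in the cursor cache (the sql_id and plan hash value below are the ones displayed earlier in this post), is to load that plan manually with DBMS_SPM instead of relying on auto-capture:

set serveroutput on
declare
  n pls_integer;
begin
  -- load the plan of the cached cursor identified by sql_id and plan hash value
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'agw7bn072730a',
         plan_hash_value => 2870612662);
  dbms_output.put_line('plans loaded: '||n);
end;
/

Then check with display_sql_plan_baseline (the same query as above) whether the stored plan really shows the nested loops before you drop or disable the auto-captured HASH JOIN plan; given the behavior described here, that verification is the important part.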

 

The article Adaptive Plans and SQL Baselines appeared first on Blog dbi services.

ORA-01775: looping chain of synonyms

Tue, 2016-06-21 07:13

This error message is misleading. You may encounter it when you expect ORA-00942: table or view does not exist. Let’s explain.

I’m connected as SCOTT and create a PUBLIC SYNONYM for an object that does not exist:

SQL> create public synonym MONEY for NOTHING;
Synonym created.

No error message.
Only when I query it do I get an error message telling me that there is no table or view behind it:

SQL> select * from MONEY;
select * from MONEY
*
ERROR at line 1:
ORA-00942: table or view does not exist

Let’s do the same but call it BONUS instead of MONEY:

SQL> create public synonym BONUS for NOTHING;
Synonym created.
 
SQL> select * from BONUS;
no rows selected

No error here. Why? Because I have a table called BONUS. So the name is resolved to the table and the synonym is not even tried.

I’ll now drop that synonym and create it for the table BONUS. Same name for the public synonym and for the table.

SQL> drop public synonym BONUS;
Synonym dropped.
 
SQL> create public synonym BONUS for BONUS;
Synonym created.

As user SCOTT, when I query BONUS the name is resolved as the table:

SQL> show user
USER is "SCOTT"
SQL> select * from BONUS;
no rows selected

As another user, when I query BONUS the name is resolved as the synonym, which finally reads SCOTT.BONUS:

SQL> show user
USER is "SCOTT"
SQL> select * from BONUS;
no rows selected

In 12c it is easy to see the final query:

SQL> variable c clob
SQL> exec dbms_utility.expand_sql_text('select * from BONUS',:c);
PL/SQL procedure successfully completed.
 
SQL> print c
 
C
----------------------------------------------------------------------------------------------------------
SELECT "A1"."ENAME" "ENAME","A1"."JOB" "JOB","A1"."SAL" "SAL","A1"."COMM" "COMM" FROM "SCOTT"."BONUS" "A1"

But now, what happens when we drop the table?

SQL> drop table SCOTT.BONUS;
Table dropped.

Do you expect an ORA-00942: table or view does not exist?

SQL> select * from BONUS;
select * from BONUS
*
ERROR at line 1:
ORA-01775: looping chain of synonyms

Here is the ‘looping chain of synonyms’. I ask for BONUS. Name resolution first checks for an object in my schema, but there is none:

SQL> select object_type from user_objects where object_name='BONUS';
no rows selected

Then it looks for a public synonym, and there is one:

SQL> select object_type from all_objects where owner='PUBLIC' and object_name='BONUS';
 
OBJECT_TYPE
-----------------------
SYNONYM

So we check what it is a synonym for:

SQL> select table_owner,table_name from all_synonyms where owner='PUBLIC' and synonym_name='BONUS';
 
TABLE_OWNER TABLE_NAME
------------ ------------
SCOTT BONUS

And this is where it gets interesting. Despite the column names that include ‘TABLE’, a synonym can reference any kind of object. So it is not just a matter of replacing the synonym with ‘SCOTT.BONUS’, which would raise an ORA-00942. Name resolution of BONUS is done again, in the context of the user SCOTT. Something similar to:

SQL> alter session set current_schema=SCOTT;
Session altered.
SQL> select * from BONUS;

And then, what do you expect from that? There is no table named BONUS, but there is a public synonym… and you’re back to the beginning:

select * from BONUS
*
ERROR at line 1:
ORA-01775: looping chain of synonyms

Most of the time, you don’t have synonyms that reference other synonyms, so you don’t really have a ‘chain’ of synonyms. But when the only object with that name in the namespace is the synonym itself, it becomes a self-referencing loop. So if you see ORA-01775, check whether the referenced object is missing.
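A quick way to spot such dangling public synonyms is to compare DBA_SYNONYMS with DBA_OBJECTS. This is only a sketch (it skips synonyms over database links, which would otherwise show up as false positives):

select s.synonym_name, s.table_owner, s.table_name
from dba_synonyms s
where s.owner = 'PUBLIC'
  and s.db_link is null
  and not exists (
        select 1
        from dba_objects o
        where o.owner = s.table_owner
          and o.object_name = s.table_name
      );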

 

The article ORA-01775: looping chain of synonyms appeared first on Blog dbi services.

When changing CURSOR_SHARING takes effect?

Mon, 2016-06-20 13:15

I usually don’t advise setting CURSOR_SHARING=FORCE, but imagine your application requires it, you forgot it (or tried to do without it) during a migration, and now everything is slow. You want to change it, but when does it take effect? At the next execution? At the next parse? Only for new sessions?

EXACT

I have the default value, where the parent cursor is shared only when the sql_text is exactly the same:

SQL> show parameter cursor_sharing
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string EXACT

And I check with a query that the predicate is not changed:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

FORCE

I change it at the system (= instance) level:

SQL> alter system set cursor_sharing=force;
System altered.
 
SQL> show parameter cursor_sharing
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string FORCE

I tested without session cached cursors:

SQL> alter session set session_cached_cursors=0;
Session altered.

and even from another session

SQL> connect / as sysdba
Connected.

But the cursor still has the literal in its predicate:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

No invalidation, no new cursor. Same old statement.

FLUSH SHARED_POOL

Only when I flush the shared pool is the statement parsed again with the literal replaced:

SQL> alter system flush shared_pool;
System altered.
 
SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"=:SYS_B_0)

If you fear a hard parse fest, you can flush specific cursors. I’ve documented the procedure in a previous post.
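For reference, here is a minimal sketch of that procedure with DBMS_SHARED_POOL.PURGE. The address/hash_value pair in the second command is only a placeholder: use the values returned by the first query.

-- identify the parent cursor to purge
select address, hash_value, sql_text
from v$sqlarea
where sql_text like 'select * from dual where dummy=%';

-- purge that single cursor ('C' stands for cursor)
exec dbms_shared_pool.purge('000000008A1B2C3D,1234567890','C')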

Autotrace

As a side note, do not rely on autotrace for that

SQL> set autotrace on explain
SQL> select * from dual where dummy='X';
 
D
-
X
 
Execution Plan
----------------------------------------------------------
Plan hash value: 272002086
 
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DUMMY"='X')

Just one more thing that is special with autotrace: it relies on EXPLAIN PLAN, which does not replace literals, so it does not necessarily show what is actually executed…

Conclusion

I don’t know exactly how cursor_sharing=force is managed internally. I thought that the literal replacement occurred before searching for the parent cursor. Don’t hesitate to comment here if you know the ‘why’ behind this behavior. My goal here was just to test what has to be done for a cursor_sharing change to take effect immediately.

 

The article When changing CURSOR_SHARING takes effect? appeared first on Blog dbi services.

ASM iostats

Mon, 2016-06-20 12:15

A few screenshots and a link here. Sysadmins do not like ASM because they don’t have the tools they like to manage the disks. For example, they don’t want to run SQL queries to check performance, and asmcmd iostat is quite limited. Here is a nice way to get I/O statistics easily from the command line.

The Perl script is from Bertrand Drouvot (do not miss his Twitter profile picture) and can easily be downloaded from his blog:
https://bdrouvot.wordpress.com/2013/10/04/asm-metrics-are-a-gold-mine-welcome-to-asm_metrics-pl-a-new-utility-to-extract-and-to-manipulate-them-in-real-time/

It only runs queries on the ASM instance, so there is no risk.
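The script samples the cumulative ASM I/O views. Here is a minimal sketch of the kind of query involved, assuming V$ASM_DISK_IOSTAT; the counters are cumulative, so the script computes deltas over a sampling interval:

select instname, dbname, group_number,
       sum(reads) as reads, sum(writes) as writes,
       sum(bytes_read) as bytes_read, sum(bytes_written) as bytes_written
from v$asm_disk_iostat
group by instname, dbname, group_number
order by instname, dbname, group_number;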

In order to show the relevance, I took screenshots of this script and of the XtremIO console on a system where all the ASM disks, and only them, are on the XtremIO brick, so you can compare the statistics from the storage array and from the ASM instance.

Bandwidth

ASMIOScreenshot 2016-06-20 15.18.55

IOPS

ASMIOScreenshot 2016-06-20 15.19.08

Latency

ASMIOScreenshot 2016-06-20 15.19.17

 

The article ASM iostats appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: distributed availability groups

Mon, 2016-06-20 10:49

This time, I will talk about distributed availability groups. What’s that? In short, a group of availability groups. Sounds good, right? But in which cases may we need such an architecture? First, distributed availability groups run on top of two distinct availability groups, meaning that they reside on two distinct WSFCs, each with its own quorum and voting management. Referring to the Microsoft documentation here, we might think that this new feature will mainly be used in DR scenarios, but I’m not sure I understand Microsoft’s point in this sentence:

You want the data to continually replicate to the DR site, but you don’t want a potential network problem or issue at the DR site to bring down your primary site

Indeed, we don’t need this new feature to prevent a DR site or network failure from bringing down the primary site. We can simply exclude the cluster nodes at the DR site from voting. Maybe I missed something here; I will probably come back to this when I get more information.

Moreover, I cannot imagine my customers using an additional cluster only for DR purposes. Even supposing that the DR site is costless from a SQL Server license perspective, we would need to maintain a “complex” architecture (WSFC) just for that.

After discussing with another French MVP, Christophe Laporte (@Conseilit), we began to draw some pictures of potential scenarios where using a DAG may be a good solution. Firstly, let’s take a customer that has many applications. Some of them run on the primary site and others on the DR site because there is high latency between the two datacenters. In this specific context, you may implement one availability group in each datacenter for HA purposes and add a distributed availability group for DR.

blog 96 - 7 - DAG scenario

Let’s now take a big company that bought another business, including an existing datacenter infrastructure with availability groups used by the acquired company. They may want to introduce HA + DR by using a distributed availability group across both sides without performing any big changes. Of course, we have to keep in mind the cost of such an architecture…

Probably another advantage of distributed availability groups is that the primary replica only has to send log blocks to the primary replica of the other availability group. Let’s imagine a traditional implementation with 2 synchronous replicas at the primary site and 4 or maybe 6 asynchronous replicas at the DR site used for scale-out reporting purposes. In this case, even when all the read-only replicas are asynchronous, the failure of one of them may impact the primary replica, because its transaction log won’t be truncated by log backups until we fix the issue.

blog 96 - 8 - DAG scenario

We may potentially have up to 6 replicas that can lead to transaction log management issues in this specific context. Now let’s change the game by including all of the read-only replicas in a dedicated availability group at the DR site, itself included in a DAG. The failure of one read-only replica at the DR site then impacts only the primary of the DR availability group.

 

blog 96 - 9 - DAG scenario

I believe that other scenarios are possible and we will discover some of them through experience. Please feel free to comment or add your thoughts :)

OK, it’s time to implement our first DAG. In my lab environment I added two virtual machines and then configured another WSFC that includes them, so I finally get an environment with two WSFCs of two nodes each. The first one is already installed and used for direct seeding (see the direct seeding post). We will also leverage direct seeding when implementing the DAG, in order to replicate the WideWorldImporters database between the two availability groups.

But before installing the DAG itself, let’s install the second availability group.

:CONNECT WIN20123SQL16\SQL16

Use master;

-- primary replica
CREATE AVAILABILITY GROUP [testGrp2]   
FOR    
REPLICA ON 
N'WIN20123SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20123SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = MANUAL,  
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20124SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE =MANUAL,   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC);   
GO  

ALTER AVAILABILITY GROUP [testGrp2]    
ADD LISTENER 'lst-testgrp2' 
( 
	WITH IP ( ('192.168.5.121', '255.255.255.0') ) , 
	PORT = 1433);    
GO 

:CONNECT WIN20124SQL16\SQL16

USE master;

-- secondary replica
ALTER AVAILABILITY GROUP [testGrp2] JOIN   
ALTER AVAILABILITY GROUP [testGrp2] GRANT CREATE ANY DATABASE  
GO

And finally, the distributed availability group itself:

:CONNECT WIN20121SQL16\SQL16

USE master;

-- Primary cluster 
--DROP AVAILABILITY GROUP [distributedtestGrp]  
CREATE AVAILABILITY GROUP [distributedtestGrp]  
WITH (DISTRIBUTED)   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
	AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
	FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO   

:CONNECT WIN20123SQL16\SQL16

USE master;

-- secondary cluster
ALTER AVAILABILITY GROUP [distributedtestGrp]   
JOIN   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO

You may notice the special DISTRIBUTED syntax. In addition, the replicas have been replaced by the listeners of each availability group.

blog 96 - 13 - DAG SSMS

One interesting thing here is that we can’t manage the DAG from SSMS: all options are greyed out. So let’s have a look at some DMVs.

select 
	ag.name as group_name,
	ag.is_distributed,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

 

blog 96 - 12 - DMV config

For a DAG, the replica names are the names of the underlying availability groups.

We can get information about direct seeding between the two availability groups in the same manner as previously:

 

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

blog 96 - 11 - WideWorldImporters

 

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

blog 96 - 10 - WideWorldImporters

OK, let’s perform a basic test that consists of creating a table and inserting data into the WideWorldImporters database through each listener.

:CONNECT lst-testgrp

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

:CONNECT lst-testgrp2

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

 

Connecting to lst-testgrp…
(1 row(s) affected)
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
Msg 3906, Level 16, State 2, Line 14
Failed to update database “WideWorldImporters” because the database is read-only.
Msg 3906, Level 16, State 2, Line 18
Failed to update database “WideWorldImporters” because the database is read-only.
Disconnecting connection from lst-testgrp2…

OK, as expected, the availability group on the DR side is in read-only mode. I’m not able to create or update anything from there.

Let’s perform a last test after switching over the DAG from testGrp to testGrp2. From the primary availability group:

ALTER AVAILABILITY GROUP [distributedtestGrp] FORCE_FAILOVER_ALLOW_DATA_LOSS

 

blog 96 - 13 - DAG SSMS after failover
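From T-SQL, a quick way to confirm the new roles is to query the replica state DMVs on a replica of each WSFC. This is only a sketch: for a distributed availability group, the replica rows represent the underlying availability groups.

select ag.name as dag_name,
       ar.replica_server_name as underlying_ag,
       ars.role_desc,
       ars.synchronization_health_desc
from sys.availability_groups as ag
join sys.availability_replicas as ar
    on ar.group_id = ag.group_id
join sys.dm_hadr_availability_replica_states as ars
    on ars.replica_id = ar.replica_id
where ag.is_distributed = 1;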

 

This time, I cannot insert data from the first availability group, testGrp:

Connecting to lst-testgrp…
Msg 976, Level 14, State 1, Line 2
The target database, ‘WideWorldImporters’, is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group.  For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
(1 row(s) affected)
Disconnecting connection from lst-testgrp2…

In this blog post we have introduced two new features shipped with SQL Server 2016 availability groups. At first glance, they seem to be pretty cool features and will extend the scope of availability group capabilities. I think we will see the pros and cons over time, during further tests and implementations at customer sites. If so, more blog posts will be coming soon.

Stay tuned!

 

The article SQL Server 2016 AlwaysOn: distributed availability groups appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding

Mon, 2016-06-20 10:26

In this blog post I would like to talk about two new features shipped with SQL Server 2016 for AlwaysOn availability groups. The first one concerns the new direct seeding capability and the second one introduces distributed availability groups (DAG). I will talk about the second one in the next blog post.

First of all, let’s talk about direct seeding. At first glance, this feature seems very interesting because it greatly simplifies the process of adding a highly available database. Indeed, with previous versions, adding a database to an availability group required some extra steps from the database administrator in order to make it highly available, like backup/restore or initializing the replication process. Let’s say a software editor wants to install or add a database (SharePoint, for instance).

Enabling automatic seeding for the concerned replicas reduces the amount of work of adding databases by automating the initialization process a little bit more. We just have to add the database to the availability group and that’s it: no extra backup/restore tasks, no need to configure a file share for backup/restore, and no manual initialization. Of course, the CREATE and ALTER AVAILABILITY GROUP syntax includes a new per-replica parameter named SEEDING_MODE that has two settings: MANUAL and AUTOMATIC. The former means that we will use the legacy method to initialize databases on the secondaries (backup/restore) and the latter uses the new automated initialization method, which consists in seeding the database data across the replication network.
Let’s have a quick demo of this new feature. I will configure a basic availability group with two replicas (WIN20121SQL16\SQL16 and WIN20122SQL16\SQL16). You may notice the new parameter SEEDING_MODE = AUTOMATIC, meaning that we will use the new automated method for initializing databases for this availability group. You may also notice that we no longer need to create a “dummy” database before creating an availability group.

On the primary:

-- primary replica
CREATE AVAILABILITY GROUP [testGrp]   
FOR   
REPLICA ON 
N'WIN20121SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20122SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20122SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC);   
GO

System views and DMVs have been updated accordingly. For example, sys.availability_replicas provides a new seeding_mode column.

select 
	ag.name as group_name,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

blog 96 - 1 - availability group and replicas state

Let’s complete the configuration of the availability group by granting the availability group itself the permission to create databases on the secondaries.

ALTER AVAILABILITY GROUP [testGrp] JOIN   
ALTER AVAILABILITY GROUP [testGrp] GRANT CREATE ANY DATABASE  
GO

At this point we just have to add the new WideWorldImporters database to the testGrp availability group and our job is over!

-- primary replica add database WideWorldImporters to testGrp
ALTER AVAILABILITY GROUP [testGrp] ADD DATABASE [WideWorldImporters];
GO

The system will then seed the database data to the secondary. We can get information about seeding progress from two new DMVs: sys.dm_hadr_automatic_seeding and sys.dm_hadr_physical_seeding_stats.

The sys.dm_hadr_automatic_seeding DMV gives information about successful or failed database seedings and the corresponding error messages. The sys.dm_hadr_physical_seeding_stats DMV provides information about currently running seeding operations, like the estimated completion time and related statistics about I/O and network usage.

Here is what I found after adding the database to the availability group:

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

blog 96 - 2 - seeding sessions info

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

 

blog 96 - 3 - seeding progress info

The WideWorldImporters database is not that big by default (roughly 455 MB). The estimated duration to seed the database is approximately 1'14''.

Let’s check this with a little math using the transfer rate, estimated at ~42 MB/s (the transfer_rate_bytes_per_second column), and the total size of data to transfer: 455 / 42 ~= 10s to transfer all the data.

Let’s now have a look at the SQL Server error log on the secondary, where we can see information related to the data seeding operation.

blog 96 - 6 - import error log secondary 2

The seeding operation occurred between 19:15:53 and 19:16:08, so approximately 15 seconds, which is not so far from our previous math. The estimated duration based on the start and estimated completion times seems less accurate than the one derived from the transfer throughput, but I need to perform further tests and investigations before coming to any conclusion.

In short, a pretty cool feature, but my guess is that direct seeding may not be suitable for all scenarios. One customer example that comes to mind is big databases that run inside availability groups. I had the opportunity to migrate SAP databases (~1 TB) to SQL Server 2014 and availability groups: direct seeding would not be used in this case. Indeed, direct seeding does not compress the data stream by default (you can turn on compression with trace flag 9567) and we can easily imagine that seeding all the data across the network may take a long time. We can change this default behaviour with the trace flag, but at the cost of high CPU consumption, which is probably not a good idea if we plan to use the application immediately after adding the SAP database to the availability group. In addition, log stream compression may hurt performance with a heavy OLTP workload because it introduces latency by design. There is a tradeoff to find here… Therefore, the backup/restore process seems to be the best option, by playing with optimization techniques like compression, increasing the number of backup media files, or changing the MAXTRANSFERSIZE and BUFFERCOUNT parameter values.
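To illustrate that last point, here is a sketch of a striped, compressed backup with tuned transfer parameters; the database name and paths are placeholders only:

BACKUP DATABASE [BigSAPDatabase]
TO  DISK = N'\\backupshare\sap\BigSAPDatabase_1.bak',
    DISK = N'\\backupshare\sap\BigSAPDatabase_2.bak',
    DISK = N'\\backupshare\sap\BigSAPDatabase_3.bak',
    DISK = N'\\backupshare\sap\BigSAPDatabase_4.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,
     BUFFERCOUNT = 64,
     STATS = 5;

Striping over several files and increasing MAXTRANSFERSIZE/BUFFERCOUNT can shorten the initialization window considerably compared to seeding a terabyte over the replication network.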

Another scenario that comes to mind concerns migrations with thousands of databases at the same time. What is the fastest way to migrate all these databases: adding them to the availability group and letting the system seed the data across the network to each secondary, or copying the database files to each secondary and choosing to skip initial data synchronization? At this point I need to investigate further to answer.

Another important thing is that direct seeding is not part of the availability group wizard in the GUI. You have probably noticed that I don’t use the wizard in this blog post, and this is intended behaviour. A Connect item has already been created by Brent Ozar here. Please feel free to vote!

See you!

 

 

The article SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding appeared first on Blog dbi services.

12c online datafile move and resize

Sat, 2016-06-18 12:24

I’ve described in previous posts how the 12c online datafile move works: your session process does the copy and tells the other writers (dbwr and direct-path inserts) to write blocks to both files (the old one, and the new one for blocks that have already been copied). Readers read the old file until the copy is completed.
The target file is created at the start of the move with the same size, and is then filled as the copy phase runs. What happens if the size of the source file increases in the meantime?

I run a datafile move from one session. The datafile size is 14.6 GB.

SQL> alter database move datafile '/u02/app/oracle/oradata/CDB/sysaux011460.dbf' to '/u03/app/oracle/oradata/CDB/sysaux014244.dbf';

We see the start of the operation in the alert.log:

2016-06-18 14:23:09.254000 +00:00
Moving datafile /u02/app/oracle/oradata/CDB/sysaux011460.dbf (3) to /u03/app/oracle/oradata/CDB/sysaux014244.dbf
2016-06-18 14:23:10.600000 +00:00
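While the copy phase runs, you can follow its progress from another session. This is a sketch, assuming the operation is reported in V$SESSION_LONGOPS as it usually is for online datafile moves:

select sid, serial#, opname, sofar, totalwork,
       round(100*sofar/totalwork,1) as pct_done, time_remaining
from v$session_longops
where opname like 'Online data file move%'
  and sofar < totalwork;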

manual resize

If I want to resize the datafile manually, I can’t:

SQL> alter database datafile '/u02/app/oracle/oradata/CDB/sysaux011460.dbf' resize 15G
*
ERROR at line 1:
ORA-63000: operation disallowed: data file
/u02/app/oracle/oradata/CDB/sysaux011460.dbf is being moved

automatic resize

So what happens if the datafile is autoextensible and I add data to it? I ran some ‘allocate extent’ commands and inserts, and the resize occurred:


2016-06-18 14:23:10.600000 +00:00
Resize operation completed for file# 3, old size 15319040K, new size 15329280K
2016-06-18 14:23:12.126000 +00:00
Resize operation completed for file# 3, old size 15329280K, new size 15400960K
2016-06-18 14:23:13.836000 +00:00

and let that continue

Resize operation completed for file# 3, old size 18513920K, new size 18585600K
2016-06-18 14:27:08.730000 +00:00
Resize operation completed for file# 3, old size 18585600K, new size 18657280K
2016-06-18 14:27:11.079000 +00:00
2016-06-18 14:28:03.905000 +00:00
Resize operation completed for file# 3, old size 18657280K, new size 18728960K
2016-06-18 14:28:05.179000 +00:00
Resize operation completed for file# 3, old size 18872320K, new size 18944000K

until the filesystem is full

2016-06-18 14:28:28.647000 +00:00
ORA-1653: unable to extend table SYS.TEST01 by 128 in tablespace SYSAUX
ORA-1653: unable to extend table SYS.TEST01 by 128 in tablespace SYSAUX
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_26342.trc:
ORA-27061: waiting for async I/Os failed
Linux-x86_64 Error: 28: No space left on device
Additional information: 4294967295
Additional information: 1048576
ORA-1653: unable to extend table SYS.TEST02 by 128 in tablespace SYSAUX
ORA-1653: unable to extend table SYS.TEST02 by 1024 in tablespace SYSAUX

My datafile is now 18GB.

file size

I’m checking the size for both files here:

[oracle@CDB]$ ls -l /u0?/app/oracle/oradata/CDB/sysaux*
-rw-r----- 1 oracle oinstall 19398664192 Jun 18 14:38 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
-rw-r----- 1 oracle oinstall 15686705152 Jun 18 14:38 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

The source file is 19398664192 bytes = 18 GB, which is exactly what I got in the last resize message in the alert.log, but the target one is still 14.6 GB, the size at which it was created at the beginning of the move. The double write occurs only for blocks that have already been copied, and the copy has not reached the 14.6 GB mark yet.

We can see that from the disk usage: ‘ls’ displays the declared size but ‘du’ counts the actual size, i.e. only the blocks that have been written so far.


[oracle@CDB]$ du -k /u0?/app/oracle/oradata/CDB/sysaux*
18944012 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
6894604 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

The target file has only 6894604 KB = 6.5 GB so far, but it keeps increasing:

[oracle@CDB]$ du -k /u0?/app/oracle/oradata/CDB/sysaux*
18944012 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
7013388 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

Note: I verified that if the move datafile session is suspended, the target file disk usage does not increase even when we have activity on the tablespace.

The move continues, and at the point where it reaches a block beyond the initial size, the target file is resized:


[oracle@CDB]$ ls -l /u0?/app/oracle/oradata/CDB/sysaux*
-rw-r----- 1 oracle oinstall 19398664192 Jun 18 15:18 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
-rw-r----- 1 oracle oinstall 19398664192 Jun 18 15:18 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

And finally, the move is completed without any problem:


Sat Jun 18 15:52:57 2016
Move operation committed for file /u03/app/oracle/oradata/CDB/sysaux011460.dbf

Conclusion: no problem

Online datafile move is compatible with autoextensible datafile resize, without any problem. Of course, for performance reasons, you should not plan such a move at the same time as a large load, but it works. Actually, this feature is very reliable: no locks, efficient, and cleanup is done properly even in case of a crash (the information is in the controlfile).

 

The article 12c online datafile move and resize appeared first on Blog dbi services.
