Yann Neuhaus

dbi services technical blog

12cR2 RMAN> REPAIR

Sun, 2017-04-23 15:39

Do you know the RMAN Recovery Advisor? It detects problems, and then you run:

RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure;

You need to have a detected failure. If it was not detected automatically, you can run a Health Check (see https://blog.dbi-services.com/oracle-12c-rman-list-failure-does-not-show-any-failure-even-if-there-is-one/). In 12.2 you can run the repair directly, by specifying what you want to repair.

Syntax

There is no online help in RMAN, but you can list which keywords are expected by supplying a wrong one:
RMAN> repair xxx;
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "failure"
RMAN-01008: the bad identifier was: xxx
RMAN-01007: at line 1 column 8 file: standard input

This is 12.1.0.2 where the only option is REPAIR FAILURE. In 12.2 we have a lot more:


RMAN> repair xxx
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "database, database root, datafile, failure, pluggable, tablespace, ("
RMAN-01008: the bad identifier was: xxx
RMAN-01007: at line 1 column 8 file: standard input

When you know what is broken, you can repair it without having to know what to restore and what to recover. You can repair:

  • database: the whole database
  • database root: the CDB$ROOT container, which means all its tablespaces
  • pluggable database: all the tablespaces of the specified PDB
  • a specific datafile
Repair pluggable database

I corrupt one datafile from PDB01:


RMAN> host "> /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_system_d8k2t4wj_.dbf";
host command complete

And I repair the pluggable database:


RMAN> repair pluggable database PDB01;
 
Starting restore at 23-APR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=203 device type=DISK
Executing: alter database datafile 21 offline
Executing: alter database datafile 22 offline
Executing: alter database datafile 23 offline
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00021 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_system_d8k2t4wj_.dbf
channel ORA_DISK_1: restoring datafile 00022 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_sysaux_d8k2t4wn_.dbf
channel ORA_DISK_1: restoring datafile 00023 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_users_d8kbmy6w_.dbf
channel ORA_DISK_1: reading from backup piece /u90/fast_recovery_area/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/backupset/2017_04_23/o1_mf_nnndf_B_dht2d4ow_.bkp
channel ORA_DISK_1: piece handle=/u90/fast_recovery_area/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/backupset/2017_04_23/o1_mf_nnndf_B_dht2d4ow_.bkp tag=B
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 23-APR-17
 
Starting recover at 23-APR-17
using channel ORA_DISK_1
 
starting media recovery
media recovery complete, elapsed time: 00:00:00
 
Executing: alter database datafile 21 online
Executing: alter database datafile 22 online
Executing: alter database datafile 23 online
Finished recover at 23-APR-17

The good thing is that it automatically restores and recovers the datafiles with only one command.
But we see here that all datafiles have been restored. As I knew that only one datafile was corrupted, it would have been faster to use REPAIR DATAFILE for it.
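
Based on the keyword list shown earlier, the single-datafile variant would look something like this (a sketch I did not run in this test; the datafile number 21 comes from the restore output above):

RMAN> repair datafile 21;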

However, doing the same and calling the Recovery Advisor instead is not better: it advises to:

1 Restore and recover datafile 21; Restore and recover datafile 23; Recover datafile 22

When dealing with recovery, you need to understand how it works, what the scope of the failure was, and how to repair it. The advisors and automatic actions can help, but they do not remove the need to understand.

 


Data Pump LOGTIME, DUMPFILE, PARFILE, DATA_PUMP_DIR in 12c

Sat, 2017-04-22 16:28

Data Pump is a powerful way to save data or metadata, move it, migrate it, etc. Here is an example showing a few new features in 12cR1 and 12cR2.

New parameters

Here is the result of a diff of ‘impdp help=y’ between 12.1 and 12.2:
[Screenshot: CaptureDataPump122]

But for this post, I’ll show the parameters that existed in 12.1 but have been enhanced in 12.2

LOGTIME

This is a 12.1 feature. The parameter LOGTIME=ALL displays the system timestamp in front of the messages, both on screen and in the logfile. The default is NONE, and you can also set it to STATUS for screen only or LOGFILE for logfile only.


[oracle@vmreforanf12c01 tmp]$ expdp system/manager@PDB01 parfile=impdp.par logfile=impdp.log
 
Export: Release 12.2.0.1.0 - Production on Sat Apr 22 22:20:22 2017
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
22-APR-17 22:20:29.671: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@PDB01 parfile=impdp.par logfile=impdp.log
22-APR-17 22:20:35.505: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
22-APR-17 22:20:36.032: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
22-APR-17 22:20:36.407: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
22-APR-17 22:20:43.586: Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
22-APR-17 22:20:44.126: Processing object type SCHEMA_EXPORT/USER
22-APR-17 22:20:44.199: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
22-APR-17 22:20:44.243: Processing object type SCHEMA_EXPORT/ROLE_GRANT
22-APR-17 22:20:44.296: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
22-APR-17 22:20:44.760: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
22-APR-17 22:20:53.706: Processing object type SCHEMA_EXPORT/TABLE/TABLE
22-APR-17 22:20:59.699: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
22-APR-17 22:21:00.712: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
22-APR-17 22:21:03.494: . . exported "SCOTT"."DEMO" 8.789 KB 14 rows
22-APR-17 22:21:03.651: . . exported "SCOTT"."EMP" 8.781 KB 14 rows
22-APR-17 22:21:03.652: . . exported "SCOTT"."DEPT" 6.031 KB 4 rows
22-APR-17 22:21:03.654: . . exported "SCOTT"."SALGRADE" 5.960 KB 5 rows
22-APR-17 22:21:03.656: . . exported "SCOTT"."BONUS" 0 KB 0 rows
22-APR-17 22:21:04.532: Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
22-APR-17 22:21:04.558: ******************************************************************************
22-APR-17 22:21:04.559: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
22-APR-17 22:21:04.569: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/SCOTT_20170422.01.dmp
22-APR-17 22:21:04.622: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Apr 22 22:21:04 2017 elapsed 0 00:00:41

You will always appreciate finding timestamps in the log file. But remember that your import/export is processed by multiple workers, so it is difficult to estimate the duration between the different lines. I explained this in https://blog.dbi-services.com/datapump-processing-object-type-misleading-messages/

DUMPFILE

You can see that my DUMPFILE also contains the timestamp in the file name. This is possible in 12.2 with the %T substitution variable. Here is my PARFILE, where DUMPFILE mentions %T (in addition to %U for the case of multiple files):

[oracle@vmreforanf12c01 tmp]$ cat impdp.par
schemas=SCOTT
logtime=all
dumpfile=SCOTT_%T.%U.dmp
reuse_dumpfiles=yes
filesize=1M

PARFILE parameters

I don’t usually use a PARFILE and prefer to pass all parameters on the command line, even if this requires escaping a lot of quotes, because I like to ship the log file with the DUMPFILE. Before 12.2, the LOGFILE mentioned only the parameters passed on the command line. In 12.2, the PARFILE parameters are written to the LOGFILE as well (but not to the screen):


;;;
Export: Release 12.2.0.1.0 - Production on Sat Apr 22 22:20:22 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
22-APR-17 22:20:24.899: ;;; **************************************************************************
22-APR-17 22:20:24.901: ;;; Parfile values:
22-APR-17 22:20:24.903: ;;; parfile: filesize=1M
22-APR-17 22:20:24.905: ;;; parfile: reuse_dumpfiles=Y
22-APR-17 22:20:24.907: ;;; parfile: dumpfile=SCOTT_%T.%U.dmp
22-APR-17 22:20:24.909: ;;; parfile: logtime=all
22-APR-17 22:20:24.911: ;;; parfile: schemas=SCOTT
22-APR-17 22:20:24.913: ;;; **************************************************************************
22-APR-17 22:20:29.654: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@PDB01 parfile=impdp.par logfile=impdp.log
22-APR-17 22:20:35.469: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
22-APR-17 22:20:36.032: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
22-APR-17 22:20:36.407: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
22-APR-17 22:20:43.535: Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
22-APR-17 22:20:44.126: Processing object type SCHEMA_EXPORT/USER
22-APR-17 22:20:44.199: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
22-APR-17 22:20:44.243: Processing object type SCHEMA_EXPORT/ROLE_GRANT
22-APR-17 22:20:44.296: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
22-APR-17 22:20:44.760: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
22-APR-17 22:20:53.620: Processing object type SCHEMA_EXPORT/TABLE/TABLE
22-APR-17 22:20:59.699: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
22-APR-17 22:21:00.712: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
22-APR-17 22:21:03.494: . . exported "SCOTT"."DEMO" 8.789 KB 14 rows
22-APR-17 22:21:03.651: . . exported "SCOTT"."EMP" 8.781 KB 14 rows
22-APR-17 22:21:03.652: . . exported "SCOTT"."DEPT" 6.031 KB 4 rows
22-APR-17 22:21:03.654: . . exported "SCOTT"."SALGRADE" 5.960 KB 5 rows
22-APR-17 22:21:03.656: . . exported "SCOTT"."BONUS" 0 KB 0 rows
22-APR-17 22:21:04.532: Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
22-APR-17 22:21:04.558: ******************************************************************************
22-APR-17 22:21:04.559: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
22-APR-17 22:21:04.569: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/SCOTT_20170422.01.dmp
22-APR-17 22:21:04.621: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Apr 22 22:21:04 2017 elapsed 0 00:00:41

Now the LOGFILE shows all export information. Only the password is hidden.

DATA_PUMP_DIR

In 12.1 multitenant, you cannot use the default DATA_PUMP_DIR. It is there, but you just cannot use it, implicitly or explicitly. With my PARFILE above, where DIRECTORY is not mentioned, I would get the following error:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid

This means that there is no default possible and we need to mention DIRECTORY.
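
The workaround in 12.1 is to create and grant a directory object and reference it explicitly. A minimal sketch, with an arbitrary name and path:

SQL> create directory MY_DP_DIR as '/u90/dpdump';
SQL> grant read, write on directory MY_DP_DIR to system;

and then add directory=MY_DP_DIR to the PARFILE or the command line.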

But in 12.2 it worked, going to /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/ which is the default DATA_PUMP_DIR:

SYSTEM@PDB01 SQL> select * from dba_directories;
 
OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID
----- -------------- -------------- -------------
SYS TSPITR_DIROBJ_DPDIR /u90/tmp_data_restore 3
SYS PREUPGRADE_DIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin 1
SYS XMLDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/xml 1
SYS ORA_DBMS_FCP_LOGDIR /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs 1
SYS ORA_DBMS_FCP_ADMINDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin 1
SYS ORACLE_OCM_CONFIG_DIR /u01/app/oracle/product/12.2.0/dbhome_1/ccr/state 1
SYS ORACLE_OCM_CONFIG_DIR2 /u01/app/oracle/product/12.2.0/dbhome_1/ccr/state 1
SYS XSDDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/xml/schema 1
SYS DATA_PUMP_DIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2 1
SYS OPATCH_INST_DIR /u01/app/oracle/product/12.2.0/dbhome_1/OPatch 1
SYS OPATCH_SCRIPT_DIR /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch 1
SYS OPATCH_LOG_DIR /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch 1
SYS ORACLE_BASE / 1
SYS ORACLE_HOME / 1

Of course, don’t leave it under ORACLE_HOME, which is on a filesystem for binaries where you don’t want to put variable-size files. But it is good to have a default.
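
If you like having a default but want it outside of ORACLE_HOME, the directory object can simply be redefined; a sketch, with an arbitrary path:

SQL> create or replace directory DATA_PUMP_DIR as '/u90/dpdump';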

 


SecureFiles on multi-datafiles tablespaces

Fri, 2017-04-21 14:47

When we have a tablespace with multiple datafiles, we are used to seeing the datafiles filled evenly, the extents being allocated in a round-robin fashion. In the old days, we used that to maximize performance, distributing the tables over all disks. Today, we use LVM striping, maximum Inter-Policy, or ASM even distribution. And we may even use bigfile tablespaces, so that we don’t care about having multiple datafiles.

But recently, during the test phase of a migration, I came upon something like this:
[Screenshot: SecureFile003]

To reproduce the case, I’ve created a tablespace with 8 datafiles:

SQL> create tablespace MYTABS datafile
2 '/tmp/MYTABS01.dbf' size 1M autoextend on maxsize 100M,
3 '/tmp/MYTABS02.dbf' size 1M autoextend on maxsize 100M,
4 '/tmp/MYTABS03.dbf' size 1M autoextend on maxsize 100M,
5 '/tmp/MYTABS04.dbf' size 1M autoextend on maxsize 100M,
6 '/tmp/MYTABS05.dbf' size 1M autoextend on maxsize 100M,
7 '/tmp/MYTABS06.dbf' size 1M autoextend on maxsize 100M,
8 '/tmp/MYTABS07.dbf' size 1M autoextend on maxsize 100M,
9 '/tmp/MYTABS08.dbf' size 1M autoextend on maxsize 100M
10 /
 
Tablespace created.

SecureFiles

This was an 11g to 12c migration with Data Pump, and a good occasion to convert all LOBs to SecureFiles with the transform=lob_storage:securefile parameter. This tablespace is the one where the LOB segments are stored. I reproduced it with:

SQL> create table MYTABLE ( x clob ) tablespace USERS
2 LOB(x) store as securefile MYLOBSEG (tablespace MYTABS disable storage in row);
 
Table created.

Then I inserted about 80MB:

SQL> insert into MYTABLE select lpad('x',100000) from xmltable('1 to 8000') ;
8000 rows created.
 
SQL> select file_id,file_name,blocks,user_blocks from dba_data_files where tablespace_name='MYTABS' order by file_id;
 
FILE_ID FILE_NAME BLOCKS USER_BLOCKS
---------- ------------------------------ ---------- -----------
61 /tmp/MYTABS01.dbf 128 120
62 /tmp/MYTABS02.dbf 128 120
63 /tmp/MYTABS03.dbf 128 120
64 /tmp/MYTABS04.dbf 9344 9336
65 /tmp/MYTABS05.dbf 128 120
66 /tmp/MYTABS06.dbf 128 120
67 /tmp/MYTABS07.dbf 128 120
68 /tmp/MYTABS08.dbf 128 120
 
8 rows selected.
 
SQL> select file_id,segment_type,sum(blocks) from dba_extents where tablespace_name='MYTABS' group by file_id,segment_type order by file_id,segment_type;
 
FILE_ID SEGMENT_TYPE SUM(BLOCKS)
---------- ------------------ -----------
64 LOBSEGMENT 9216
65 LOBINDEX 8
65 LOBSEGMENT 24

And I continued to load rows, and observed that the datafiles were filled to their maxsize one after the other, in neither numeric nor alphabetical order.
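
The additional load was nothing more than repeating the same kind of insert; a sketch of what that looks like (the loop count is arbitrary):

begin
  for i in 1 .. 5 loop
    -- same batch as above, roughly 80MB of LOB data per iteration
    insert into MYTABLE select lpad('x',100000) from xmltable('1 to 8000');
    commit;
  end loop;
end;
/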

SQL> select file_id,file_name,blocks,user_blocks from dba_data_files where tablespace_name='MYTABS' order by file_id;
 
FILE_ID FILE_NAME BLOCKS USER_BLOCKS
---------- ------------------------------ ---------- -----------
61 /tmp/MYTABS01.dbf 8320 8312
62 /tmp/MYTABS02.dbf 12416 12408
63 /tmp/MYTABS03.dbf 12416 12408
64 /tmp/MYTABS04.dbf 12416 12408
65 /tmp/MYTABS05.dbf 128 120
66 /tmp/MYTABS06.dbf 128 120
67 /tmp/MYTABS07.dbf 128 120
68 /tmp/MYTABS08.dbf 128 120
 
8 rows selected.
 
SQL> select file_id,segment_type,sum(blocks) from dba_extents where tablespace_name='MYTABS' group by file_id,segment_type order by file_id,segment_type;
 
FILE_ID SEGMENT_TYPE SUM(BLOCKS)
---------- ------------------ -----------
61 LOBSEGMENT 8192
62 LOBSEGMENT 12288
63 LOBSEGMENT 12288
64 LOBSEGMENT 12288
65 LOBINDEX 8
65 LOBSEGMENT 24

Here are some screenshots during this load

[Screenshots: SecureFile001, SecureFile002, SecureFile003]

BasicFiles

This occurs only with SecureFiles. With the same load into a LOB stored as BasicFile I got the following distribution:
[Screenshot: SecureFile004]

Note that the inserts into the BasicFile LOB were about two times slower than the same load into SecureFiles.
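
The BasicFile DDL is not shown here; it was presumably the same table definition with the LOB stored as a BasicFile, something like this (the table and segment names are made up for the illustration):

create table MYTABLE_BASIC ( x clob ) tablespace USERS
 LOB(x) store as basicfile MYLOBSEG_BASIC (tablespace MYTABS disable storage in row);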

So what?

I don’t see any bad consequences of this, and it may even be an improvement when concurrent sessions are loading LOBs. As the datafile that is picked looks random, a guess is that it depends on a process ID, in order to distribute a concurrent load. And if you want to distribute data over multiple disks, then you should do it at a lower level. However, this changes 30 years of behavior, so it’s better to be aware of it: datafiles may reach their maxsize even when the tablespace is not full. In order to be sure that this is the expected behavior, I opened an SR with an easy-to-reproduce testcase. The status has been ‘review update’ for 9 days (SR 3-14677784041) and I’ll update this post if I get an answer.

 


SQL Server 2017 AlwaysOn AGs and new read-scale architectures

Fri, 2017-04-21 06:40

As you probably know, Microsoft announced the official name for SQL Server vNext during the last Microsoft Data Amp event on April 19 2017. It officially becomes SQL Server 2017.

In my first blog post, I wrote about SQL Server on Linux and the introduction of the availability group features. At that time the SQL Server release was CTP 1.3. As a reminder, with previous CTP releases listeners were unusable because they did not persist across switchover events, and they did not provide any transparent redirection capabilities. Today we are on CTP 2.0, and this latest CTP release comes with an important improvement concerning AGs: the support of listeners.

In this blog post I don’t want to write about creating an AG listener in a Linux environment. The process is basically the same as creating a listener on Windows and has been well documented by Microsoft for a while. But several things shipped with the latest CTP 2.0 have drawn my attention and will allow extending some scenarios with AGs.

First of all, from the Microsoft documentation we may notice a “Create for read-scale only” section. In a nutshell, we are now able to create a cluster-less availability group. Indeed, in this context we want to prioritize scale-out scenarios over HA, meaning the cluster layer is not mandatory here. That’s the point. Linux or Windows operating system in this case? Well, we may have a long debate here, but let’s say we will use a Linux operating system for this scenario.

You probably also noticed that the CLUSTER_TYPE parameter now includes a new EXTERNAL value. So we may create an availability group by using one of the following values:

  • WSFC = A Windows Server Failover Cluster will manage the availability group
  • EXTERNAL = An external entity will manage the availability group (pacemaker on Linux so far)
  • NONE = No cluster entity will manage the availability group

In my opinion, introducing the EXTERNAL value makes sense compared to the previous CTP releases. Indeed, we could previously only specify the NONE value, both to use an external entity to manage AGs and for read-scale scenarios with no cluster at all, which made it ambiguous.

At the same time, the FAILOVER_MODE parameter also includes a new EXTERNAL value, which must be specified when using an external entity to manage AG failover. Before going further in this blog post, let’s set the scene: a pretty basic environment which includes 3 replicas on Linux involved in a read-scale scenario, meaning no extra layer of HA management and asynchronous commit mode as well.

[Figure: AG Linux read-scale scenario]

As a reminder, implementing a listener with corresponding read-only routes is very useful for the following reasons:

  • Applications are transparently redirected to the corresponding read-only replica when the read-intent parameter is specified
  • Since SQL Server 2016, applications may be redirected in a round-robin fashion, so there is no need to implement an extra component (ok … the round-robin algorithm is pretty basic, but that’s not so bad actually)
  • Applications do not need to know the underlying infrastructure. They have to connect to the AG listener and that’s it.

But in such a scenario, where no cluster layer is installed, we cannot benefit from a floating virtual IP, which normally provides the automatic redirection to the primary replica in case of a failover event. And as you already know, connections must be redirected to the primary in order to benefit from the transparent redirection / round-robin capabilities. So the remaining question is: how do we achieve redirection without a floating IP address in this case?

Firstly, let’s say that creating an AG listener on Linux doesn’t imply creating a corresponding virtual IP and network name on the cluster side, especially in this case where the AG doesn’t rely on the cluster layer. However, creating an AG listener that relies on the primary replica’s IP address to benefit from transparent / round-robin redirection remains a viable option. This is only the first part of the solution, because we also have to address scenarios that include switchover events. Indeed, in that case the primary replica may change and the current listener configuration becomes invalid (it refers to the previous primary’s IP address). At this stage, I would like to thank again @MihaelaBlendea from Microsoft who put me on the right track.

This is not an official / supported solution but it seems to work well according to my tests. Update 21.04.2017 : Mihaela has confirmed this is a supported solution from Microsoft.

The solution consists in including all the replica IP addresses of the topology in the listener definition, and using a DNS record to point to the current primary replica after a manual failover event. Therefore, applications only have to know the DNS record to connect to the underlying SQL Server infrastructure.

Here is the definition of my availability group, including the listener:

CREATE AVAILABILITY GROUP [AdvGrpDRLinux]
WITH
(
    DB_FAILOVER = ON, --> Trigger the failover of the entire AG if one DB fails 
    CLUSTER_TYPE = NONE 
)
FOR REPLICA ON
N'LINUX07'
WITH
(
    ENDPOINT_URL = N'tcp://192.168.40.23:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC, --> use direct seeding
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
),
N'LINUX08'
WITH
( 
    ENDPOINT_URL = N'tcp://192.168.40.24:5022', 
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL, 
    SEEDING_MODE = AUTOMATIC, --> use direct seeding
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
),
N'LINUX09'
WITH
( 
    ENDPOINT_URL = N'tcp://192.168.40.25:5022', 
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL, 
    SEEDING_MODE = AUTOMATIC, --> use direct seeding
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
);
GO

ALTER AVAILABILITY GROUP [AdvGrpDRLinux] GRANT CREATE ANY DATABASE;
GO


ALTER AVAILABILITY GROUP [AdvGrpDRLinux]   
ADD LISTENER 'lst-advgrplinux' 
( 
	WITH IP ( ('192.168.40.23', '255.255.255.0'), --> LINUX07 IP Address
			  ('192.168.40.24', '255.255.255.0'), --> LINUX08 IP Address
			  ('192.168.40.25', '255.255.255.0')  --> LINUX09 IP Address
	        ) 
		, PORT = 1433 
);   
GO

Notable parameters are:

  • CLUSTER_TYPE = NONE
  • AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
  • FAILOVER_MODE = MANUAL
  • Direct seeding is enabled.

Let’s have a look at the AG configuration by using AG DMVs:

[Screenshot: AG configuration]

Then here is the listener configuration:

[Screenshot: AG listener configuration]

And finally the configuration of my read-only routes and the priority list for redirection:

[Screenshot: read-only routing configuration]

You may notice that I use round-robin capabilities for each replica.
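
The T-SQL behind those screenshots is not listed, but a read-only routing configuration with the SQL Server 2016 round-robin grouping would look roughly like this (a sketch; the port 1433 and the exact grouping are assumptions, and only LINUX07 as primary is shown):

ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX08' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'tcp://192.168.40.24:1433'));
GO

ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX07' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('LINUX08','LINUX09'))));
GO

The inner parentheses around ('LINUX08','LINUX09') are what make SQL Server distribute read-intent connections between those two replicas in a round-robin fashion.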

I also created a DNS A record with the address of the current primary replica (lst-advgrplinux, 192.168.40.23). The DNS record will be used by applications to connect to the AdvGrpDRLinux AG.

Let’s test the new configuration by using the SQLCMD tool with the -K READONLY option. Redirection and the round-robin feature come into play. The first test is conclusive.

[Screenshot: first SQLCMD test]

Go ahead and let’s perform a manual failover. In this case the primary replica is still available, so I just switched momentarily to synchronous mode to avoid having to resume the replicated databases on the secondary replicas afterwards. Then I performed a switchover to the LINUX08 replica. According to the Microsoft documentation, in order to guarantee no data loss I also temporarily changed REQUIRED_COPIES_TO_COMMIT to 1. Finally, after performing the manual failover successfully, I switched back to asynchronous mode (REQUIRED_COPIES_TO_COMMIT must be reverted to 0 in this case).

USE [master]
GO

-- switch momentary to synchronous mode
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX07' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX08' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX09' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO

ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
SET (REQUIRED_COPIES_TO_COMMIT = 1)

-- demote old primary replica LINUX07
ALTER AVAILABILITY GROUP [AdvGrpDRLinux] SET (ROLE = SECONDARY); 

-- switch to new primary replica LINUX08
:CONNECT LINUX08 -U sa -PXXXXX
ALTER AVAILABILITY GROUP [AdvGrpDRLinux] FAILOVER;
GO

-- revert back to asynchronous mode
:CONNECT LINUX08 -U sa -PXXXXX
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
SET (REQUIRED_COPIES_TO_COMMIT = 0)

ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX07' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX08' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX09' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)
GO

After updating the DNS record to point to the new primary replica (LINUX08, IP address 192.168.40.24), transparent redirection and round-robin capabilities continued to work correctly.

[Screenshot: second SQLCMD test]

See you soon for other interesting new scenarios with availability groups on Linux!

OGG: Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2

Tue, 2017-04-18 06:20

Another GoldenGate 12.2 one: Some days ago I had this in the GoldenGate error log:

2017-04-12 14:56:08  WARNING OGG-02901  Oracle GoldenGate Capture for Oracle, extimch.prm:  Replication of UDT and ANYDATA from redo logs is not supported with the Oracle compatible parameter setting. Using fetch instead.
2017-04-12 14:56:08  ERROR   OGG-02912  Oracle GoldenGate Capture for Oracle, extimch.prm:  Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 or later.

Seemed pretty obvious that I was missing a patch.

I headed over to MOS and searched for the patch mentioned:

[Screenshot: MOS search for patch 17030189]

Hm, only two hits, and I was neither on Exadata nor on the 12.1.0.1 database release. After digging around a bit more in various MOS notes, there was one (Doc ID 2091679.1) which finally mentioned a workaround. When you install GoldenGate 12.2 you get, by default, a script in the GoldenGate Home called “prvtlmpg.plb”. Looking at the script:

oracle@xxxxx:/u01/app/ogg/ch_src/product/12.2.0.1.160823/ [xxxxx] ls prvtlmpg.plb
prvtlmpg.plb
oracle@xxxxx:/u01/app/ogg/ch_src/product/12.2.0.1.160823/ [xxxxx] strings prvtlmpg.plb
WHENEVER SQLERROR EXIT
set verify off 
set feedback off
set echo off
set serveroutput on
column quotedMiningUser new_value quotedMiningUser noprint
column quotedCurrentSchema new_value quotedCurrentSchema noprint
variable status number
prompt
prompt Oracle GoldenGate Workaround prvtlmpg
prompt
prompt This script provides a temporary workaround for bug 17030189.
prompt It is strongly recommended that you apply the official Oracle 
prompt Patch for bug 17030189 from My Oracle Support instead of using
prompt this workaround.
prompt
prompt This script must be executed in the mining database of Integrated
prompt Capture. You will be prompted for the username of the mining user.
prompt Use a double quoted identifier if the username is case sensitive
prompt or contains special characters. In a CDB environment, this script
prompt must be executed from the CDB$ROOT container and the mining user
prompt must be a common user.
prompt
prompt ===========================  WARNING  ==========================
prompt You MUST stop all Integrated Captures that belong to this mining
prompt user before proceeding!
prompt ================================================================

Really? You get a script to work around a known issue by default? Let’s try:

SQL> @prvtlmpg.plb

Oracle GoldenGate Workaround prvtlmpg

This script provides a temporary workaround for bug 17030189.
It is strongly recommended that you apply the official Oracle
Patch for bug 17030189 from My Oracle Support instead of using
this workaround.

This script must be executed in the mining database of Integrated
Capture. You will be prompted for the username of the mining user.
Use a double quoted identifier if the username is case sensitive
or contains special characters. In a CDB environment, this script
must be executed from the CDB$ROOT container and the mining user
must be a common user.

===========================  WARNING  ==========================
You MUST stop all Integrated Captures that belong to this mining
user before proceeding!
================================================================

Enter Integrated Capture mining user: GGADMIN

Installing workaround...                                                                                         
No errors.                                                                                                       
No errors.
No errors.                                                                                                       
Installation completed.                                                                                          
SQL>                                                                                                             

And finally the extract started fine. Interesting… There seems to be a patch for the 11.2.0.4.7 DB PSU in development, but nothing else for the moment. Even the latest PSU for 11.2.0.4 does not seem to include the patch.

 


Welcome to M|17, part 2

Mon, 2017-04-17 03:57

Welcome to the second day of MariaDB’s first user conference.
On the 12th, at 09:00, the first-ever experimental MariaDB Associate certification exam started, and I was glad to be among the first to participate.
This exam was offered free of charge to all registered attendees.
As I wrote above, it was really experimental, because all candidates faced many problems.
Certification
First, as this exam was proctored, the authentication process was very, very slow, essentially due to the overloaded network.
Once done, we were all expecting multiple-choice questions as in almost all other certifications; instead, we had to perform real-world database administration tasks on a remote Linux box where a MariaDB server was installed.
The following skills were tested:
Server configuration
Security
Users and Roles
Schema Operations
Query Performance
Backup and Restore
The testing duration was 90 minutes, but when you are facing network breaks and slowness, it is really short.
To pass the exam you need 48 points out of a total of 60, so 80%.
One thing you must not forget when you are finished is to restart the MariaDB server, otherwise all your server configuration answers are lost.
They kindly warned us about this before we started, but at the end there was no alert and communication simply stopped.
This certification will definitely be available online in one or two months.
After lunch, which was, as on the day before, a big buffet but more exotic, I decided to go to the session of Ashraf Sharif from Severalnines:
Step-By-Step: Clustering with Galera and Docker Swarm
I was really happy to see him, as we have often collaborated on several ClusterControl support cases. He was happy too.
Unfortunately for him, he had to speed up, because 45 minutes was not enough for such a vast topic.
It was even quite a challenge, as he had more than 140 slides and a demo.
Several keynotes were then proposed to close this 2-day event in the conference center.
Again the air-conditioning was too cold, and this time I got sick.
Gunnar Hellekson, director of Product Management for Red Hat Enterprise Linux, started with Open Source in a dangerous world.
He mainly discussed how we can leverage the amazing innovation coming out of open source communities while still plotting a journey with secure, stable and supported open source platforms, illustrating this with some examples of customers and organizations that use open source not just to innovate but to gain more competitive advantage.
The last keynote was given by Michael Widenius himself, Everything Old is New: the return of relational.
As the database landscape is changing, evolving very fast, and is no longer the property of a few such as Oracle, IBM or Microsoft, he is convinced that, even if NoSQL may work for a subset of use cases, open source relational databases are delivering more and more capabilities for NoSQL use cases at a rapid pace.

As a conclusion to MariaDB’s first user conference, my overall impression is positive: it was well organized, all the staff were enthusiastic and open, and we could meet and talk with a lot of different people.
So: a sweet, juicy, well-dosed workshop, some high-level sessions to bring sweetness and acidity into perfect harmony, 3 or 4 spicy keynotes to enhance the taste of the event spirit, all ingredients into a cocktail shaker, shake, and you obtain the delicious and unforgettable M|17 cocktail.

 


Listing the extensions available in PostgreSQL

Mon, 2017-04-17 02:52

When you follow this blog regularly you probably already know that PostgreSQL is highly extensible. There are quite a few extensions which ship by default and are ready to use. How can you know what is there? The most obvious way is to check the documentation. But did you know there are other ways of getting this information?

What you can do to list the available extensions is to check the files on disk at the location where you installed PostgreSQL, in my case:

postgres@pgbox:/u01/app/postgres/product/96/db_2/share/extension/ [PG962] pwd
/u01/app/postgres/product/96/db_2/share/extension
postgres@pgbox:/u01/app/postgres/product/96/db_2/share/extension/ [PG962] ls
adminpack--1.0.sql                  hstore--1.3--1.4.sql                  pageinspect.control                      plperlu--unpackaged--1.0.sql
adminpack.control                   hstore--1.4.sql                       pageinspect--unpackaged--1.0.sql         plpgsql--1.0.sql
autoinc--1.0.sql                    hstore.control                        pg_buffercache--1.0--1.1.sql             plpgsql.control
autoinc.control                     hstore_plperl--1.0.sql                pg_buffercache--1.1--1.2.sql             plpgsql--unpackaged--1.0.sql
...

The issue with this approach is that chances are high that you have no clue what the extensions are about. Better ask the database by checking pg_available_extensions:

postgres=# select * from pg_available_extensions;
        name        | default_version | installed_version |                               comment                                
--------------------+-----------------+-------------------+----------------------------------------------------------------------
 plpgsql            | 1.0             | 1.0               | PL/pgSQL procedural language
 plperl             | 1.0             |                   | PL/Perl procedural language
 plperlu            | 1.0             |                   | PL/PerlU untrusted procedural language
 plpython2u         | 1.0             |                   | PL/Python2U untrusted procedural language
 plpythonu          | 1.0             |                   | PL/PythonU untrusted procedural language
 pltcl              | 1.0             |                   | PL/Tcl procedural language
 pltclu             | 1.0             |                   | PL/TclU untrusted procedural language
 adminpack          | 1.0             |                   | administrative functions for PostgreSQL
 bloom              | 1.0             |                   | bloom access method - signature file based index
 btree_gin          | 1.0             |                   | support for indexing common datatypes in GIN
 btree_gist         | 1.2             |                   | support for indexing common datatypes in GiST
 chkpass            | 1.0             |                   | data type for auto-encrypted passwords
...

Here you can check the “comment” column which explains what an extension is about.
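
For example, to list only what is available but not yet installed, together with the description, a simple query on the same view does the job:

select name, default_version, comment
  from pg_available_extensions
 where installed_version is null
 order by name;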

There is another catalog view which gives you even more information, e.g. the dependencies between extensions, pg_available_extension_versions:

postgres=# select * from pg_available_extension_versions where requires is not null;
       name        | version | installed | superuser | relocatable | schema |      requires       |                           comment                            
-------------------+---------+-----------+-----------+-------------+--------+---------------------+--------------------------------------------------------------
 earthdistance     | 1.1     | f         | t         | t           |        | {cube}              | calculate great-circle distances on the surface of the Earth
 hstore_plperl     | 1.0     | f         | t         | t           |        | {hstore,plperl}     | transform between hstore and plperl
 hstore_plperlu    | 1.0     | f         | t         | t           |        | {hstore,plperlu}    | transform between hstore and plperlu
 hstore_plpythonu  | 1.0     | f         | t         | t           |        | {hstore,plpythonu}  | transform between hstore and plpythonu
 hstore_plpython2u | 1.0     | f         | t         | t           |        | {hstore,plpython2u} | transform between hstore and plpython2u
 hstore_plpython3u | 1.0     | f         | t         | t           |        | {hstore,plpython3u} | transform between hstore and plpython3u
 ltree_plpythonu   | 1.0     | f         | t         | t           |        | {ltree,plpythonu}   | transform between ltree and plpythonu
 ltree_plpython2u  | 1.0     | f         | t         | t           |        | {ltree,plpython2u}  | transform between ltree and plpython2u
 ltree_plpython3u  | 1.0     | f         | t         | t           |        | {ltree,plpython3u}  | transform between ltree and plpython3u
(9 rows)

Once you have installed an extension, you have two options for displaying that information. Either you use the psql shortcut:

postgres=# create extension hstore;
CREATE EXTENSION
postgres=# \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

… or you ask pg_extension:

postgres=# select * from pg_extension ;
 extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition 
---------+----------+--------------+----------------+------------+-----------+--------------
 plpgsql |       10 |           11 | f              | 1.0        |           | 
 hstore  |       10 |         2200 | t              | 1.4        |           | 
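
The version columns also hint at the related maintenance commands: an installed extension can be upgraded to its default version, or removed again. A quick sketch:

postgres=# alter extension hstore update;
postgres=# drop extension hstore;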

Btw: Did you know that you can tell psql to show you the actual statement that gets executed when you use a shortcut?

postgres=# \set ECHO_HIDDEN on
postgres=# \dx
********* QUERY **********
SELECT e.extname AS "Name", e.extversion AS "Version", n.nspname AS "Schema", c.description AS "Description"
FROM pg_catalog.pg_extension e LEFT JOIN pg_catalog.pg_namespace n ON n.oid = e.extnamespace LEFT JOIN pg_catalog.pg_description c ON c.objoid = e.oid AND c.classoid = 'pg_catalog.pg_extension'::pg_catalog.regclass
ORDER BY 1;
**************************

                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

Happy extending …

 


SQLcl on Bash on Ubuntu on Windows

Sun, 2017-04-16 11:54

I’m running my laptop on Windows, which may sound weird, but Linux is unfortunately not an option when you exchange Microsoft Word documents, manage your e-mails and calendar with Outlook and present with Powerpoint using dual screen (I want to share on the beamer only the slides or demo screen, not my whole desktop). However, I have 3 ways to enjoy GNU/Linux: Cygwin to operate on my laptop, VirtualBox to run Linux hosts, and Cloud services when free trials are available.

Now that Windows 10 has a Linux subsystem, I’ll try it to see if I still need Cygwin.
In summary, I’ll still use Cygwin, but may prefer this Linux subsystem to run SQLcl, the SQL Developer command line, from my laptop.

Bash on Ubuntu on Windows

In this post I’ll detail what I had to set up to get the following:
[Screenshot: CaptureWin10bash000]
Bash on Windows 10 has been available for several months, but with no interaction with the Windows system except access to the filesystems. I didn’t try that. This month, Microsoft has released a new update, called ‘Creator Update’ for whatever reason.

Creator Update

You will probably have no choice but to update to ‘Creator Update’ soon, but for the moment you have to download Windows10Upgrade9252.exe from https://www.microsoft.com/en-us/software-download/windows10

Windows Subsystem for Linux

You enable the feature from Control Panel -> Programs and Features -> Turn Windows features on and off:
[Screenshot: CaptureWin10bash002]

This requires a reboot. Windows is not yet an OS where you can install or enable features without closing everything. But at least in Windows 10 the reboot is very fast.

Developer mode

This is a beta feature and requires enabling developer mode:
[Screenshot: CaptureWin10bash003]

You do that in Settings -> Update and Security -> For developers:

[Screenshot: CaptureWin10bash001]

Bash

Now, when you run it (type Bash in the start menu) it installs a subset of Ubuntu (downloaded from the web):
[Screenshot: CaptureWin10bash005]
It asks for a user and password. You will need the password to sudo to root.
You are in Windows/System32 here, which is ugly, so better exit and run again ‘Bash on Ubuntu on Windows’.

HOME

All my customization (.bash_profile .bashrc .vimrc .tmux.conf .ssh/config …) is in my Cygwin environment, and I want to share it for the time I’ll be running both Cygwin and Bash on Ubuntu on Windows. For this, I sudo and change my entry in /etc/passwd to point my home to my Cygwin home:
fpa:x:1000:1000:"",,,:/mnt/d/Dropbox/cygwin-home/:/bin/bash

Mount

Here are the mount points I have on Cygwin
$ mount
C:/cygwin64/bin on /usr/bin type ntfs (binary,auto)
C:/cygwin64/lib on /usr/lib type ntfs (binary,auto)
C:/cygwin64 on / type ntfs (binary,auto)
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
D: on /cygdrive/d type ntfs (binary,posix=0,user,noumount,auto)

My C: and D: windows drives are mounted in /cygdrive

Here are the mounts I have on the Windows Subsystem for Linux:
root@dell-fpa:/mnt# mount
rootfs on / type lxfs (rw,noatime)
data on /data type lxfs (rw,noatime)
cache on /cache type lxfs (rw,noatime)
mnt on /mnt type lxfs (rw,noatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,noatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,noatime)
none on /dev type tmpfs (rw,noatime,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,noatime)
none on /run type tmpfs (rw,nosuid,noexec,noatime,mode=755)
none on /run/lock type tmpfs (rw,nosuid,nodev,noexec,noatime)
none on /run/shm type tmpfs (rw,nosuid,nodev,noatime)
none on /run/user type tmpfs (rw,nosuid,nodev,noexec,noatime,mode=755)
C: on /mnt/c type drvfs (rw,noatime)
D: on /mnt/d type drvfs (rw,noatime)
root on /root type lxfs (rw,noatime)
home on /home type lxfs (rw,noatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noatime)

Because I have scripts and configuration files that mention /cygdrive, I’ve created symbolic links for them:

fpa@dell-fpa:/mnt$ sudo su
[sudo] password for fpa:
root@dell-fpa:/mnt# mkdir /cygdrive
root@dell-fpa:/# ln -s /mnt/c /cygdrive/c
root@dell-fpa:/# ln -s /mnt/d /cygdrive/D

chmod

The first thing I do from my bash shell is to ssh to other hosts:


fpa@dell-fpa:/mnt/c/Users/fpa$ ssh 192.168.78.104
Bad owner or permissions on /mnt/d/Dropbox/cygwin-home//.ssh/config

Ok, the permissions on .ssh were set from Cygwin, let’s try fixing them from Bash on Ubuntu on Windows:

fpa@dell-fpa:/mnt/c/Users/fpa$ chmod 644 /mnt/d/Dropbox/cygwin-home//.ssh/config
fpa@dell-fpa:/mnt/c/Users/fpa$ ls -ld /mnt/d/Dropbox/cygwin-home//.ssh/config
-rw-rw-rw- 1 root root 5181 Mar 5 16:56 /mnt/d/Dropbox/cygwin-home//.ssh/config

This is not what I want. With 644 I expect -rw-r--r--

Let’s try 444:

fpa@dell-fpa:/mnt/c/Users/fpa$ chmod 444 /mnt/d/Dropbox/cygwin-home//.ssh/config
fpa@dell-fpa:/mnt/c/Users/fpa$ ls -ld /mnt/d/Dropbox/cygwin-home//.ssh/config
-r--r--r-- 1 root root 5181 Mar 5 16:56 /mnt/d/Dropbox/cygwin-home//.ssh/config
fpa@dell-fpa:/mnt/c/Users/fpa$ ssh 192.168.78.104
Last login: Sun Apr 16 15:18:07 2017 from 192.168.78.1
...

Ok, this works but there’s a problem. It seems that Bash on Ubuntu on Windows doesn’t allow setting permissions differently for user, group and others.

SQLcl

The second thing I do from bash on my laptop is to connect to databases with SQLcl. For Cygwin I had an alias that runs the sql.bat script, because Cygwin can run .bat files. When I run SQLcl from Cygwin, I run the Windows JDK. This doesn’t work in Bash on Ubuntu on Windows because we are in a Linux subsystem. But we don’t need to, because SQLcl can be run directly from the sql bash script, calling the Linux JDK from the Linux subsystem. There’s only one thing to do: download the Linux JDK and set JAVA_HOME to its directory.

In my .bashrc I have the following to set the ‘sql’ alias depending on which environment I am in:


if [[ $(uname -a) =~ CYGWIN ]]; then
alias sql='/cygdrive/D/Soft/sqlcl/bin/sql.bat'
else
alias sql='JAVA_HOME=/mnt/d/Soft/jdk1.8.0-Linux /cygdrive/D/Soft/sqlcl/bin/sql'
fi

What I observe here is that it is much faster (or less slow…) to start the JVM from the Linux subsystem.
Here, 4 seconds to start SQLcl, connect and exit:

fpa@dell-fpa:/tmp$ time sql sys/oracle@//192.168.78.104/pdb1 as sysdba <<
 
real 0m4.684s
user 0m3.750s
sys 0m2.484s
 
fpa@dell-fpa:/tmp$ uname -a
Linux dell-fpa 4.4.0-43-Microsoft #1-Microsoft Wed Dec 31 14:42:53 PST 2014 x86_64 x86_64 x86_64 GNU/Linux

Here is the same from Windows (timed with Cygwin, but running on Windows):

$ time sql sys/oracle@//192.168.78.104/pdb1 as sysdba <<
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
real 0m16.261s
user 0m0.000s
sys 0m0.015s
 
fpa@dell-fpa ~
$ uname -a
CYGWIN_NT-10.0 dell-fpa 2.7.0(0.306/5/3) 2017-02-12 13:18 x86_64 Cygwin

So what?

The Linux subsystem on Windows is not yet ready. The only thing I proved here is that it is faster to start a Java application from Linux. But for that I always have a VirtualBox VM started on my laptop anyway, and that is where it is faster to run it, on a real Linux system.

 


In-core logical replication will hit PostgreSQL 10

Thu, 2017-04-13 09:03

Finally, in PostgreSQL 10 (expected to be released this September) a long-awaited feature will probably appear: in-core logical replication. PostgreSQL has supported physical replication since version 9.0, and now the next step has happened with the implementation of logical replication. This will be a major help in upgrading PostgreSQL instances from one version to another with no (or almost no) downtime. In addition, this can be used to consolidate data from various instances into one instance for reporting purposes, or you can use it to distribute only a subset of your data to selected users on other instances. In contrast to physical replication, logical replication works on the table level, so you can replicate changes in one or more tables, one database, or all databases in a PostgreSQL instance, which is quite flexible.

In PostgreSQL, logical replication is implemented using a publisher and subscriber model. This means the publisher is the one who will send the data and the subscriber is the one who will receive and apply the changes. A subscriber can be a publisher as well, so you can build cascading logical replication. Here is an overview of a possible setup:

[Figure: logical replication overview]

For setting up logical replication when you do not start with an empty database, you’ll need to initially load the database you want to replicate to. How can you do that? I have two PostgreSQL 10 instances (built from the git sources) running on the same host:

Role         Port
Publisher    6666
Subscriber   6667

Lets assume we have this sample setup on the publisher instance:

drop table if exists t1;
create table t1 ( a int primary key
                , b varchar(100)
                );
with generator as 
 ( select a.*
     from generate_series ( 1, 5000000 ) a
    order by random()
 )
insert into t1 ( a,b ) 
     select a
          , md5(a::varchar)
       from generator;
select * from pg_size_pretty ( pg_relation_size ('t1' ));

On the subscriber instance there is the same table, but empty:

create table t1 ( a int primary key
                , b varchar(100)
                );

Before we start with the initial load lets take a look at the process list:

postgres@pgbox:/home/postgres/ [PUBLISHER] ps -ef | egrep "PUBLISHER|SUBSCRIBER"
postgres 17311     1  0 11:33 pts/0    00:00:00 /u01/app/postgres/product/dev/db_01/bin/postgres -D /u02/pgdata/PUBLISHER
postgres 17313 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: checkpointer process   
postgres 17314 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: writer process   
postgres 17315 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: wal writer process   
postgres 17316 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: autovacuum launcher process   
postgres 17317 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: stats collector process   
postgres 17318 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: bgworker: logical replication launcher   
postgres 17321     1  0 11:33 pts/1    00:00:00 /u01/app/postgres/product/dev/db_01/bin/postgres -D /u02/pgdata/SUBSCRIBER
postgres 17323 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: checkpointer process   
postgres 17324 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: writer process   
postgres 17325 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: wal writer process   
postgres 17326 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: autovacuum launcher process   
postgres 17327 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: stats collector process   
postgres 17328 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: bgworker: logical replication launcher   

You’ll notice that there is a new background process called “bgworker: logical replication launcher”. We’ll come back to that later.

Time to create our first publication on the publisher with the create publication command:

postgres@pgbox:/u02/pgdata/PUBLISHER/ [PUBLISHER] psql -X postgres
psql (10devel)
Type "help" for help.

postgres=# create publication my_first_publication for table t1;
CREATE PUBLICATION

On the subscriber we need to create a subscription by using the create subscription command:

postgres@pgbox:/u02/pgdata/SUBSCRIBER/ [SUBSCRIBER] psql -X postgres
psql (10devel)
Type "help" for help.

postgres=# create subscription my_first_subscription connection 'host=localhost port=6666 dbname=postgres user=postgres' publication my_first_publication;
ERROR:  could not create replication slot "my_first_subscription": ERROR:  logical decoding requires wal_level >= logical

Ok, good hint: wal_level must be raised to logical on both instances (see the sketch below). After changing that and restarting both instances, creating the subscription succeeds:
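
The change itself is the usual parameter change followed by a restart of each instance, roughly like this (editing postgresql.conf directly works just as well):

postgres=# alter system set wal_level = 'logical';
-- then restart both instances, e.g.:
-- pg_ctl -D /u02/pgdata/PUBLISHER restart -m fast
-- pg_ctl -D /u02/pgdata/SUBSCRIBER restart -m fast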

postgres@pgbox:/home/postgres/ [SUBSCRIBER] psql -X postgres
psql (10devel)
Type "help" for help.

postgres=# create subscription my_first_subscription connection 'host=localhost port=6666 dbname=postgres user=postgres' publication my_first_publication;
CREATE SUBSCRIPTION

If you are not on super fast hardware and check the process list again you’ll see something like this:

postgres 19465 19079 19 11:58 ?        00:00:04 postgres: SUBSCRIBER: bgworker: logical replication worker for subscription 16390 sync 16384  

On the subscriber, the “logical replication launcher” background process launched a worker process which syncs the table automatically (this can be avoided by using “NOCOPY DATA”):

postgres=# show port;
 port 
------
 6667
(1 row)

postgres=# select count(*) from t1;
  count  
---------
 5000000
(1 row)

Wow, that was really easy. You can find more details in the logfile of the subscriber instance:

2017-04-13 11:58:15.099 CEST - 1 - 19087 -  - @ LOG:  starting logical replication worker for subscription "my_first_subscription"
2017-04-13 11:58:15.101 CEST - 1 - 19463 -  - @ LOG:  logical replication apply for subscription my_first_subscription started
2017-04-13 11:58:15.104 CEST - 2 - 19463 -  - @ LOG:  starting logical replication worker for subscription "my_first_subscription"
2017-04-13 11:58:15.105 CEST - 1 - 19465 -  - @ LOG:  logical replication sync for subscription my_first_subscription, table t1 started
2017-04-13 11:59:03.373 CEST - 1 - 19082 -  - @ LOG:  checkpoint starting: xlog
2017-04-13 11:59:37.985 CEST - 2 - 19082 -  - @ LOG:  checkpoint complete: wrote 14062 buffers (85.8%); 1 transaction log file(s) added, 0 removed, 0 recycled; write=26.959 s, sync=2.291 s, total=34.740 s; sync files=13, longest=1.437 s, average=0.171 s; distance=405829 kB, estimate=405829 kB
2017-04-13 12:02:23.728 CEST - 2 - 19465 -  - @ LOG:  logical replication synchronization worker finished processing

On the publisher instance you get another process for sending the changes to the subscriber:

postgres 19464 18318  0 11:58 ?        00:00:00 postgres: PUBLISHER: wal sender process postgres ::1(41768) idle

Changes to the table on the publisher should now get replicated to the subscriber node:

postgres=# show port;
 port 
------
 6666
(1 row)
postgres=# insert into t1 (a,b) values (-1,'aaaaa');
INSERT 0 1
postgres=# update t1 set b='bbbbb' where a=-1;
UPDATE 1

On the subscriber node:

postgres=# show port;
 port 
------
 6667
(1 row)

postgres=# select * from t1 where a = -1;
 a  |   b   
----+-------
 -1 | aaaaa
(1 row)

postgres=# select * from t1 where a = -1;
 a  |   b   
----+-------
 -1 | bbbbb
(1 row)

As mentioned initially you can make the subscriber a publisher and the publisher a subscriber at the same time. So when we create this table on both instances:

create table t2 ( a int primary key );

Then create a publication on the subscriber node:

postgres=# create table t2 ( a int primary key );
CREATE TABLE
postgres=# show port;
 port 
------
 6667
(1 row)

postgres=# create publication my_second_publication for table t2;
CREATE PUBLICATION
postgres=# 

Then create the subscription to that on the publisher node:

postgres=# show port;
 port 
------
 6666
(1 row)

postgres=# create subscription my_second_subscription connection 'host=localhost port=6667 dbname=postgres user=postgres' publication my_second_publication;
CREATE SUBSCRIPTION

… we have a second logical replication the other way around:

postgres=# show port;
 port 
------
 6667
(1 row)
postgres=# insert into t2 values ( 1 );
INSERT 0 1
postgres=# insert into t2 values ( 2 );
INSERT 0 1
postgres=# 

On the other instance:

postgres=# show port;
 port 
------
 6666
(1 row)

postgres=# select * from t2;
 a 
---
 1
 2
(2 rows)

There are two new catalogs which give you information about subscriptions and publications:

postgres=# select * from pg_subscription;
 subdbid |        subname         | subowner | subenabled |                      subconninfo                       |      subslotname       |     subpublications     
---------+------------------------+----------+------------+--------------------------------------------------------+------------------------+-------------------------
   13216 | my_second_subscription |       10 | t          | host=localhost port=6667 dbname=postgres user=postgres | my_second_subscription | {my_second_publication}
(1 row)

postgres=# select * from pg_publication;
       pubname        | pubowner | puballtables | pubinsert | pubupdate | pubdelete 
----------------------+----------+--------------+-----------+-----------+-----------
 my_first_publication |       10 | f            | t         | t         | t
(1 row)
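
Beyond these catalogs, the released PostgreSQL 10 also provides views to see which tables belong to a publication and to monitor the subscription workers. A small sketch (output obviously depends on your setup):

-- tables published by each publication
select * from pg_publication_tables;
-- one row per logical replication apply/sync worker
select * from pg_stat_subscription;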

What a cool feature and so easy to use. Thanks to all who brought that into PostgreSQL 10, great work.

 

This article In-core logical replication will hit PostgreSQL 10 first appeared on Blog dbi services.

8 + 1 = 9, yes, true, but …

Thu, 2017-04-13 04:14

[image: dbca_mb_1]

Btw: if you really did that (the screenshot is from 12.1.0.2):

SQL> alter system set sga_target=210m scope=spfile;

System altered.

SQL> alter system set sga_max_size=210m scope=spfile;

System altered.

SQL> alter system set pga_aggregate_target=16m scope=spfile;

System altered.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE	12.1.0.2.0	Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> startup force
ORA-00821: Specified value of sga_target 212M is too small, needs to be at least 320M
SQL> 

The same for 12.2.0.1:

SQL> alter system set sga_target=210m scope=spfile;

System altered.

SQL> alter system set sga_max_size=210m scope=spfile;

System altered.

SQL> alter system set pga_aggregate_target=16m scope=spfile;

System altered.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE	12.2.0.1.0	Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production

SQL> startup force
ORA-00821: Specified value of sga_target 212M is too small, needs to be at least 468M
ORA-01078: failure in processing system parameters
SQL> 
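
If you end up in this situation, the instance will refuse to start with the values stored in the spfile. A minimal recovery sketch (paths are examples, adapt them to your environment):

create pfile='/tmp/initCDB.ora' from spfile;   -- works against the idle instance
-- edit /tmp/initCDB.ora: raise sga_target/sga_max_size to at least the value reported by ORA-00821
startup pfile='/tmp/initCDB.ora'
create spfile from pfile='/tmp/initCDB.ora';   -- write the corrected values back
shutdown immediate
startup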

To close this post, here is another screenshot that caught my eye yesterday:
[image: solarisx64_2]

Seems I totally missed that there was an x64 version of Solaris 8 and 9 :)

 

This article 8 + 1 = 9, yes, true, but … first appeared on Blog dbi services.

Oracle 12c – Why you shouldn’t do a crosscheck archivelog all in your regular RMAN backup scripts

Thu, 2017-04-13 02:28

Crosschecking in RMAN is quite cool stuff. With the RMAN crosscheck you can update an outdated RMAN repository about backups or archivelogs whose repository records do not match their physical status.

For example, if a user removes archived logs from disk with an operating system command, the repository (RMAN controlfile or RMAN catalog) still indicates that the logs are on disk, when in fact they are not. It is important to know that the RMAN CROSSCHECK command never deletes any operating system files or removes any repository records; it just updates the repository with the correct information. If you really want to delete something, you must use the DELETE command for these operations.

Manually removing archived logs or anything else out of the fast recovery area is something you should never do, however, in reality it still happens.

But when it happens, you want to know which files are no longer at their physical location. So why not run a crosscheck archivelog all regularly in your backup scripts? Isn’t that a good idea?

From my point of view it is not, for two reasons:

  • Your backup script runs slower because you do an extra step
  • But first and foremost, you will not notice if an archived log is missing

Let’s run a little test case. I simply move one archived log away and run the backup archivelog all command afterwards.

oracle@dbidg03:/u03/fast_recovery_area/CDB/archivelog/2017_03_30/ [CDB (CDB$ROOT)] mv o1_mf_1_61_dfso8r7p_.arc o1_mf_1_61_dfso8r7p_.arc.20170413a

RMAN> backup archivelog all;

Starting backup at 13-APR-2017 08:03:14
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=281 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=44 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 04/13/2017 08:03:17
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_61_dfso8r7p_.arc
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7

This is exactly what I expected. I want to have a clear error message in case an archived log is missing. I don’t want Oracle to skip over it and just continue as if nothing had happened. But what happens if I run a crosscheck archivelog all before running my backup command?

RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=281 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=44 device type=DISK
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_28/o1_mf_1_56_dfmzywt1_.arc RECID=73 STAMP=939802622
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_28/o1_mf_1_57_dfo40o1g_.arc RECID=74 STAMP=939839542
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_29/o1_mf_1_58_dfovy7cj_.arc RECID=75 STAMP=939864041
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_29/o1_mf_1_59_dfq7pcwz_.arc RECID=76 STAMP=939908847
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_60_dfrg8f8o_.arc RECID=77 STAMP=939948334
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_31/o1_mf_1_62_dfv0kybr_.arc RECID=79 STAMP=940032607
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_31/o1_mf_1_63_dfw5s2l8_.arc RECID=80 STAMP=940070724
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_04_12/o1_mf_1_64_dgw5mgsl_.arc RECID=81 STAMP=941119119
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_04_13/o1_mf_1_65_dgy552z0_.arc RECID=82 STAMP=941184196
Crosschecked 9 objects

validation failed for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_61_dfso8r7p_.arc RECID=78 STAMP=939988281
Crosschecked 1 objects
RMAN>

The crosscheck validation failed for the archived log which I moved beforehand. Perfect, the crosscheck has found the issue.

RMAN> list expired backup;

specification does not match any backup in the repository

RMAN> list expired archivelog all;

List of Archived Log Copies for database with db_unique_name CDB
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
78      1    61      X 30-MAR-2017 00:45:33
        Name: /u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_61_dfso8r7p_.arc

However, if I run the backup archivelog all afterwards, RMAN continues as if nothing had ever happened, and if you are not monitoring expired archived logs or backups, you will never notice it.

RMAN> backup archivelog all;

Starting backup at 13-APR-2017 08:05:01
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=56 RECID=73 STAMP=939802622
input archived log thread=1 sequence=57 RECID=74 STAMP=939839542
input archived log thread=1 sequence=58 RECID=75 STAMP=939864041
input archived log thread=1 sequence=59 RECID=76 STAMP=939908847
input archived log thread=1 sequence=60 RECID=77 STAMP=939948334
channel ORA_DISK_1: starting piece 1 at 13-APR-2017 08:05:01
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=62 RECID=79 STAMP=940032607
input archived log thread=1 sequence=63 RECID=80 STAMP=940070724
input archived log thread=1 sequence=64 RECID=81 STAMP=941119119
input archived log thread=1 sequence=65 RECID=82 STAMP=941184196
input archived log thread=1 sequence=66 RECID=83 STAMP=941184301
channel ORA_DISK_2: starting piece 1 at 13-APR-2017 08:05:01
channel ORA_DISK_2: finished piece 1 at 13-APR-2017 08:05:47
piece handle=/u03/fast_recovery_area/CDB/backupset/2017_04_13/o1_mf_annnn_TAG20170413T080501_dgy58fz7_.bkp tag=TAG20170413T080501 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:46
channel ORA_DISK_1: finished piece 1 at 13-APR-2017 08:06:07
piece handle=/u03/fast_recovery_area/CDB/backupset/2017_04_13/o1_mf_annnn_TAG20170413T080501_dgy58fy4_.bkp tag=TAG20170413T080501 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:06
Finished backup at 13-APR-2017 08:06:07

Starting Control File and SPFILE Autobackup at 13-APR-2017 08:06:07
piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_04_13/o1_mf_s_941184367_dgy5bh7w_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13-APR-2017 08:06:08

RMAN>

But is this really what I want? Probably not. Whenever an archived log is missing, RMAN should stop right away and throw an error message. This gives me the chance to check what went wrong and the possibility to correct it.

Conclusion

I don’t recommend running crosscheck archivelog all in your regular RMAN backup scripts. This is a command that should be run manually when it is needed. You just make your backup slower (ok, not by much, but still), and you will probably never notice when an archived log is missing, which can leave you with a database that can only be recovered up to the point before the missing archived log.
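
If you run the crosscheck manually from time to time, also check its outcome afterwards. A minimal sketch for a separate monitoring query (in V$ARCHIVED_LOG, a STATUS of ‘X’ means the log has been marked expired by a crosscheck):

select name, thread#, sequence#, completion_time
from   v$archived_log
where  status = 'X';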

 

This article Oracle 12c – Why you shouldn’t do a crosscheck archivelog all in your regular RMAN backup scripts first appeared on Blog dbi services.

Using WebLogic 12C RESTful interface to query a WebLogic Domain configuration

Thu, 2017-04-13 00:12

WebLogic 12.2.1 provides a new REST management interface with full access to all WebLogic Server resources.
This new interface provides an alternative to WLST scripting or JMX development for the management and monitoring of WebLogic Domains.
This blog explains how the RESTful interface can be used to determine a WebLogic Domain configuration and display its principal attributes.

For this purpose, a Search RESTful call will be used.
The RESTful URL to point to the search is: http://vm01.dbi-workshop.com:7001/management/weblogic/latest/edit/search
This search RESTful URL points to the root of the WebLogic Domain configuration managed beans tree.

The search call is an HTTP POST and requires a JSON structure to define the resources we are looking for.

{
    links: [],
    fields: [ 'name', 'configurationVersion' ],
    children: {
        servers: {
            links: [],
            fields: [ 'name','listenAddress','listenPort','machine','cluster' ],
            children: {
                SSL: {
                    fields: [ 'enabled','listenPort' ], links: []
                }
            }
        }
    }
}

The JSON structure above defines the search attributes provided in the HTTP POST.
This command searches for the WebLogic Domain name and version.
Then, for each server in the children list, it prints the name, listen address, listen port, machine name and cluster name if the server is a member of a cluster. Within each server’s children, it looks for the SSL entry and displays whether SSL is enabled and its listen port.

To execute this REST url from the Unix command line, we will use the Unix curl command:

curl -g --user monitor:******** -H X-Requested-By:MyClient -H Accept:application/json -H Content-Type:application/json -d "{ links: [], fields: [ 'name', 'configurationVersion' ], children: { servers: { links: [], fields: [ 'name', 'listenPort','machine','cluster' ], children: { SSL: { fields: [ 'listenPort' ], links: [] }} } } }" -X POST http://vm01.dbi-workshop.com:7001/management/weblogic/latest/edit/search

Below is a sample of the results provided by such command execution:

{
    "configurationVersion": "12.2.1.0.0",
    "name": "base_domain",
    "servers": {"items": [
    {
          "listenAddress": "vm01.dbi-workshop.com",
          "name": "AdminServer",
          "listenPort": 7001,
          "cluster": null,
          "machine": [
                 "machines",
                 "machine1"
          ],
          "SSL": {
                 "enabled": true,
                 "listenPort": 7002
          }
   },
   {
          "listenAddress": "vm01.dbi-workshop.com",
          "name": "server1",
          "listenPort": 7003,
          "cluster": [
                 "clusters",
                 "cluster1"
          ],
          "machine": [
                 "machines",
                 "machine1"
          ],
          "SSL": {
                 "enabled": false,
                 "listenPort": 7013
          }
  },
  {
          "listenAddress": "vm01.dbi-workshop.com",
          "name": "server2",
          "listenPort": 7004,
          "cluster": [
                "clusters",
                "cluster1"
          ],
          "machine": [
                "machines",
                "machine1"
          ],
          "SSL": {
                "enabled": false,
                "listenPort": 7014
          }
  },
  {
         "listenAddress": "vm01.dbi-workshop.com",
         "name": "server3",
         "listenPort": 7005,
         "cluster": null,
         "machine": [
                "machines",
                "machine1"
         ],
         "SSL": {
               "enabled": false,
               "listenPort": 7015
         }
  }
]}
}

This article Using WebLogic 12C RESTful interface to query a WebLogic Domain configuration first appeared on Blog dbi services.

Welcome to M|17

Wed, 2017-04-12 20:00

[image: m17bannernew]

Welcome to MariaDB’s first user conference

On the 11th, this big event started at 09:00 at the Conrad Hotel in New York, close to the One World Trade Center.
After the short registration process, where we received a full bag of goodies (mobile phone lens, Jolt charger, cap, note block, …),
we could choose between 3 workshops.
– Scaling and Securing MariaDB for High Availability
– MariaDB ColumnStore for High Performance Analytics
– Building Modern Applications with MariaDB

I decided to go to the first one presented by Michael de Groot, technical consultant at MariaDB.
After a theoretical introduction to the detailed MariaDB cluster technology and mechanisms (around 40 slides), we had to build a MariaDB cluster composed of 4 nodes from scratch, and I have to admit that this exercise was well prepared, as we just had to follow the instructions displayed on the screen.
At the end, meaning around 12:30, almost everybody had deployed the MariaDB cluster and was able to use and manage it.

Afterwards, it was time to get lunch. A big buffet of salads and sandwiches was waiting for us.
It was really nice because we could meet people such as Peter Zaitsev (Percona’s CEO) in a cool and relaxed atmosphere.

[image: Welcome-mariadb]
After lunch, a keynote was delivered by MariaDB CEO Michael Howard in the biggest conference room of the hotel, where around 400 people were present.
He mainly talked about the strategic orientation of MariaDB in the open source world for the coming years.
Unfortunately the air conditioning was too cold and a lot of people started sneezing, me included, and I had to keep my jacket on all the time.

Then, a special guest speaker, Joan Tay Kim Choo, Executive Director of Technology Operations at DBS Bank, talked about their success story:
how they migrated all their databases from Oracle Enterprise and DB2 to MariaDB.

Roger Bodamer, MariaDB Chief Product Officer, then also had his keynote session.
It was really interesting because he discussed how MariaDB will exploit the fundamental architectural changes in the cloud and how MariaDB will enable both OLTP and analytical use cases for enterprises at any scale.

Finally, at five o’clock the Welcome Reception and Technology Pavilion started, in other words a small party.
Good music, good red wines (the Cabernet was really good), good atmosphere.
We could meet all the speakers and I had the chance to meet Michael Widenius alias “Monty”, founder of the MySQL server, a great moment for me.
He gracefully accepted to take pictures with me, several times even, because the light was really bad.
[image: MontySaid2]

Around 18:30 the party was almost over. I was still there, one of the last guests, finishing my glass of Cabernet and thinking of tomorrow, the second day of this event, and all the sessions I planned to see.

 

This article Welcome to M|17 first appeared on Blog dbi services.

Failed to set logmining server parameter MAX_SGA_SIZE to value XX

Wed, 2017-04-12 08:44

When you see something like this in your GoldenGate error log when you try to start an extract:

2017-04-12 14:51:38  ERROR   OGG-02039  Oracle GoldenGate Capture for Oracle, extxxx.prm:  Failed to set logmining server parameter MAX_SGA_SIZE to value 24.
2017-04-12 14:51:38  ERROR   OGG-02042  Oracle GoldenGate Capture for Oracle, extxxx.prm:  OCI Error 23605.
2017-04-12 14:51:38  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, extxxx.prm:  PROCESS ABENDING.

… then you should increase the streams_pool_size (maybe you need to increase the sga parameters as well):

SQL> show parameter streams

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size                    big integer 23M

Go at least to 1GB and you should be fine.
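
As a sketch (the value is an example; if the SGA does not have enough free space, you may first need to increase sga_target/sga_max_size in the spfile and restart the instance):

SQL> alter system set streams_pool_size=1G scope=both;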

 

This article Failed to set logmining server parameter MAX_SGA_SIZE to value XX first appeared on Blog dbi services.

PostgreSQL 10 is just around the corner, time to dare the switch?

Wed, 2017-04-12 00:54

Some days ago Robert Haas published a great blog post about the features you can expect in the upcoming PostgreSQL 10 (probably in September this year). Besides what Robert describes in his blog: do you still build your database infrastructure on proprietary software? The time to move forward is now, let me explain why:

What you always hear when it is about replacing proprietary products with open source solutions is: it does not cost anything. Well, this is not entirely true. The software itself is free, and at least when it comes to PostgreSQL you are free to do whatever you want. But this does not mean that you do not need to spend money when using open source software. You will still need to either hire people who will operate what you need or spend some money on someone else operating it for you (in the cloud or not, that does not matter). The big difference is:

  • You won’t need to purchase licenses, fact
  • Internal or external: when you compare the effort to operate a proprietary database with the time required to operate an open source database, you’ll save money for sure, as you’ll usually reduce complexity. The database is there to do its work, not to generate huge amounts of administration effort.
  • When you need specific features that are not there yet, you’ll need to get in touch with the community and try to convince them to implement them, implement them yourself, or pay someone to implement them (all choices will cost some money).

So far for the money aspects. The real benefit you get when choosing PostgreSQL is that it does not lock you in. Of course, once you start using PostgreSQL your data is in PostgreSQL and you cannot just take it as it is and put it into another database. And of course, once you start implementing business logic inside the database you might feel that this locks you in again, but this is true for every product you use. Once you start using it you use it in the way the product works, and other products usually work in another way. The key point is that you are free to do whatever you want to do with it, and PostgreSQL tries to be as compliant with the SQL standard as possible. This is a complete change in thinking when you are used to working with the products of the big companies. PostgreSQL gets developed by people around the globe who in turn work for various companies around the globe. But there is no company called PostgreSQL; nobody can “buy” PostgreSQL. It is a pure open source project, comparable to the Linux kernel development. Nobody can “buy” the Linux kernel but everybody can build business around it, like the various commercial Linux distributions do. The very same is true of PostgreSQL. The PostgreSQL product itself will always be free, check the PostgreSQL license.

What you do not get from PostgreSQL itself are the tools you need around PostgreSQL, e.g. for monitoring, backup/restore management or to automate failover and failback. The tools are there of course, both open source products as well as commercial ones. The commercial ones usually require some kind of subscription (e.g. EnterpriseDB).

Another important point to know is that PostgreSQL is supported on many platforms, check the build farm to see what is currently tested and working. You are free to choose whatever platform you want to use: your company is mainly using Windows? Go and install PostgreSQL on Windows. Your main platform is FreeBSD? Go, install and use PostgreSQL on it.

But we need professional support! I know, you are used to working with the support organizations of the big companies and believe that only paid support is good support. If you want to (or are forced to), have a look here or contact us. There are plenty of companies which offer commercial support. In fact the official mailing lists provide outstanding support as well. Post your question to the mailing list which is the right one for your topic and it will get answered pretty fast, trust me. If you can’t believe it: test it (but better search the archives before asking a question, maybe the answer is already there).

There are no conferences for PostgreSQL! Really? Have a look here. The next one in Switzerland is here.

I will not go into a features discussion here. If you want to learn more about the features of PostgreSQL, search this blog or check the official documentation. There are tons of slides on SlideShare as well and many, many videos. If you really want to know what is currently going on in PostgreSQL development, check the PostgreSQL commitfest which is currently in progress. This is the place where patches are maintained. All is transparent and for every single patch you can check how the whole discussion started in the hackers mailing list, e.g. for declarative partitioning.

Think about it …

 

This article PostgreSQL 10 is just around the corner, time to dare the switch? first appeared on Blog dbi services.

Documentum – Deactivation of a docbase without uninstallation

Sun, 2017-04-09 03:19

At some of our customers, we often install new docbases for development purposes which are used only for a short time, to avoid cross-team interactions/interferences and this kind of thing. Creating new docbases is quite easy with Documentum but it still takes some time (unless you use silent installations or Docker components). Therefore installing/removing docbases over and over can be a pain. For this reason, we often install new docbases but then we don’t uninstall them, we simply “deactivate” them. As said above, some docbases are there only temporarily but we might need them again in the near future and therefore we don’t want to remove them completely.

In this blog, I will show you which files should be updated and how to simulate a “deactivation” so that the Documentum components will just act as if the docbase wasn’t there. I will describe the steps for the different applications of the Content Server including the JMS and Thumbnail Server, Web Application Server (D2/DA for example), Full Text Server and ADTS.

In this blog, I will use a Documentum 7.2 environment on Linux of course (except for the ADTS…) which therefore uses JBoss 7.1.1 (for the JMS and xPlore 1.5). In all our environments we also have a custom script that can be used to stop or start all components installed on the host. Therefore in this blog, I will assume that you have a similar script (let’s say that this script is named “startstop”) which includes a variable named “DOCBASES=” that contains the list of docbases/repositories installed on the local Content Server (DOCBASES=”DOCBASE1 DOCBASE2 DOCBASE3″). For the Full Text Server, this variable will be “INDEXAGENTS=” and it will contain the names of the Index Agents installed on the local FT (INDEXAGENTS=”Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3″). If you don’t have such a script or if it is set up differently, then just adapt the needed steps below. I will put this custom startstop script at the following locations: $DOCUMENTUM/scripts/startstop on the Content Server and $XPLORE_HOME/scripts/startstop on the Full Text Server.

In the steps below, I will also assume that the docbase that needs to be deactivated is “DOCBASE1″ and that we have two additional docbases installed in our environment (“DOCBASE2″ and “DOCBASE3″) that need to stay up&running. If you have some High Availability environments, then the steps below will apply to the Primary Content Server, but for Remote Content Servers you will need to adapt the names of the docbase start and shutdown scripts which are placed under $DOCUMENTUM/dba: the correct name for Remote CSs should be $DOCUMENTUM/dba/dm_shutdown_DOCBASE1_<ServiceName@RemoteCSs>.

 

1. Content Server

Ok so let’s start with the deactivation of the docbase on the Content Server. Obviously the first thing to do is to stop the docbase if it is running:

ps -ef | grep "docbase_name DOCBASE1 " | grep -v grep
$DOCUMENTUM/dba/dm_shutdown_DOCBASE1

 

Once done, and since we don’t want the docbase to be restarted inadvertently, we need to update the custom script that I mentioned above. In addition to that, we should also rename the docbase start script so that an installer won’t start the docbase either.

mv $DOCUMENTUM/dba/dm_start_DOCBASE1 $DOCUMENTUM/dba/dm_start_DOCBASE1_deactivated
vi $DOCUMENTUM/scripts/startstop
    ==> Duplicate the line starting with "DOCBASES=..."
    ==> Comment one of the two lines and remove the docbase DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        DOCBASES="DOCBASE2 DOCBASE3"
        #DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"

 

Ok so now the docbase has been stopped and can’t be started anymore so let’s start to check all the clients that were able to connect to this docbase. If you have a monitoring running on the Content Server (using the crontab for example), don’t forget to disable the monitoring too since the docbase isn’t running anymore. In the crontab, you can just comment the lines for example (using “crontab -e”). On the Java MethodServer (JMS) side, there are at least two applications you should take a look at (ServerApps and the ACS). To deactivate the docbase DOCBASE1 for these two applications, you should apply the following steps:

cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
vi ServerApps.ear/DmMethods.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

vi acs.ear/lib/configs.jar/config/acs.properties
    ==> Reorder the “repository.name.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Reorder the “repository.acsconfig.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Comment the “repository.name.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.acsconfig.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.login.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.password.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)

 

So what has been done above? In the file web.xml, there is a reference to all docbases that are configured for the applications. Therefore commenting these lines in the file simply prevents the JMS from trying to contact the docbase DOCBASE1 since it isn’t running anymore. For the ACS, the update of the file acs.properties is a little bit more complex. What I usually do in this file is reorder the properties so that the docbases that aren’t running have the biggest index. Since we have DOCBASE1, DOCBASE2 and DOCBASE3, with DOCBASE1 being the first docbase installed, it will by default have the index N°1 inside the acs.properties (e.g.: repository.name.1=DOCBASE1.DOCBASE1 // repository.name.2=DOCBASE2.DOCBASE2 // …). Reordering the properties simply allows you to comment only the highest number (3 in this case) for all properties while keeping the numbers 1 and 2 enabled.

In addition to the above, you might also have a BPM (xCP) installed, in which case you also need to apply the following step:

vi bpm.ear/bpm.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

 

Once the steps have been applied, you can restart the JMS using your preferred method. This is an example:

$DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
ps -ef | grep "MethodServer" | grep -v grep
nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &

 

After the restart of the JMS, it won’t contain any errors anymore related to connection problems to DOCBASE1. For example if you don’t update the ACS file (acs.properties), it will still try to project itself to all docbases and it will therefore fail for DOCBASE1.

The next component I wanted to describe isn’t a component that is installed by default on all Content Servers but you might have it if you need document previews: the Thumbnail Server. To deactivate the docbase DOCBASE1 inside the Thumbnail Server, it’s pretty easy too:

vi $DM_HOME/thumbsrv/conf/user.dat
    ==> Comment the 5 lines related to DOCBASE1:
        #[DOCBASE1]
        #user=dmadmin
        #local_folder=thumbnails
        #repository_folder=/System/ThumbnailServer
        #pfile.txt=/app/dctm/server/product/7.2/thumbsrv/conf/DOCBASE1/pfile.txt

sh -c "$DM_HOME/thumbsrv/container/bin/shutdown.sh"
ps -ef | grep "thumbsrv" | grep -v grep
sh -c "$DM_HOME/thumbsrv/container/bin/startup.sh"

 

If you don’t do that, the Thumbnail Server will try to contact all docbases configured in the “user.dat” file, and because of a bug in certain versions of the Thumbnail Server (see this blog for more information), your Thumbnail Server might even fail to start. Therefore commenting the lines related to DOCBASE1 inside this file is quite important.

 

2. Web Application Server

For the Web Application Server hosting your Documentum Administrator and D2/D2-Config clients, the steps are pretty simple: usually nothing or almost nothing has to be done. If you really want to be clean, then there might be a few things to do, it all depends on what you configured… On this part, I will consider that you are using non-exploded applications (which means: war files). I will put these WAR files under $WS_HOME/applications/. In case your applications are exploded (meaning your D2 is a folder and not a war file), then you don’t have to extract the files (no need to execute the jar commands). If you are using a Tomcat Application Server, then the applications will usually be exploded (folder) and will be placed under $TOMCAT_HOME/webapps/.

 – D2:

If you defined the LoadOnStartup property for DOCBASE1, then you might need to execute the following commands to extract the file, comment the lines for DOCBASE1 inside it and update the file back into the war file:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties
sed -i 's,^LoadOnStartup.DOCBASE1.\(username\|domain\)=.*,#&,' WEB-INF/classes/D2FS.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties

 

Also, if you defined which docbase should be the default one in D2 and this docbase is DOCBASE1, then you need to change the default docbase to DOCBASE2 or DOCBASE3. In my case, I will use DOCBASE2 as the new default docbase:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties
sed -i 's,^defaultRepository=.*,defaultRepository=DOCBASE2,' WEB-INF/classes/config.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties

 

Finally, if you are using Single Sign-On, you will have an SSO user. This is defined inside the d2fs-trust.properties file in recent versions of D2, while it was defined in the shiro.ini file before. Since I’m using D2 4.5, the commands would be:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties
sed -i 's,^DOCBASE1.user=.*,#&,' WEB-INF/classes/d2fs-trust.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties

 

 – D2-Config:

Usually nothing is needed. Only running docbases will be available through D2-Config.

 

 – DA:

Usually nothing is needed, unless you have specific customization for DA, in which case you probably need to take a look at the files under the “custom” folder.

 

3. Full Text Server

For the Full Text Server, the steps are also relatively easy. The only thing that needs to be done is to stop the Index Agent related to the docbase DOCBASE1 and prevent it from starting again. In our environments, since we sometimes have several docbases installed on the same Content Server and several Index Agents installed on the same Full Text Server, we need to differentiate the names of the Index Agents. We usually just add the name of the docbase at the end: Indexagent_DOCBASE1. So let’s start by stopping the Index Agent:

ps -ef | grep "Indexagent_DOCBASE1" | grep -v grep
$XPLORE_HOME/jboss7.1.1/server/stopIndexagent_DOCBASE1.sh

 

Once done, and if I use the global startstop script I mentioned earlier in this blog, then the only remaining step is preventing the Index Agent from starting again, and that can be done in the following way:

mv $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh_deactivated
vi $XPLORE_HOME/scripts/startstop
    ==> Duplicate the line starting with "INDEXAGENTS=..."
    ==> Comment one of the two lines and remove the Index Agent related to DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        INDEXAGENTS="Indexagent_DOCBASE2 Indexagent_DOCBASE3"
        #INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"

 

If you have a monitoring running on the Full Text Server for this Index Agent, don’t forget to disable it.

 

4. ADTS

The last section of this blog will talk about the ADTS (Advanced Document Transformation Services), also called the Rendition Server. The ADTS is fairly similar to all other Documentum components: first you install the different binaries and then you can configure a docbase to use/be supported by the ADTS. By doing that, the ADTS updates some configuration files that therefore need to be updated again if you want to deactivate a docbase. As you know, the ADTS runs on a Windows Server, so I won’t show you commands to be executed in this section; I will just point you to the configuration files that need to be edited and what to update inside them. In this section, I will use %ADTS_HOME% as the folder under which the ADTS has been installed. It’s usually a good idea to install the ADTS on a specific/separate drive (not the OS drive) like D:\CTS\.

So the first thing to do is to prevent the different profiles for a docbase from being loaded:

Open the file "%ADTS_HOME%\CTS\config\CTSProfileService.xml"
    ==> Comment the whole "ProfileManagerContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--ProfileManagerContext DocbaseName="DOCBASE1" ProcessExternally="false">
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightProfiles" CTSProfileName="lightWeightProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightSystemProfiles" CTSProfileName="lightWeightSystemProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\heavyWeightProfiles" CTSProfileName="heavyWeightProfile"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Profiles" CTSProfileName="lightWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/System Profiles" CTSProfileName="lightWeightSystemProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Command Line Files" CTSProfileName="heavyWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\temp_profiles" CTSProfileName="tempFileDir"/>
            <CTSServerProfile CTSProfileValue="ProfileSchema.dtd" CTSProfileName="lwProfileDTD"/>
            <CTSServerProfile CTSProfileValue="MP_PROPERTIES.dtd" CTSProfileName="hwProfileDTD"/>
            <ForClients>XCP</ForClients>
        </ProfileManagerContext-->

 

Once that is done, the queue processors need to be disabled too:

Open the file "%ADTS_HOME%\CTS\config\CTSServerService.xml"
    ==> Comment the two "QueueProcessorContext" XML tags related to DOCBASE1
    ==> In the end, you should have something like (I'm not displaying the whole XML tags since they are quite long...):
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_mediaserver"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->

 

After that, there is only one last configuration file to be updated and that’s the session manager, which is the one responsible for the errors printed during startup of the ADTS, because it defines which docbases the ADTS should try to contact, with which user/password, and how many connection retries should be performed:

Open the file "%ADTS_HOME%\CTS\config\SessionService.xml"
    ==> Comment the whole "LoginContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--LoginContext DocbaseName="DOCBASE1" Domain="" isPerformanceLogRepository="false">
            <CTSServer AttributeName="userName" AttributeValue="adtsuser"/>
            <CTSServer AttributeName="passwordFile" AttributeValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\pfile\mspassword.txt"/>
            <CTSServer AttributeName="maxConnectionRetries" AttributeValue="10"/>
        </LoginContext-->

 

Once the configuration files have been updated, simply restart the ADTS services for the changes to be applied.

 

And here we go, you should have a clean environment with one less docbase configured, without having had to remove it on all servers. As a final note, if you ever want to reactivate the docbase, simply uncomment everything that was commented above, restore the default line in the custom “startstop” scripts and rename the Documentum start scripts back to their original names (without the “_deactivated”) on the Content Server and Full Text Server.

 

 

This article Documentum – Deactivation of a docbase without uninstallation first appeared on Blog dbi services.

Service “696c6f76656d756c746974656e616e74″ has 1 instance(s).

Sat, 2017-04-08 02:53

Weird title, isn’t it? That was my reaction when I did my first ‘lsnrctl status’ in 12.2: weird service name… If you have installed 12.2 multitenant, then you have probably seen this strange service name registered in your listener. One per PDB. It is not a bug. It is an internal service used to connect to the remote PDB for features like Proxy PDB. This name is the GUID of the PDB which makes this service independent of the name or the physical location of the PDB. You can use it to connect to the PDB, but should not. It is an internal service name. But on a lab, let’s play with it.

CDB

I have two Container Databases on my system:

18:01:33 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO

CDB2 has been created without any pluggable databases (except PDB$SEED of course).

18:01:33 SQL> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO

CDB1 has one pluggable database PDB1.

PDB1 has its system files in /u01/oradata/CDB1/PDB1/ and I have a user tablespace datafile elsewhere:

18:01:33 SQL> select con_id,file_name from cdb_data_files;
CON_ID FILE_NAME
------ -------------------------------------
1 /u01/oradata/CDB1/users01.dbf
1 /u01/oradata/CDB1/undotbs01.dbf
1 /u01/oradata/CDB1/system01.dbf
1 /u01/oradata/CDB1/sysaux01.dbf
4 /u01/oradata/CDB1/PDB1/undotbs01.dbf
4 /u01/oradata/CDB1/PDB1/sysaux01.dbf
4 /u01/oradata/CDB1/PDB1/system01.dbf
4 /u01/oradata/CDB1/PDB1/USERS.dbf
4 /var/tmp/PDB1USERS2.dbf

Both are registered to the same local listener:

SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 07-APR-2017 18:01:33
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 07-APR-2017 07:53:06
Uptime 0 days 10 hr. 8 min. 27 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "4aa269fa927779f0e053684ea8c0c27f" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Each container database declares its db_unique_name as a service: CDB1 and CDB2, with an XDB service for each: CDB1XDB and CDB2XDB, and each pluggable database also has its own service: PDB1 here. This is what we had in 12.1, but in 12.2 there is one more service with a strange hexadecimal name: 4aa269fa927779f0e053684ea8c0c27f

Connect to PDB without a service name?

Want to know more about it? Let’s try to connect to it:

SQL> connect sys/oracle@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.78.104)(PORT=1521))) as sysdba
Connected.
SQL> select sys_context('userenv','cdb_name'), sys_context('userenv','con_name'), sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','CDB_NAME') SYS_CONTEXT('USERENV','CON_NAME') SYS_CONTEXT('USERENV','SERVICE_NAME')
--------------------------------- --------------------------------- -------------------------------------
CDB1 PDB1 SYS$USERS

With this service, I can connect to PDB1, but the service name I used in the connection string is not a real service:

SQL> select name from v$services;
 
NAME
----------------------------------------------------------------
pdb1
 
SQL> show parameter service
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string CDB1

The documentation says that SYS$USERS is the default database service for user sessions that are not associated with services, so here I’m connected to the PDB without a service.
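
If you need a dedicated, supported service name to connect to the PDB, the proper way is to define your own service inside the PDB rather than relying on this internal one. A quick sketch (the service name PDB1_APP is just an example):

SQL> alter session set container=PDB1;
SQL> exec dbms_service.create_service(service_name=>'PDB1_APP',network_name=>'PDB1_APP');
SQL> exec dbms_service.start_service('PDB1_APP');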

GUID

The internal service name is the GUID of the PDB, which identifies the container even after unplug/plug.

SQL> select pdb_id,pdb_name,con_uid,guid from dba_pdbs;
 
PDB_ID PDB_NAME CON_UID GUID
------ -------- ------- ----
4 PDB1 2763763322 4AA269FA927779F0E053684EA8C0C27F

Proxy PDB

This internal service has been introduced in 12cR2 for the Proxy PDB feature: access to a PDB through another one, so that you don’t have to change the connection string when you migrate the PDB to another server.

I’ll create a Proxy PDB in CDB2 to connect to PDB1 which is in CDB1. This is simple: create a database link for the creation of the Proxy PDB which I call PDB1PX1:

18:01:33 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
 
18:01:33 SQL> create database link CDB1 connect to system identified by oracle using '//localhost/CDB1';
Database link CDB1 created.
 
18:01:38 SQL> create pluggable database PDB1PX1 as proxy from PDB1@CDB1
file_name_convert=('/u01/oradata/CDB1/PDB1','/u01/oradata/CDB1/PDB1PX1');
 
Pluggable database PDB1PX1 created.
 
18:02:14 SQL> drop database link CDB1;
Database link CDB1 dropped.

The Proxy PDB clones the system tablespaces, and this is why I had to give a file_name_convert. Note that the user tablespace datafile is not cloned, so I don’t need to convert the ‘/var/tmp/PDB1USERS2.dbf’. The dblink is not needed anymore once the Proxy PDB is created, as it is used only for the clone of system tablespaces. The PDB is currently in mount.

18:02:14 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:02:14 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1PX1 MOUNTED

The system tablespaces are there (I’m in 12.2 with local undo, which is required for the Proxy PDB feature; see the quick check after the listing)

18:02:14 SQL> select con_id,file_name from cdb_data_files;
 
CON_ID FILE_NAME
------ ---------
1 /u01/oradata/CDB2/system01.dbf
1 /u01/oradata/CDB2/sysaux01.dbf
1 /u01/oradata/CDB2/users01.dbf
1 /u01/oradata/CDB2/undotbs01.dbf
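
Local undo is a prerequisite for the Proxy PDB feature; a quick way to verify it is to query DATABASE_PROPERTIES:

SQL> select property_name, property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';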

I open the PDB

18:02:19 SQL> alter pluggable database PDB1PX1 open;
Pluggable database PDB1PX1 altered.

connect

I have now 3 ways to connect to PDB1: with the PDB1 service, with the internal service, and through the Proxy PDB service.
I’ve tested the 3 ways:


18:02:45 SQL> connect demo/demo@//localhost/PDB1
18:02:56 SQL> connect demo/demo@//localhost/PDB1PX1
18:03:06 SQL> connect demo/demo@//localhost/4aa269fa927779f0e053684ea8c0c27f

and I’ve inserted each time into a DEMO table the information about my connection:
SQL> insert into DEMO select '&_connect_identifier' "connect identifier", current_timestamp "timestamp", sys_context('userenv','cdb_name') "CDB name", sys_context('userenv','con_name') "con name" from dual;

Here is the result:

connect identifier timestamp CDB name container name
------------------ --------- -------- --------------
//localhost/PDB1 07-APR-17 06.02.50.977839000 PM CDB1 PDB1
//localhost/PDB1PX1 07-APR-17 06.03.01.492946000 PM CDB1 PDB1
//localhost/4aa269fa927779f0e053684ea8c0c27f 07-APR-17 06.03.11.814039000 PM CDB1 PDB1

We are connected to the same database each time. Since, for this test, I’m on the same server with the same listener, I can check what is logged in the listener log.

Here are the $ORACLE_BASE/diag/tnslsnr/$(hostname)/listener/alert/log.xml entries related to my connections.

//localhost/PDB1

When connecting directly to PDB1 the connection is simple:


<msg time='2017-04-07T18:02:45.644+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:02:45 * (CONNECT_DATA=(SERVICE_NAME=PDB1)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27523)) * establish * PDB1 * 0
</txt>
</msg>

I am connecting with SQLcl which is java: (PROGRAM=java)

//localhost/PDB1PX1

When connecting through the Proxy PDB I see the connection to the proxy PDB1PX1:


<msg time='2017-04-07T18:02:56.058+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:02:56 * (CONNECT_DATA=(SERVICE_NAME=PDB1PX1)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27524)) * establish * PDB1PX1 * 0
</txt>
</msg>

This is the java connection. But I can also see the connection to the remote PDB1 from the Proxy PDB


<msg time='2017-04-07T18:03:01.375+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:03:01 * (CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f)(CID=(PROGRAM=oracle)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.78.104)(PORT=16787)) * establish * 4aa269fa927779f0e053684ea8c0c27f * 0
</txt>
</msg>

Here the program is (PROGRAM=oracle), which is a CDB2 instance process connecting to the remote CDB1 through the internal service.

//localhost/4aa269fa927779f0e053684ea8c0c27f

When I connect to the internal service, I see the same connection to PDB1’s GUID but from (PROGRAM=java) directly


<msg time='2017-04-07T18:03:06.671+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:03:06 * (CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27526)) * establish * 4aa269fa927779f0e053684ea8c0c27f * 0
</txt>
</msg>

One more…

So each user PDB, in addition to the PDB name and the additional services you have defined, registers an extra internal service, whether the PDB is opened or closed. And the fun part is that the Proxy PDB also registers this additional service. Here is my listener status:


Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "4aa269fa927779f0e053684ea8c0c27f" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "4c96bda23b8e41fae053684ea8c0918b" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1px1" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

This “4c96bda23b8e41fae053684ea8c0918b” is the GUID of the Proxy PDB.

SQL> select sys_context('userenv','cdb_name'), sys_context('userenv','con_name'), sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','CDB_NAME')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','SERVICE_NAME')
--------------------------------------------------------------------------------
CDB1
PDB1
SYS$USERS

So that’s a fourth way to connect to PDB1: through the internal service of the Proxy PDB.

Then you can immediately imagine what I tried…

ORA-65280

Because the internal service name is used to connect through the Proxy PDB, can I create a proxy for the proxy?

18:03:32 SQL> create pluggable database PDB1PX2 as proxy from PDB1PX1@CDB2
2 file_name_convert=('/u01/oradata/CDB1/PDB1/PX1','/u01/oradata/CDB1/PDB1PX2');
 
Error starting at line : 76 File @ /media/sf_share/122/blogs/proxypdb.sql
In command -
create pluggable database PDB1PX2 as proxy from PDB1PX1@CDB2
file_name_convert=('/u01/oradata/CDB1/PDB1/PX1','/u01/oradata/CDB1/PDB1PX2')
Error report -
ORA-65280: The referenced pluggable database is a proxy pluggable database.

The answer is no. You cannot nest Proxy PDBs.

So what?

Don’t panic when looking at the services registered in the listener. Those hexadecimal service names are expected in 12.2, with one per user PDB. You see them, but have no reason to use them directly. You will use them indirectly when creating a Proxy PDB, which makes the location where users connect independent of the physical location of the PDB. Very interesting for migrations, because the client configuration is independent of the migration (think hybrid cloud). You can use this feature even without the multitenant option. Want to see all the multitenant architecture options available without the option? Look at the ITOUG Tech day agenda.

 

This article Service “696c6f76656d756c746974656e616e74″ has 1 instance(s). appeared first on Blog dbi services.

Trace files segmented in multiple parts as a workaround for bug 23300142

Fri, 2017-04-07 12:27

Today I visited a customer who had deleted a Data Guard configuration (i.e. a temporary Data Guard setup through the broker was removed). The LOG_ARCHIVE_DEST_STATE_2 on the primary database was set to DEFER temporarily. That caused trace files named *tt*.trc to grow huge (gigabytes after a couple of days). Analysis showed that this was caused by bug 23300142 in 12.1.0.2. See My Oracle Support Note

Bug 23300142 - TT background process trace file message: async ignored current log: kcclenal clear thread open (Doc ID 23300142.8)

for details.
Unfortunately the bug does not have a workaround.
Because the affected development databases (which were now normal single instances without Data Guard) could not be restarted, I searched for a temporary workaround to stop the trace files from growing further. Limiting the trace file size on the database with

alter system set max_dump_file_size='100M';

did not always limit the file size. Here is an example of a huge trace file (over 5GB):


$ find . -name "*tt*.trc" -ls | tr -s " " | cut -d " " -f7-11 | sort -n
...
5437814195 Apr 7 10:46 ./xxxxxx_site1/XXXXXX/trace/XXXXXX_tt00_28304.trc

However, what came in handy was the uts-trace-segmentation feature of 12c. See Jonathan Lewis’ blog here:

https://jonathanlewis.wordpress.com/2016/01/26/trace-file-size

I.e. I left all DBs on max_dump_file_size=unlimited and set


SQL> alter system set "_uts_first_segment_size" = 52428800 scope=memory;
SQL> alter system set "_uts_trace_segment_size" = 52428800 scope=memory;
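
To verify what the instance actually picked up, the hidden parameters can be checked with the usual X$ query (a sketch, to be run as SYS on a test system only):

SQL> select i.ksppinm name, v.ksppstvl value
  2  from x$ksppi i, x$ksppcv v
  3  where i.indx = v.indx
  4  and i.ksppinm like '\_uts%' escape '\';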

Unfortunately, setting the limit for the TT background process session alone does not work:


SQL> exec dbms_system.set_int_param_in_session(sid => 199, serial# => 44511, parnam => '_uts_trace_segment_size', intval => 52428800);
BEGIN dbms_system.set_int_param_in_session(sid => 199, serial# => 44511, parnam => '_uts_trace_segment_size', intval => 52428800); END;
 
*
ERROR at line 1:
ORA-44737: Parameter _uts_trace_segment_size did not exist.
ORA-06512: at "SYS.DBMS_SYSTEM", line 117
ORA-06512: at line 1

With the default setting of “_uts_trace_segments” (maximum number of trace segments) = 5, I could limit the maximum trace size of one DB to 250MB (50MB * 5). Below you see only 4 files, because of 2 earlier tests that split the trace file:


$ ls -ltr *_tt00_28304*.trc
-rw-r----- 1 oracle dba 52428964 Apr 7 14:14 XXXXXX_tt00_28304_3.trc
-rw-r----- 1 oracle dba 52428925 Apr 7 16:07 XXXXXX_tt00_28304_4.trc
-rw-r----- 1 oracle dba 52428968 Apr 7 17:12 XXXXXX_tt00_28304_5.trc
-rw-r----- 1 oracle dba 43887950 Apr 7 18:50 XXXXXX_tt00_28304.trc

The feature of segmented trace-files may help a lot in situations like bug 23300142.
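
Once segmented, the old trace segments can be purged like any other trace file, for example with adrci (a sketch; adjust the homepath and the age in minutes to your environment):

adrci> set homepath diag/rdbms/xxxxxx_site1/XXXXXX
adrci> purge -age 1440 -type trace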

REMARK: Do not use underscore parameters in production environments without agreement from Oracle Support.

 

This article Trace files segmented in multiple parts as a workaround for bug 23300142 appeared first on Blog dbi services.

12cR2 DML monitoring and Statistics Advisor

Thu, 2017-04-06 15:40

Monitoring DML to get an idea of the activity on our tables is not new. The number of inserts/deletes/updates/truncates since the last stats gathering is tracked automatically. The statistics gathering job uses it to list and prioritize tables that need fresh statistics. This works for slowly changing tables. In 12.2 we have the Statistics Advisor that goes further, with a rule that detects volatile tables:

SQL> select * from V$STATS_ADVISOR_RULES where rule_id=14;
 
RULE_ID NAME RULE_TYPE DESCRIPTION CON_ID
------- ---- --------- ----------- ------
14 LockVolatileTable OBJECT Statistics for objects with volatile data should be locked 0

But to detect volatile tables, you need to track DML frequency with a finer grain. Let’s investigate what is new here in 12.2.

Statistics Advisor tracing

DBMS_STATS has a trace mode, enabled as a global preference. It is not documented, but it works with powers of two. 12.1.0.2 introduced 262144 to trace system statistics gathering, so let’s try the next one: 524288.

SQL> exec dbms_stats.set_global_prefs('TRACE',0+524288)
PL/SQL procedure successfully completed.

After a while, I grepped my trace directory for DBMS_STATS and found the MMON slave trace (ORCLA_m001_30694.trc here):

*** 2017-04-06T14:10:11.979283+02:00
*** SESSION ID:(81.2340) 2017-04-06T14:10:11.979302+02:00
*** CLIENT ID:() 2017-04-06T14:10:11.979306+02:00
*** SERVICE NAME:(SYS$BACKGROUND) 2017-04-06T14:10:11.979309+02:00
*** MODULE NAME:(MMON_SLAVE) 2017-04-06T14:10:11.979313+02:00
*** ACTION NAME:(Flush KSXM hash table action) 2017-04-06T14:10:11.979317+02:00
*** CLIENT DRIVER:() 2017-04-06T14:10:11.979320+02:00
 
...
 
DBMS_STATS: compute_volatile_flag: objn=74843, flag=0, new_flag=0, inserts_new=619, updates_new=0, deletes_new=0, inserts_old=619, updates_old=0, deletes_old=0, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=0
DBMS_STATS: compute_volatile_flag: objn=74862, flag=0, new_flag=0, inserts_new=4393, updates_new=0, deletes_new=0, inserts_old=4393, updates_old=0, deletes_old=0, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=0
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=4861477, updates_new=584000, deletes_new=13475192, inserts_old=3681477, updates_old=466000, deletes_old=12885192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1

Those entries appear every hour. Obviously, they look at some tables (by their object_id) and compute a new flag from an existing flag and from statistics about new and old DML (inserts, updates, deletes). There is a mention of row count and stale percentage. Obviously, the volatility of tables is computed every hour (gather=NO_GATHER) or when we gather statistics (gather=GATHER). This goes beyond the DML monitoring from previous releases, but is probably based on it.

Testing some DML

SQL> delete from DEMO;
10000 rows deleted.
 
SQL> insert into DEMO select rownum from xmltable('1 to 10000');
10000 rows created.
 
SQL> commit;
Commit complete.
 
SQL> select count(*) numrows from DEMO;
NUMROWS
----------
10000
 
SQL> update demo set n=n+1 where rownum <= 2000;
 
2000 rows updated.
 
SQL> insert into DEMO select rownum from xmltable('1 to 10000');
 
10000 rows created.

I deleted 10000 rows and inserted 10000, with a commit at the end. Then I updated 2000 rows and inserted 10000 again, without committing.

x$ksxmme

DML monitoring is done in memory. In order to see the changes in DBA_TAB_MODIFICATIONS, we need to flush it. But this in-memory information is visible in an X$ fixed view:

SQL> select * from X$KSXMME where objn=&object_id;
old 1: select * from X$KSXMME where objn=&object_id
new 1: select * from X$KSXMME where objn= 74867
 
ADDR INDX INST_ID CON_ID CHUNKN SLOTN OBJN INS UPD DEL DROPSEG CURROWS PAROBJN LASTUSED FLAGS
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
00007F526E0B81F0 0 1 0 64 256 74867 20000 2000 10000 0 2350000 0 1491467123 128

Here are my 10000 deletes + 10000 inserts + 2000 updates + 10000 inserts. Of course, the uncommitted ones are included, because DML tracking does not keep the numbers per transaction in order to adjust later what was committed or rolled back.

The proof is that when I rollback, the numbers do not change:

SQL> rollback;
Rollback complete.
 
SQL> select * from X$KSXMME where objn=&object_id;
old 1: select * from X$KSXMME where objn=&object_id
new 1: select * from X$KSXMME where objn= 74867
 
ADDR INDX INST_ID CON_ID CHUNKN SLOTN OBJN INS UPD DEL DROPSEG CURROWS PAROBJN LASTUSED FLAGS
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
00007F526DDF47F8 0 1 0 64 256 74867 20000 2000 10000 0 2350000 0 1491467123 128

Yes, there is an estimation of the current number of rows here, in real-time. This is used to compare the changes with the total number, but you can use it to see the progress of a big transaction, giving a view of uncommitted changes.
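
For example, a sketch like the following (run as SYS, 74867 being the object_id of my DEMO table) compares the live row estimate with the row count from the last statistics gathering:

SQL> select m.currows, t.num_rows
  2  from x$ksxmme m, dba_objects o, dba_tables t
  3  where m.objn = o.object_id
  4  and o.owner = t.owner and o.object_name = t.table_name
  5  and m.objn = 74867;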

sys.mon_mods_all$

The table sys.mon_mods_all$ is what is behind DBA_TAB_MODIFICATIONS (not exactly, but that will be for another blog post) and you have to flush what’s in memory to see the latest changes there:

SQL> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
 
SQL> select * from sys.mon_mods_all$ where obj#=&object_id;
old 1: select * from sys.mon_mods_all$ where obj#=&object_id
new 1: select * from sys.mon_mods_all$ where obj#= 74867
 
OBJ# INSERTS UPDATES DELETES TIMESTAMP FLAGS DROP_SEGMENTS
---------- ---------- ---------- ---------- ------------------ ---------- -------------
74867 5581477 656000 13835192 06-APR 15:10:53 1 0

The flag 1 means that the table has been truncated since the latest stats gathering.
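
For comparison, the documented view on top of it shows the truncate as a column of its own (a quick check on my lab table):

SQL> select inserts, updates, deletes, truncated, timestamp
  2  from dba_tab_modifications
  3  where table_name='DEMO';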

This is what we already know from previous releases. It has nothing to do with the trace we see every hour in the MMON slave.

sys.optstat_snapshot$

What happens every hour is that a snapshot of sys.mon_mods_all$ is stored in sys.optstat_snapshot$:

SQL> select * from sys.optstat_snapshot$ where obj#=&object_id order by timestamp;
old 1: select * from sys.optstat_snapshot$ where obj#=&object_id order by timestamp
new 1: select * from sys.optstat_snapshot$ where obj#= 74867 order by timestamp
 
OBJ# INSERTS UPDATES DELETES FLAGS TIMESTAMP
---------- ---------- ---------- ---------- ---------- ------------------
74867 999 0 0 32 05-APR-17 17:27:01
74867 1997 0 0 32 05-APR-17 17:33:25
74867 1997 0 0 32 05-APR-17 17:33:31
74867 1997 0 0 32 05-APR-17 17:33:32
74867 80878 0 160 0 05-APR-17 18:59:37
74867 90863 0 210 0 05-APR-17 20:53:07
74867 10597135 0 410 0 05-APR-17 21:53:13
74867 10598134 0 410 32 05-APR-17 22:02:38
74867 38861 0 10603745 1 06-APR-17 08:17:58
74867 38861 0 10603745 1 06-APR-17 09:18:04
74867 581477 124000 11175192 1 06-APR-17 10:11:27
74867 1321477 230000 11705192 1 06-APR-17 11:09:50
74867 2481477 346000 12285192 1 06-APR-17 12:09:56
74867 3681477 466000 12885192 1 06-APR-17 01:10:04
74867 4861477 584000 13475192 1 06-APR-17 02:10:11
74867 5561477 654000 13825192 1 06-APR-17 03:10:19

You see snapshots every hour, the latest being 03:10, 02:10, 01:10, 12:09, 11:09, …
You see additional snapshots at each statistics gathering. I ran dbms_stats.gather_table_stats several times at 17:27 and 17:33 the day before. Those snapshots are flagged 32.
The statistics were gathered again at 22:02 (the auto job) and I truncated the table after that, which is why the flag is 1.

dbms_stats_advisor.compute_volatile_flag

My guess is that there should be a flag for volatile tables here, because I’ve seen a trace for compute_volatile_flag in MMON trace, so I’ve enabled sql_trace for the MMON slave, and here is the query which takes the snapshot:

insert /* KSXM:TAKE_SNPSHOT */ into sys.optstat_snapshot$ (obj#, inserts, updates, deletes, timestamp, flags) (select m.obj#, m.inserts, m.updates, m.deletes, systimestamp, dbms_stats_advisor.compute_volatile_flag( m.obj#, m.flags, :flags, m.inserts, m.updates, m.deletes, s.inserts, s.updates, s.deletes, null, nvl(to_number(p.valchar), :global_stale_pcnt), s.gather) flags from sys.mon_mods_all$ m, (select si.obj#, max(si.inserts) inserts, max(si.updates) updates, max(si.deletes) deletes, decode(bitand(max(si.flags), :gather_flag), 0, 'NO_GATHER', 'GATHER') gather, max(si.timestamp) timestamp from sys.optstat_snapshot$ si, (select obj#, max(timestamp) ts from sys.optstat_snapshot$ group by obj#) sm where si.obj# = sm.obj# and si.timestamp = sm.ts group by si.obj#) s, sys.optstat_user_prefs$ p where m.obj# = s.obj#(+) and m.obj# = p.obj#(+) and pname(+) = 'STALE_PERCENT' and dbms_stats_advisor.check_mmon_policy_violation(rownum, 6, 2) = 0)

It reads the current values (from sys.mon_mods_all$) and the last values (from sys.optstat_snapshot$), reads the stale percentage preference, and calls the dbms_stats_advisor.compute_volatile_flag function, which updates the flag with the one passed as :flags, probably adding the value 64 (see below) when the table is volatile (probably when the sum of DML is over the row count + stale percentage). The function probably behaves differently when the snapshot comes from statistics gathering (‘GATHER’) or from DML monitoring (‘NO_GATHER’), because the number of rows is absolute in one case and relative to the previous snapshot in the other.
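
For reference, the stale percentage it compares with is simply the STALE_PERCENT preference, global or per table (a quick check on my lab table):

SQL> select dbms_stats.get_prefs('STALE_PERCENT',user,'DEMO') stale_pcnt from dual;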

From the trace of bind variables, or simply from the dbms_stats trace, I can see all values:
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=5701477, updates_new=668000, deletes_new=13895192, inserts_old=5701477, updates_old=668000, deletes_old=13895192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=4861477, updates_new=584000, deletes_new=13475192, inserts_old=3681477, updates_old=466000, deletes_old=12885192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=5561477, updates_new=654000, deletes_new=13825192, inserts_old=4861477, updates_old=584000, deletes_old=13475192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1

The input flag is 1 and the output flag is 1. And I think that, whatever the amount of DML we have, this is because new_flag=0.

This explains why I was not able to get snapshots flagged as volatile, even when changing a lot of rows. So how can the Statistics Advisor detect my volatile table?

Statistics Advisor

I’ve traced the Statistics Advisor:

set long 100000 longc 10000
variable t varchar2(30)
variable e varchar2(30)
variable r clob
exec :t:= DBMS_STATS.CREATE_ADVISOR_TASK('my_task');
exec :e:= DBMS_STATS.EXECUTE_ADVISOR_TASK('my_task');
exec :r:= DBMS_STATS.REPORT_ADVISOR_TASK('my_task');
print r

No ‘LockVolatileTable’ rule has raised a recommendation, but I’ve seen a call to the DBMS_STATS.CHECK_VOLATILE function with an object_id as parameter.

dbms_stats_internal.check_volatile

In order to understand what the criteria are, I ran the function (with sql_trace) on my table:

SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
------------------------------------------
F

I suppose ‘F’ is false, which explains why my table was not considered as volatile.

Here is the trace with binds:

PARSING IN CURSOR #140478915921360 len=191 dep=1 uid=0 oct=3 lid=0 tim=99947151021 hv=976524548 ad='739cb468' sqlid='1r3ujfwx39584'
SELECT SUM(CASE WHEN ISVOLATILE > 0 THEN 1 ELSE 0 END) FROM (SELECT OBJ#, BITAND(FLAGS, :B2 ) ISVOLATILE FROM OPTSTAT_SNAPSHOT$ WHERE OBJ# = :B1 ORDER BY TIMESTAMP DESC) O WHERE ROWNUM < :B3
END OF STMT
...
BINDS #140478915921360:
 
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=72 off=0
kxsbbbfp=7fc3cbe1c158 bln=22 avl=02 flg=05
value=64
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=7fc3cbe1c170 bln=22 avl=04 flg=01
value=74867
Bind#2
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=7fc3cbe1c188 bln=22 avl=02 flg=01
value=24

So, here is what the algorithm looks like:

  1. sys.optstat_snapshot$ is read for the latest 24 snapshots (remember that we have snapshots every hour + at each statistics gathering)
  2. ‘ISVOLATILE’ is 1 when the flags column of the snapshot has bit 64 set. This is how I guessed that snapshots should be flagged with 64 by compute_volatile_flag.
  3. And finally, the number of ‘ISVOLATILE’ snapshots is summed.

So, it seems that the Statistics Advisor will raise a recommendation when the table has been flagged as volatile multiple times over the last 24 hours. How many times? Let’s guess:

SQL> insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 12');
old 1: insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 12')
new 1: insert into sys.optstat_snapshot$ select 74867,0,0,0,64,sysdate from xmltable('1 to 12')
 
12 rows created.
 
SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
-----------------------------------------
F
 
SQL> rollback;
 
Rollback complete.

I called the function after inserting various numbers of rows with flag=64 into sys.optstat_snapshot$, and with up to 12 snapshots the table is still not considered as volatile.
Please remember that this is a lab; we are not supposed to update the internal dictionary tables ourselves.

Now inserting one more:

SQL> insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 13');
old 1: insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 13')
new 1: insert into sys.optstat_snapshot$ select 74867,0,0,0,64,sysdate from xmltable('1 to 13')
 
13 rows created.
 
SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
-----------------------------------------
T
 
SQL> rollback;
 
Rollback complete.

Good, I have a ‘T’ here for ‘true’. I conclude that the Statistics Advisor recommends locking the stats on tables when more than half of the last 24 snapshots (roughly the last 24 hours) have encountered more than STALE_PERCENT modifications.
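
The action behind rule 14 is then what you would do manually anyway once a table is identified as volatile (a sketch on my lab table):

SQL> exec dbms_stats.lock_table_stats(ownname=>user, tabname=>'DEMO');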

So what?

My table was not considered as volatile. None of the snapshots have been flagged as volatile. I’m quite sure that the amount of DML is sufficient, so I suppose that this is disabled by default, and I don’t know how to enable it. What I want to see is compute_volatile_flag called with new_flag=64, so that snapshots are flagged when a large percentage of rows has been modified, and so that enough snapshots get flagged to be counted by the check_volatile function.
Even if it is enabled, I think that there are more cases where tables should have their statistics locked. Even if a table is empty for only 5 minutes per day, we must be sure that the statistics are not gathered at that time. And looking at the Statistics Advisor thresholds, this case is far from being detected.
Final thought here: do you realize that you buy expensive software to detect the changes happening on your tables, guess how the tables are updated, and recommend (and even implement) a general best practice? Does it mean that, today, we put applications in production with no idea of what they do? Aren’t we supposed to design the application, document which tables are volatile and when they are loaded in bulk, and decide when to gather stats and when to lock them?

 

This article 12cR2 DML monitoring and Statistics Advisor appeared first on Blog dbi services.

OUD – Oracle Unified Directory 11.1.2.3, Oracle generates more and more LDAP lookups with every release

Thu, 2017-04-06 04:50

After installing OUD some time ago, I was doing some tests to see how it performs, and as long as I do LDAP searches on the command line it looks very good. For the record, I am running Unified Directory 11.1.2.3.170117 (latest PSU) and I use the OUD only for TNS resolving, nothing else. However, Oracle clients do not connect with “ldapsearch”; they use “sqlplus” and the TNS name is resolved automatically in the background.

I have the following ldap.ora and sqlnet.ora. Very simple, nothing special.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] cat ldap.ora
DIRECTORY_SERVERS= (dbidg01:1389)
DEFAULT_ADMIN_CONTEXT = "dc=dbi,dc=com"
DIRECTORY_SERVER_TYPE = OID

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] cat sqlnet.ora
NAMES.DIRECTORY_PATH = (TNSNAMES,LDAP,EZCONNECT)

Here is a little quiz: how many LDAP search requests do you expect when you connect to a 12.2 database with the following command?

sqlplus system/manager@dbit122_ldap

Only one, right? Oracle looks up the TNS name dbit122_ldap in the OUD and retrieves the connect string. As soon as Oracle has the connect details, the OUD does not play any role anymore. If you do an ldapsearch from the 12.2 Oracle Home, this is exactly the case.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] which ldapsearch
/u01/app/oracle/product/12.2.0/dbhome_1/bin/ldapsearch
oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] ldapsearch -v -h dbidg01 -p 1389 -b "cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" -s base "(objectclass=*)" "objectclass,orclNetDescString,orclNetDescName,orclVersion"
ldap_open( dbidg01, 1389 )
filter pattern: (objectclass=*)
returning: objectclass,orclNetDescString,orclNetDescName,orclVersion
filter is: ((objectclass=*))
cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com
1 matches

In the OUD access log, you can see it clearly: one connect, the bind, the search request and finally the disconnect. Exactly how it should be, and the etime is 1 millisecond. That’s the elapsed time to serve the search request, which is very fast.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:10:46:49 +0200] CONNECT conn=877 from=192.168.56.203:21971 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:10:46:49 +0200] BIND REQ conn=877 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:10:46:49 +0200] BIND RES conn=877 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:10:46:49 +0200] SEARCH REQ conn=877 op=1 msgID=2 base="cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:10:46:49 +0200] SEARCH RES conn=877 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:10:46:49 +0200] UNBIND REQ conn=877 op=2 msgID=3
[06/Apr/2017:10:46:49 +0200] DISCONNECT conn=877 reason="Client Disconnect"

OK. Let’s do the first test with Oracle 10.2.0.5. I know it is not supported; however, regarding LDAP searches it is a version where everything is OK. My test is very simple: just a sqlplus connection and then an exit. Nothing else.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT102] sqlplus -V

SQL*Plus: Release 10.2.0.5.0 - Production

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT102] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 10.2.0.5.0 - Production on Thu Apr 6 11:00:02 2017

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

In the OUD access log I see, as expected, only one search request.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:01:18 +0200] CONNECT conn=879 from=192.168.56.203:21974 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:01:18 +0200] BIND REQ conn=879 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:01:18 +0200] BIND RES conn=879 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:01:18 +0200] SEARCH REQ conn=879 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:01:18 +0200] SEARCH RES conn=879 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:01:18 +0200] UNBIND REQ conn=879 op=2 msgID=3
[06/Apr/2017:11:01:18 +0200] DISCONNECT conn=879 reason="Client Disconnect"

Let’s do the same now with 11.2.0.4, this time with a fully supported version. Yes, it still is. :-)

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT112] sqlplus -V

SQL*Plus: Release 11.2.0.4.0 Production

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT112] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 11.2.0.4.0 Production on Thu Apr 6 11:03:17 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Wowwwww … now I already see two search requests on the OUD. To be honest, I had not expected that. One should be sufficient from my point of view.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:03:43 +0200] CONNECT conn=882 from=192.168.56.203:21979 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:03:43 +0200] BIND REQ conn=882 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:03:43 +0200] BIND RES conn=882 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:03:43 +0200] SEARCH REQ conn=882 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:03:43 +0200] SEARCH RES conn=882 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:03:43 +0200] UNBIND REQ conn=882 op=2 msgID=3
[06/Apr/2017:11:03:43 +0200] DISCONNECT conn=882 reason="Client Disconnect"
[06/Apr/2017:11:03:43 +0200] CONNECT conn=883 from=192.168.56.203:21980 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:03:43 +0200] BIND REQ conn=883 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:03:43 +0200] BIND RES conn=883 op=0 msgID=1 result=0 authDN="" etime=1
[06/Apr/2017:11:03:43 +0200] SEARCH REQ conn=883 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:03:43 +0200] SEARCH RES conn=883 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:03:43 +0200] UNBIND REQ conn=883 op=2 msgID=3
[06/Apr/2017:11:03:43 +0200] DISCONNECT conn=883 reason="Client Disconnect"

But just when you think it can’t get worse, do the same simple test with a 12.1.0.2 Oracle client.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT121] sqlplus -V

SQL*Plus: Release 12.1.0.2.0 Production

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT121] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 12.1.0.2.0 Production on Thu Apr 6 11:06:18 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Apr 06 2017 11:03:43 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Incredible: it issues three LDAP search requests against the OUD for a simple sqlplus connection.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:06:41 +0200] CONNECT conn=887 from=192.168.56.203:21986 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:06:41 +0200] BIND REQ conn=887 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:06:41 +0200] BIND RES conn=887 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:06:41 +0200] SEARCH REQ conn=887 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:06:41 +0200] SEARCH RES conn=887 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:06:41 +0200] UNBIND REQ conn=887 op=2 msgID=3
[06/Apr/2017:11:06:41 +0200] DISCONNECT conn=887 reason="Client Disconnect"
[06/Apr/2017:11:06:41 +0200] CONNECT conn=888 from=192.168.56.203:21987 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:06:41 +0200] BIND REQ conn=888 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:06:41 +0200] BIND RES conn=888 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:06:41 +0200] SEARCH REQ conn=888 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:06:41 +0200] SEARCH RES conn=888 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:06:41 +0200] UNBIND REQ conn=888 op=2 msgID=3
[06/Apr/2017:11:06:41 +0200] DISCONNECT conn=888 reason="Client Disconnect"
[06/Apr/2017:11:06:41 +0200] CONNECT conn=889 from=192.168.56.203:21988 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:06:41 +0200] BIND REQ conn=889 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:06:41 +0200] BIND RES conn=889 op=0 msgID=1 result=0 authDN="" etime=1
[06/Apr/2017:11:06:41 +0200] SEARCH REQ conn=889 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:06:41 +0200] SEARCH RES conn=889 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:06:41 +0200] UNBIND REQ conn=889 op=2 msgID=3
[06/Apr/2017:11:06:41 +0200] DISCONNECT conn=889 reason="Client Disconnect"

The last test is with a 12cR2 client. Will it now increase to 4?

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 12.2.0.1.0 Production on Thu Apr 6 11:09:08 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Thu Apr 06 2017 11:06:41 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

No, it did not increase to 4. But with 12cR2 you will see, just as with 12cR1, 3 search requests against the OUD.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:09:07 +0200] CONNECT conn=890 from=192.168.56.203:21990 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:09:07 +0200] BIND REQ conn=890 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:09:07 +0200] BIND RES conn=890 op=0 msgID=1 result=0 authDN="" etime=1
[06/Apr/2017:11:09:07 +0200] SEARCH REQ conn=890 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:09:07 +0200] SEARCH RES conn=890 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:09:07 +0200] UNBIND REQ conn=890 op=2 msgID=3
[06/Apr/2017:11:09:07 +0200] DISCONNECT conn=890 reason="Client Disconnect"
[06/Apr/2017:11:09:07 +0200] CONNECT conn=891 from=192.168.56.203:21991 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:09:07 +0200] BIND REQ conn=891 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:09:07 +0200] BIND RES conn=891 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:09:07 +0200] SEARCH REQ conn=891 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:09:07 +0200] SEARCH RES conn=891 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:09:07 +0200] UNBIND REQ conn=891 op=2 msgID=3
[06/Apr/2017:11:09:07 +0200] DISCONNECT conn=891 reason="Client Disconnect"
[06/Apr/2017:11:09:07 +0200] CONNECT conn=892 from=192.168.56.203:21992 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:09:07 +0200] BIND REQ conn=892 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:09:07 +0200] BIND RES conn=892 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:09:07 +0200] SEARCH REQ conn=892 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:09:07 +0200] SEARCH RES conn=892 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:09:07 +0200] UNBIND REQ conn=892 op=2 msgID=3
[06/Apr/2017:11:09:07 +0200] DISCONNECT conn=892 reason="Client Disconnect"

So what is the reason for this high increase in LDAP searches? Instead of 1 request, we get 3 with 12cR1 and 12cR2, and 2 with 11gR2. Compared with an Oracle 10gR2 client, two out of three of those requests are superfluous. That is enormous from my point of view: quite a lot of extra load on your OUD servers when you upgrade your Oracle clients.

To make it short, I have no answer. It might be related to the old Oracle Names code, which still seems to be there. I have found errors in the client trace file regarding an A.SMD query. The A.SMD call comes from the old Oracle Names server, where you could do things like “NAMESCTL> QUERY DB920.oracle.com A.SMD”. But that was really a long time ago; the last Oracle Names server I saw was in 2002.
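
For completeness, such a client trace can be produced with Oracle Net client tracing, which is enough to see the extra name lookups (a sketch; adjust the directory, and keep in mind that level 16 traces are verbose):

# sqlnet.ora on the client
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = /u01/app/oracle/network/trc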

oracle@dbidg02:/u01/app/oracle/network/trc/ [DBIT122] cat 12.2_client.trc | grep A.SMD
(4144394624) [04-APR-2017 14:38:18:633] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408
(4144394624) [04-APR-2017 14:38:18:642] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408
(4144394624) [04-APR-2017 14:38:18:646] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408

If I take a look at my 12cR2 adapters, there is no Oracle Names naming method compiled in. I don’t know if this is possible at all with 12c.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] adapters | egrep -A 5 "Installed Oracle Net naming methods"
Installed Oracle Net naming methods are:

    Local Naming (tnsnames.ora)
    Oracle Directory Naming
    Oracle Host Naming

Conclusion

OK, what should I say … take care when you upgrade your clients to more recent versions if you use OUD to resolve your names. It might generate some extra load on your OUD servers, more and more with every release since 10gR2. By the way, I have opened an SR with Oracle, because this looks like a bug to me. I was very surprised that I was the first one facing this issue. I will keep you posted as soon as I have results. ;-)

 

This article OUD – Oracle Unified Directory 11.1.2.3, Oracle generates more and more LDAP lookups with every release appeared first on Blog dbi services.
