Yann Neuhaus

dbi services technical blog

PostgreSQL 10 Beta 1 : pg_hba_file_rules view

Fri, 2017-06-30 10:12

This small blog post shares something helpful about the pg_hba.conf file:

The pg_hba.conf file is the central configuration file to control client authentication.
It is located in the database cluster’s data directory:
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] ll pg_hba.conf
lrwxrwxrwx. 1 postgres postgres 59 Jun 30 10:19 pg_hba.conf
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]

When you add or modify an entry in this file, you have to reload the cluster to take the changes into account:

postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] echo "host all efm 192.168.22.38/35 md5" >> /u02/pgdata/PG10BETA1/pg_hba.conf
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] pg_ctl -D "/u02/pgdata/PG10BETA1/" reload
server signaled
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]

Oops… Have you seen my mistake? Probably not. The system didn’t see it either.
That was in Postgres 9.6 and older versions.

In Postgres 10 there is now a new view called “pg_hba_file_rules” which returns the current content of the pg_hba.conf file entries AND reports the errors:
postgres=# select version();
version
-------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 10beta1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

postgres=# select line_number, error, type, database, user_name, address, netmask from pg_hba_file_rules;
line_number | error | type | database | user_name | address | netmask
-------------+-------------------------------------------------+-------+---------------+-----------+-----------+-----------------------------------------
80 | | local | {all} | {all} | |
82 | | host | {all} | {all} | 127.0.0.1 | 255.255.255.255
84 | | host | {all} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
87 | | local | {replication} | {all} | |
88 | | host | {replication} | {all} | 127.0.0.1 | 255.255.255.255
89 | | host | {replication} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
90 | invalid CIDR mask in address "192.168.22.38/35" | | | | |
(7 rows)

This allows you to get a quick look at what is wrong in pg_hba.conf and to fix it as necessary.
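If you only want to see the problematic entries, you can filter on the error column of the same view (a quick sketch):

postgres=# select line_number, error from pg_hba_file_rules where error is not null;

With the mistake above still in place, this returns only line 90 and its “invalid CIDR mask” message.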

 


ODA X6 installation: re-image

Wed, 2017-06-28 15:44

The Oracle Database Appliance is shipped with a bare-metal installation which may not be the latest version. You may want to have it virtualized, or get the latest version to avoid a further upgrade, or install an earlier version to be in the same configuration as another ODA already in production. The easiest approach in all cases is to start with a re-image as soon as the ODA is plugged in. This post is not documentation, just a quick cheat sheet.

I don’t want to spend hours in the data center, so the first step, once the ODA is racked, cabled and plugged in, is to get it accessible from the management network. Then all tasks can be done from a laptop, accessing the ILOM interface through a browser (Java required, and preferably 32-bit) before the public network is set up.

NET MGMT

Here is the back of the server where you find the management network port.
ODA-reimage-000
This was an X5-2 but very similar to X6-2. You can look at the 3D view of X6-2 to get a better view.
There is also VGA and USB to plug in a monitor and keyboard just for the time needed to set up the management network.

You can also use the serial port which is just next to it but I usually don’t have a serial adapter for my laptop.

First checks

You can plug in a keyboard and monitor and log on to the server: the root password is welcome1 (no need to change it for the moment as we will re-image the machine).

In ODA X6-HA you have two nodes, numbered 0 and 1 and called oak1 and oak2 … be careful. You may wonder which server is node 0 and which one is node 1, because the servers are identical. Yes, that’s true. The nodes are identified by the storage ports they are connected to: node 0 is the one connected with the blue mini-SAS cables and the red ones are for node 1.

  • Blue cable – Node 0 – oak1
  • Red cable – Node 1 – oak2

Here I’m on node 0 and check the hardware version:

[root@oak1 ~]# /opt/oracle/oak/bin/oakcli show env_hw
BM ODA X6-2

This means Bare Metal X6-2 HA, a virtualized one would show VM-ODA_BASE ODA X6-2, and ODA X6-2S would show BM ODA_Lite X6-2 Small.

From storage topology, we confirm that this node has been recognized as node 0:


[root@oak1 ~]# /opt/oracle/oak/bin/oakcli validate -c StorageTopology
It may take a while. Please wait...
INFO : ODA Topology Verification
INFO : Running on Node0
INFO : Check hardware type
SUCCESS : Type of hardware found : X5-2
INFO : Check for Environment (Bare Metal or Virtual Machine)

And finally check the version, because you may be lucky and already have the version you want, in which case you don’t need to re-image (I switched to node 1 here):

ODA-reimage-010

Of course, you don’t need to re-image when you want a higher version. You can upgrade it, but re-imaging is simpler.

BMC Network

The management network interface can get an address from DHCP. But who runs DHCP on the management network? There are two ways to assign a static IP to the management network.

You may use the IPMI Tool commands:
ipmitool -I open sunoem cli
cd SP
cd Network
ls
set pendingipaddress=192.168.15.101
set pendingipnetmask=255.255.255.0
set pendingipgateway=192.168.15.1
set commitpending=true

Here is an example:
oda-ipmitool

Or you can go to BIOS. It goes fast but filming in slow motion shows that the key is F2:
ODA-reimage-001
Once you are in the BIOS interface, go to Advanced, then choose ‘BMC Network configuration’ to configure IPMI and enter the following BMC LAN information:

IPv4 IP Assignment    [Static]
IPv4 address          192.168.15.102
IPv4 Subnet Mask      255.255.255.0
IPv4 Default Gateway  192.168.15.1

Once you are in the BIOS you can also choose to boot from the CD-ROM first, because I sometimes have problems setting that from the ILOM Web interface.

Once you have validated that the ILOM IP address can be reached from your office, you can remove the keyboard and monitor and leave the datacenter. The hardware is OK. Now the software can be deployed from the ILOM Web interface. The password to connect to ILOM from the Web browser is ‘changeme’ and you should change it.

Software

The latest ISO image for Bare Metal or Virtualized ODA can be found in MOS Note 888888.1.
You can also find older versions there.

From the ILOM you get to the console of node 0:

ODA-reimage-002

From there, you can attach the ISO image: ILOM Remote Control / Devices / CD-ROM Images

and then reboot on the CD-ROM:

Reboot (Host Control / Next Boot Device / CD-ROM )

ODA-reimage-003

Do that for both nodes (you can run them at the same time), prepare the information for the deployment, and download the ‘End-User RDBMS Clone files’ for the database version you want.

You can download the Oracle Appliance Manager Configurator and take your time to setup and verify the configuration.

configure firstnet

Here is part of the information to prepare. First, you will configure the network to be able to scp the software (Grid Infrastructure and Database):

[root@oak1 ~]# /opt/oracle/oak/bin/oakcli configure firstnet
Configure the network for node(s) (local,global) [global]:
The network configuration for both nodes:
hostname: Hostname lookup failure
Domain Name: pachot.net
DNS Servers: Primary DNS Server: 8.8.8.8
Node Name Host Name
0 myserver1
1 myserver2
Choose the network interface to configure (net1,net2) [net1]:
Enter the IP address for net1 on Node 0: 192.168.16.101
Enter the IP address for net1 on Node 1: 192.168.16.102
Netmask for net1: 255.255.255.0
Gateway address for net1 [192.168.16.1]:

Note that this is a funny example. I hope you don’t use 192.168.16/24 as your public network, because this subnet is used for the private interconnect, where the IP addresses 192.168.16.24 and 192.168.16.25 are hardcoded. But thanks to that, the firstnet configuration can be run from one node only.

Deploy

Now that you have access through the public network, you can copy (scp) the Oracle Home clones and the configuration file to /tmp, unpack the .zip files (for i in *.zip ; do /opt/oracle/oak/bin/oakcli unpack -package $i ; done) and run deploy (ssh -X /opt/oracle/oak/bin/oakcli deploy), loading the configuration from your file or entering all the information from there. Fingers crossed, this should run to the end without any problem. If not, My Oracle Support notes may help. The nice thing with ODA is that most configurations are similar, so there is a good chance that a problem you hit has already been encountered and documented.
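As a recap of that paragraph, the sequence could look like the following sketch (the clone file and configuration file names are placeholders, adjust them to what you downloaded and generated):

# from your workstation: copy the End-User RDBMS Clone files and the configurator output to node 0
scp oda_rdbms_clone_*.zip myoda.conf root@myserver1:/tmp/
# on node 0: unpack all packages
cd /tmp
for i in *.zip ; do /opt/oracle/oak/bin/oakcli unpack -package $i ; done
# run the deployment with X11 forwarding and load /tmp/myoda.conf from the GUI
ssh -X root@myserver1
/opt/oracle/oak/bin/oakcli deploy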

 


Summer, autumn and winter: a lot of conferences ahead

Tue, 2017-06-27 14:50

It is hot in Europe, it is summer, enjoy, but technology moves fast, so you have the chance to already prepare for the next conferences. The IT Tage 2017 will take place from the 11th to the 14th of December this year and we are happy to be there again.
Selection_001
This event covers a wide range of topics and we will be there again, covering and talking about a variety of subjects.

In addition you will have the chance to join Jan from EnterpriseDB speaking about the differences between Oracle and PostgreSQL. If you want to hear more about that, you might want to join the PostgreSQL Conference Europe this year.

Selection_002

Maybe Jan and I will get the chance to talk there as well; we submitted an interesting session, stay tuned.

Chances are high that you may find us at the UKOUG, too:

Selection_004

… and we’ll be at the #DOAG2017 for sure this year again.
Selection_003

Hope to see you soon… Cheers, discussions ahead …

 


12c Multitenant Internals: VPD for V$ views

Tue, 2017-06-27 13:49

I described in an earlier post on AWR views how the dictionary views were using metadata and object links to show information from other containers. But this mechanism cannot work for fixed views (aka V$) because they don’t have their definition in the dictionary.

The big difference is that most of the V$ views are available long before the dictionary is opened or even created. Just start an instance in NOMOUNT and you can query the V$ views. Even in multitenant, you can switch to different containers in MOUNT, and query V$ views, when no dictionary is opened.

SQL> alter database mount;
Database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED MOUNTED
3 PDB1 MOUNTED
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 PDB1 MOUNTED

V$ views query information from the instance and this information pertains to one container:

  • CON_ID=0 for the CDB itself
  • CON_ID=1 for CDB$ROOT
  • CON_ID=2 for PDB$SEED
  • CON_ID=3 for the first PDB you have created

When you are in root, the V$ views are queried as normal and show all information – from all containers – with their related CON_ID

When you are in a PDB, you must see the objects that belong to your PDB, but not those that belong to other PDBs. But this is not sufficient. For example, you may query the version, and the version is related to the CDB itself, with CON_ID=0:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select * from v$version;
 
BANNER CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production 0
PL/SQL Release 12.2.0.1.0 - Production 0
CORE 12.2.0.1.0 Production 0
TNS for Linux: Version 12.2.0.1.0 - Production 0
NLSRTL Version 12.2.0.1.0 - Production 0

Then, in a PDB you should see your PDB objects and the CON_ID=0 ones. Oracle needs a new mechanism for that. One way would be to switch to root, query the V$ and filter on CON_ID. We don’t need that. A context switch is there to access data from a different container tablespace, because tablespaces are not shared. But V$ views expose data from the instance, and the instance is shared. Any container can see all rows, and we just want to filter some rows.

Here is the execution plan when querying V$VERSION from a PDB:


SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> explain plan for select * from v$version;
Explained.
 
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1078166315
 
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 68 | 0 (0)| 00:00:01 |
|* 1 | FIXED TABLE FULL| X$VERSION | 1 | 68 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("CON_ID"=0 OR "CON_ID"=3) AND
"INST_ID"=USERENV('INSTANCE'))

An additional predicate (“CON_ID”=0 OR “CON_ID”=3) is added to the view. How is it done? Oracle has a security feature for that: Virtual Private Database – aka Row Level Security – which adds a where clause dynamically.
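This internal CON_ID policy is not something you create yourself, but the mechanism is the same as for a user-defined VPD policy. Here is a minimal sketch of how such a policy appends a predicate at execution time (SCOTT.T and the policy function are hypothetical, they are not the internal implementation, and the table is assumed to have a CON_ID column):

-- the policy function returns the predicate to append to queries on the protected table
create or replace function scott.con_filter(p_schema varchar2, p_object varchar2) return varchar2 as
begin
  return 'con_id in (0, sys_context(''userenv'',''con_id''))';
end;
/
-- attach it to the table: every SELECT on SCOTT.T now gets this predicate added
begin
  dbms_rls.add_policy(
    object_schema   => 'SCOTT',
    object_name     => 'T',
    policy_name     => 'CON_FILTER_POLICY',
    function_schema => 'SCOTT',
    policy_function => 'CON_FILTER',
    statement_types => 'SELECT');
end;
/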

One way to get more information about a Virtual Private Database policy is to have it raise an error at execution time, and I know that a user with only the SELECT privilege cannot EXPLAIN PLAN (see MOS Note 1029064.6).

I connect to a PDB with a low privileged user:
SQL> connect scott/tiger@//localhost/PDB1
Connected.

I explain plan the V$VERSION fixed view.
SQL> explain plan for select * from v$version;
 
Error starting at line : 10 File @ /media/sf_share/122/blogs/multitenant-vpd.sql
In command -
explain plan for select * from v$version
Error report -
ORA-28113: policy predicate has error
28113. 00000 - "policy predicate has error"
*Cause: Policy function generates invalid predicate.
*Action: Review the trace file for detailed error information.

Interesting error which confirms the guess: this is a VPD error and it generates a trace:
*** 2017-06-26T22:45:17.838507+02:00 (PDB1(3))
*** SESSION ID:(141.17865) 2017-06-26T22:45:17.838537+02:00
*** CLIENT ID:() 2017-06-26T22:45:17.838541+02:00
*** SERVICE NAME:(pdb1) 2017-06-26T22:45:17.838545+02:00
*** MODULE NAME:(java@VM104 (TNS V1-V3)) 2017-06-26T22:45:17.838548+02:00
*** ACTION NAME:() 2017-06-26T22:45:17.838552+02:00
*** CLIENT DRIVER:(jdbcoci : 12.2.0.1.0) 2017-06-26T22:45:17.838555+02:00
*** CONTAINER ID:(3) 2017-06-26T22:45:17.838558+02:00
 
-------------------------------------------------------------
Error information for ORA-28113:
Logon user : SCOTT
Table/View : SYS.V_$VERSION
VPD Policy name : CON_ID
Policy function: SYS.CON_ID
RLS view :
SELECT "BANNER","CON_ID" FROM "SYS"."V_$VERSION" "V_$VERSION" WHERE (con_id IN (0, 3) )
ORA-01039: insufficient privileges on underlying objects of the view
-------------------------------------------------------------

There’s no container switch here, all is running in PDB1 with CON_ID=3 and the internal VPD has added a where clause to filter rows with CON_ID=0 and CON_ID=3

Do not search for the VPD policy name ‘CON_ID’ and function ‘CON_ID’ in the dictionary views, because this happens even when the dictionary is not accessible. This is an internal policy used when querying fixed views in multitenant, and it probably uses only part of the VPD code.

 


OUD 11.1.2.3 – ODSM Session Timeout

Tue, 2017-06-27 12:52

The ODSM is quite a powerful tool for managing the Oracle Unified Directory, and quite a lot of the default settings are very reasonable. But there is one that disturbed me from the beginning: the ODSM session timeout. You might see a message like the following.

1

Or maybe this one.

2

It just says that the page will expire unless a response is received within 2 minutes. Doing some complex searches or similar stuff often takes more than 2 minutes.

So, how can we increase the limit to, let’s say, 2 hours? A value which is more reasonable from my point of view.

In the early days, you had to edit a file called web.xml in a very intuitive directory ;-) and change the session-timeout to whatever value you like. In my example, 2 hours.

$ vi /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/servers/AdminServer/tmp/_WL_user/odsm_11.1.1.5.0/d89dm9/war/WEB-INF/web.xml

...
...

  <session-config>
    <session-timeout>7200</session-timeout>
  </session-config>

However, changing the value here has the disadvantage that it might not be permanent.  This directory is a temporary cache directory for the WebLogic server.  So, if you apply a patch that overwrites the …/odsm/odsm.ear file, the changes you made to web.xml in the temporary cache directory are also overwritten. So, it is not a good long term solution.

Way better is to do it via the WebLogic Console.

Login to your WebLogic Console with the WebLogic user and navigate to “Deployments”. From there select the “/odsm” module.

3

Now move to the “Configuration” tab and change the Session Timeout (in seconds) to 7200, in case you want a 2h session timeout.

4

5

Now save the changes, and click OK one more time to save the Deployment Plan.

6

7

That’s it.

Conclusion

The default session timeout is way too short from my point of view. But no worries. Changing it via the WebLogic Console is quite easy and it might save you a lot of headaches. :-)

 


12cR2 Application Containers and Foreign Keys

Mon, 2017-06-19 15:58

Application Container brings a new way to share data among databases, and adds a new dimension to referential integrity. A foreign key in an application PDB can reference a row belonging to a root data link table. But then, should a delete on the root validate that there are no orphans in the application PDBs? And what if those PDBs are closed at the time of this delete? Here is a small example.

If you run this in 12.2.0.1 you will get an error, because the search for the parent key is done only in the current container. This is considered a bug: 21955394: CDB:ORA-02291 WHEN FOREIGN KEY REFERS TO THE PRIMARY KEY IN DATA LINK

The example that follows is run with the patch applied to fix this bug.

I’m connecting to root where I have no user PDB yet.

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
SQL> select con_id, name, application_root application_root, application_pdb application_pdb,application_root_con_id application_root_con_id from v$containers;
 
CON_ID NAME APPLICATION_ROOT APPLICATION_PDB APPLICATION_ROOT_CON_ID
------ ---- ---------------- --------------- -----------------------
1 CDB$ROOT NO NO
2 PDB$SEED NO NO

I create the application container root

SQL> create pluggable database SCOTT_ROOT as application container admin user SCOTT_ADMIN identified by oracle roles=(DBA);
Pluggable database SCOTT_ROOT created.
 
SQL> alter pluggable database SCOTT_ROOT open;
Pluggable database SCOTT_ROOT altered.
 
SQL> select con_id, name, application_root application_root, application_pdb application_pdb,application_root_con_id application_root_con_id from v$containers;
 
CON_ID NAME APPLICATION_ROOT APPLICATION_PDB APPLICATION_ROOT_CON_ID
------ ---- ---------------- --------------- -----------------------
1 CDB$ROOT NO NO
2 PDB$SEED NO NO

I connect to this application root and start the installation of the application

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
 
SQL> alter pluggable database application SCOTT begin install '1.0';
Pluggable database APPLICATION altered.

I’m installing the SCOTT tables DEPT and EMP, but I changed their definitions from utlsampl.sql:

  • DEPT is an EXTENDED DATA LINK where a set of rows is common, inserted in the application root and visible to all application PDBs
  • EMP is a METADATA LINK where each application PDB has its own data, isolated from the others but with the same structure


SQL> GRANT CONNECT,RESOURCE,UNLIMITED TABLESPACE TO SCOTT IDENTIFIED BY tiger container=all;
Grant succeeded.
 
SQL> alter session set current_schema=SCOTT;
Session altered.
 
SQL> CREATE TABLE DEPT sharing=extended data
2 (DEPTNO NUMBER(2) CONSTRAINT PK_DEPT PRIMARY KEY,
3 DNAME VARCHAR2(14) ,
4 LOC VARCHAR2(13) ) ;
Table DEPT created.
 
SQL> CREATE TABLE EMP sharing=metadata
2 (EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7,2),
8 COMM NUMBER(7,2),
9 DEPTNO NUMBER(2) CONSTRAINT FK_DEPTNO REFERENCES DEPT);
Table EMP created.
 
SQL> INSERT INTO DEPT VALUES
2 (10,'ACCOUNTING','NEW YORK');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES (20,'RESEARCH','DALLAS');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES
2 (30,'SALES','CHICAGO');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES
2 (40,'OPERATIONS','BOSTON');
1 row inserted.
 
SQL> COMMIT;
Commit complete.
 
SQL> alter pluggable database application SCOTT end install '1.0';
Pluggable database APPLICATION altered.

The application root has departments 10, 20, 30 and 40 in DEPT shared with all PDBs and has defined that EMP has a foreign key to it.

I create an application PDB from this application root

SQL> create pluggable database SCOTT_ONE admin user SCOTT_ONE_ADMIN identified by covfefe;
Pluggable database SCOTT_ONE created.
 
SQL> alter pluggable database SCOTT_ONE open;
Pluggable database SCOTT_ONE altered.

I sync it to get common DDL and DML applied

SQL> connect sys/oracle@//localhost/SCOTT_ONE as sysdba
Connected.
 
SQL> alter pluggable database application SCOTT sync;
Pluggable database APPLICATION altered.
 
SQL> select name,con_id,application_pdb,application_root_con_id from v$containers;
 
NAME CON_ID APPLICATION_PDB APPLICATION_ROOT_CON_ID
---- ------ --------------- -----------------------
SCOTT_ONE 8 YES 6

Now let’s connect to the application PDB. I can see the DEPT rows inserted from root because it is a DATA LINK.

SQL> connect scott/tiger@//localhost/SCOTT_ONE
Connected.
 
SQL> select * from dept;
 
DEPTNO DNAME LOC
------ ----- ---
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

EMP is empty here

SQL> select * from emp;
 
no rows selected

I insert an EMP row in the application PDB which references a DEPT row in the application root:

SQL> INSERT INTO EMP VALUES
2 (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
 
1 row inserted.

As DEPT is an EXTENDED DATA LINK, I can add new rows in my PDB:

SQL> INSERT INTO DEPT VALUES
2 (50,'MY LOCAL DEPT','LAUSANNE');
 
1 row inserted.

And I can have an EMP row referencing this local parent:

SQL> INSERT INTO EMP VALUES
2 (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,50);
1 row inserted.
 
SQL> commit;
Commit complete.

This looks good. Now what happens if we delete all rows from DEPT in the application root?

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> delete from SCOTT.DEPT;
4 rows deleted.
 
SQL> commit;
Commit complete.

No error here. But then, I have orphans in my application PDB:

SQL> connect scott/tiger@//localhost/SCOTT_ONE
Connected.
SQL> select * from dept;
 
DEPTNO DNAME LOC
---------- -------------- -------------
50 MY LOCAL DEPT LAUSANNE
 
SQL> select * from emp;
 
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

So what?

Referential integrity works across containers: an application PDB can reference a parent key in the application root (provided that bug is fixed). However, no ORA-02292 (child record found) is raised when the child records are not in the current container. This one makes sense. Enforcing the verification of child records in all PDBs would require that they are opened, and may require locking the table in all containers. We must be aware that doing DML on the application root can lead to inconsistency if not done correctly.
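If you suspect this has already happened, a quick check from within the application PDB is a simple anti-join between the child and the parent table (a sketch based on the SCOTT schema used here):

SQL> select e.empno, e.deptno from scott.emp e
  2  where not exists (select 1 from scott.dept d where d.deptno = e.deptno);

Any row returned is an orphan child record.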

Operations on the application root are application releases (upgrades and patches) and must be validated and tested carefully. For the example above, deleting all rows from DEPT can be done as an application patch which deletes from the EMP table as well:

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> alter pluggable database application SCOTT begin patch 1 ;
Pluggable database APPLICATION altered.
SQL> delete from scott.emp;
0 rows deleted.
SQL> delete from scott.dept where deptno in (10,20,30,40);
4 rows deleted.
SQL> alter pluggable database application SCOTT end patch 1 ;
Pluggable database APPLICATION altered.

The delete from EMP does nothing in the application root here, but it will be done on the PDB when applying the patch:

SQL> select * from dept;
 
DEPTNO DNAME LOC
---------- -------------- -------------
50 MY LOCAL DEPT LAUSANNE
 
SQL> select * from emp;
 
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

Note that I’ve defined exactly which rows from DEPT I wanted to delete in the where clause of delete from scott.dept where deptno in (10,20,30,40);
You may be tempted to do something like: delete from scott.dept where deptno in (select deptno from scott.dept);
But keep in mind that the statements you run in the root are re-played as-is in the PDBs. And when you sync the PDB, you will see no rows from DEPT because they were already purged from the root. Actually, what you want is to delete from EMP the rows which refer to the rows you have deleted from the root. It is not possible to get them with a subquery, except if you have stored them into another data link table before deleting them. Changes in the application root must be managed like application patches.

 


DBSAT, a tool for the security of your Oracle databases

Fri, 2017-06-16 10:22
What is DBSAT?

It is a free tool from Oracle that you can download from My Oracle Support under the reference Doc ID 2138254.1.
Its purpose is to evaluate the security of your Oracle databases by analyzing their configuration and the security policies in place, in order to uncover security-related risks.

How does it work?

First you need to collect the information from your database, and then generate a report.
The tool provides 3 types of report:

  • Text report
  • Spreadsheet report
  • HTML report

In a few words:

  • Quick and easy to implement and use
  • Detailed reports
  • Detects real security issues
  • No additional cost if you have an Oracle support contract
  • Results are highlighted with different colors (blue, green, yellow, red)
The steps required to put DBSAT in place
  • Install the tool (DBSAT)
  • Collect the information (DBSAT Collector)
  • Report on the risk status (DBSAT Reports)
Installing the tool

This tool is developed in Python and requires version 2.6 or higher (check the version with: python -V).

The installation directory can be wherever you like, since the installation is just unzipping a file, but we advise you to unzip it into the oracle user’s home directory (/Home/Oracle/DBSAT).

The DBSAT Collector must be run as an OS user with read permissions on the files and directories under ORACLE_HOME in order to collect and process the data.

If you use a Database Vault environment, you will need to use a non-SYS user with the DV_SECANALYST role.

Privileges and roles required for the collection user:

    • CREATE SESSION
    • SELECT on SYS.REGISTRY$HISTORY
    • Role SELECT_CATALOG_ROLE
    • Role DV_SECANALYST (if Database Vault is enabled)
    • Role AUDIT_VIEWER (12c only)
    • Role CAPTURE_ADMIN (12c only)
    • SELECT on SYS.DBA_USERS_WITH_DEFPWD (11g and 12c)
    • SELECT on AUDSYS.AUD$UNIFIED (12c only)

You will find more information in the documentation at the following address:
https://docs.oracle.com/cd/E76178_01/SATUG/toc.htm

Collecting the information

Collecting the information is mandatory; it is required for generating the reports (text, HTML or spreadsheet).
Very simple to use and secure: dbsat collect /file_name
Capture d’écran 2017-06-09 à 17.00.00

Report on the risk status

The report can be generated in different ways, excluding one or more sections with the -x option:
dbsat report [-a] [-n] [-x section] pathname

Options:
Capture d’écran 2017-06-12 à 17.52.53

The available sections

USER    : User accounts
PRIV    : Privileges and roles
AUTH    : Authorization control
CRYPT   : Data encryption
ACCESS  : Access control
AUDIT   : Auditing
CONF    : Database configuration
NET     : Network configuration
OS      : Operating system
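Putting the collect and report steps together, a run could look like the following sketch (the connect string, the output name and the excluded section are placeholders, and the exact command-line handling may differ between DBSAT versions, so check the usage output of your copy first):

$ cd /Home/Oracle/DBSAT
$ ./dbsat collect system@DBIT122 /tmp/dbsat_DBIT122
$ ./dbsat report -x OS /tmp/dbsat_DBIT122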

Capture d’écran 2017-06-09 à 17.31.14
Once unzipped, we have our 3 types of file: text, spreadsheet and HTML.

Capture d’écran 2017-06-09 à 17.35.39

Viewing the report

Report overview.
If you use PDBs, you will need to collect the information from each PDB individually.

Capture d’écran 2017-06-12 à 17.11.04

Capture d’écran 2017-06-12 à 17.14.03
The different report statuses

You can use these statuses as a guideline for implementing the recommendations. They can be used to prioritize and schedule changes according to the risk level. A severe risk may require immediate corrective action, while other risks could be addressed during a planned downtime or combined with other maintenance activities.

Pass              : no error found
Evaluate          : requires manual analysis
Risk              : low
Risk              : medium or significant
Risk              : high

Conclusion

Test it without moderation; it will give you a global view of the security setup of your databases. Once the issues are identified, all that remains is to fix them.

 


OUD 11.1.2.3 – How to create an OUD Start/Stop/Status script on Oracle Linux 6

Fri, 2017-06-16 08:39

One of the questions that pops up immediately after you have installed your OUD successfully is how to integrate it into the automatic startup routines of the OS.

My example here shows how to do it on Oracle Linux 6. On Oracle Linux 7 it looks a little different. Fortunately, Oracle delivers a script called “create-rc-script”, which can be found in your asinst home directory. It lets you specify the user name under which the OUD will run, the Java home and a few more things. The whole documentation can be found under the following link.

https://docs.oracle.com/cd/E52734_01/oud/OUDAG/appendix_cli.htm#OUDAG01144

Running “--help” gives you all the options.

$ cat /etc/oracle-release
Oracle Linux Server release 6.9

$ ./create-rc-script -V
Oracle Unified Directory 11.1.2.3.170418
Build 20170206221556Z

$ ./create-rc-script --help
Create an RC script that may be used to start, stop, and restart the Directory
Server on UNIX-based systems

Usage:  create-rc-script  {options}
        where {options} include:

-f, --outputFile {path}
    The path to the output file to create
-u, --userName {userName}
    The name of the user account under which the server should run
-j, --javaHome {path}
    The path to the Java installation that should be used to run the server
-J, --javaArgs {args}
    A set of arguments that should be passed to the JVM when running the server

General Options

-V, --version
    Display Directory Server version information
-?, -H, --help
    Display this usage information

Take care that you start the “create-rc-script” script from your asinst_1 home, and not from the Oracle_OUD1 home. The reason is that “create-rc-script” sets the working directory to your current directory (“WORKING_DIR=`pwd`”), and if it is not started from the correct directory, you might end up with a non-working start/stop script.

So… to do it correctly, switch to your OUD asinst home first and run it from there. I am using only a few options here: the Java home, the user name under which the OUD will run, and the output file.

$ cd /u01/app/oracle/product/middleware/asinst_1/OUD/bin

$ pwd
/u01/app/oracle/product/middleware/asinst_1/OUD/bin

$ ./create-rc-script --userName oracle --javaHome /u01/app/oracle/product/middleware/jdk --outputFile /home/oracle/bin/oud

The output generated by the script will be the start/stop script.

$ pwd
/home/oracle/bin

$ ls -l
total 4
-rwxr-xr-x. 1 oracle oinstall 862 Jun 16 13:35 oud

$ cat oud
#!/bin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
#
# chkconfig: 345 90 30
# description: Oracle Unified Directory startup script
#

# Set the path to the Oracle Unified Directory instance to manage
INSTALL_ROOT="/u01/app/oracle/product/middleware/asinst_1/OUD"
export INSTALL_ROOT

# Specify the path to the Java installation to use
OPENDS_JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export OPENDS_JAVA_HOME

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/start-ds" --quiet
  exit ${?}
  ;;
stop)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --quiet
  exit ${?}
  ;;
restart)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --restart --quiet
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart }"
  exit 1
  ;;
esac

The generated start/stop script looks quite complete. The only thing missing is the “status” section which is quite useful from my point of view. To add the status section, we can use the “status” script, which is also part of the OUD installation.

$ ./status --help
This utility can be used to display basic server information

Usage:  status {options}
        where {options} include:


LDAP Connection Options

-D, --bindDN {bindDN}
    DN to use to bind to the server
    Default value: cn=Directory Manager
-j, --bindPasswordFile {bindPasswordFile}
    Bind password file
-o, --saslOption {name=value}
    SASL bind options
-X, --trustAll
    Trust all server SSL certificates
-P, --trustStorePath {trustStorePath}
    Certificate trust store path
-U, --trustStorePasswordFile {path}
    Certificate trust store PIN file
-K, --keyStorePath {keyStorePath}
    Certificate key store path
-u, --keyStorePasswordFile {keyStorePasswordFile}
    Certificate key store PIN file
-N, --certNickname {nickname}
    Nickname of certificate for SSL client authentication
--connectTimeout {timeout}
    Maximum length of time (in milliseconds) that can be taken to establish a
    connection.  Use '0' to specify no time out
    Default value: 30000

Utility Input/Output Options

-n, --no-prompt
    Use non-interactive mode.  If data in the command is missing, the user is
    not prompted and the tool will fail
-s, --script-friendly
    Use script-friendly mode
--propertiesFilePath {propertiesFilePath}
    Path to the file containing default property values used for command line
    arguments
--noPropertiesFile
    No properties file will be used to get default command line argument values
-r, --refresh {period}
    When this argument is specified, the status command will display its
    contents periodically.  Used to specify the period (in seconds) between two
    displays of the status

General Options

-V, --version
    Display Directory Server version information
-?, -H, --help
    Display this usage information

Take care: by default, the status utility is interactive and asks you for the user bind DN and the password. So the interactive version is not useful for our script.

$ ./status

>>>> Specify Oracle Unified Directory LDAP connection parameters

Administrator user bind DN [cn=Directory Manager]:

Password for user 'cn=Directory Manager':

          --- Server Status ---
Server Run Status:        Started
Open Connections:         6

          --- Server Details ---
Host Name:                dbidg01
Administrative Users:     cn=Directory Manager
Installation Path:        /u01/app/oracle/product/middleware/Oracle_OUD1
Instance Path:            /u01/app/oracle/product/middleware/asinst_1/OUD
Version:                  Oracle Unified Directory 11.1.2.3.170418
Java Version:             1.7.0_141
Administration Connector: Port 4444 (LDAPS)

          --- Connection Handlers ---
Address:Port : Protocol               : State
-------------:------------------------:---------
--           : LDIF                   : Disabled
8899         : Replication (secure)   : Enabled
0.0.0.0:161  : SNMP                   : Disabled
0.0.0.0:1389 : LDAP (allows StartTLS) : Enabled
0.0.0.0:1636 : LDAPS                  : Enabled
0.0.0.0:1689 : JMX                    : Disabled
...
...

And we need to do some adjustments, like in the following example.

./status --trustAll --no-prompt --bindDN cn="Directory Manager" --bindPasswordFile /home/oracle/.oudpwd | head -24
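Note that the file passed to --bindPasswordFile simply contains the Directory Manager bind password in clear text, so it must exist before the script runs and should be readable by the oracle user only. A minimal sketch (the password is obviously a placeholder):

$ echo "MyDirectoryManagerPassword" > /home/oracle/.oudpwd
$ chmod 600 /home/oracle/.oudpwd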

OK. To complete the script, we can add the status section to the script.

$ cat oud

#!/bin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
#
# chkconfig: 345 90 30
# description: Oracle Unified Directory startup script
#

# Set the path to the Oracle Unified Directory instance to manage
INSTALL_ROOT="/u01/app/oracle/product/middleware/asinst_1/OUD"
export INSTALL_ROOT

# Specify the path to the Java installation to use
OPENDS_JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export OPENDS_JAVA_HOME

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/start-ds" --quiet
  exit ${?}
  ;;
stop)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --quiet
  exit ${?}
  ;;
restart)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --restart --quiet
  exit ${?}
  ;;
status)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/status" --trustAll --no-prompt --bindDN cn="Directory Manager" --bindPasswordFile /home/oracle/.oudpwd | head -24
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart | status }"
  exit 1
  ;;
esac

Last but not least, we need to move it with the root user to the /etc/init.d directory and add it via chkconfig.

# mv /home/oracle/bin/oud /etc/init.d/
# chkconfig --add oud

# chkconfig --list | grep oud
oud             0:off   1:off   2:off   3:on    4:on    5:on    6:off

That’s all. The OUD part is done now. But what about the ODSM? We want the WebLogic domain to start up automatically as well. To do so, we need another script.

$ cat /home/oracle/bin/weblogic

#!/bin/sh
#
#
# chkconfig: 345 90 30
# description: WebLogic 10.3.6 startup script
#

# Specify the path to the Java installation to use
JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export JAVA_HOME

BASE_DOMAIN="/u01/app/oracle/product/middleware/user_projects/domains/base_domain"
export BASE_DOMAIN

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -c "nohup ${BASE_DOMAIN}/bin/startWebLogic.sh &"
  exit ${?}
  ;;
stop)
  /bin/su - oracle -c "${BASE_DOMAIN}/bin/stopWebLogic.sh"
  exit ${?}
  ;;
restart)
  /bin/su - oracle -c "${BASE_DOMAIN}/bin/stopWebLogic.sh"
  /bin/su - oracle -c "nohup ${BASE_DOMAIN}/bin/startWebLogic.sh &"
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart }"
  exit 1
  ;;
esac

Now it’s time to add the weblogic script to the startup routines as well.

# mv /home/oracle/bin/weblogic /etc/init.d/
# chkconfig --add weblogic
# chkconfig --list | grep weblogic
weblogic        0:off   1:off   2:off   3:on    4:on    5:on    6:off

After everything is set up, it is time to test it. ;-)

# chkconfig --list | egrep '(weblogic|oud)'
oud             0:off   1:off   2:off   3:on    4:on    5:on    6:off
weblogic        0:off   1:off   2:off   3:on    4:on    5:on    6:off

# init 6

Now just check if everything came up correctly.
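A quick sanity check after the reboot could look like this (a sketch; the second command simply greps for the WebLogic server JVM):

# service oud status
# ps -ef | grep '[w]eblogic.Server'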

Conclusion

The OUD comes with a script “create-rc-script” which is quite useful. However, in case you have the ODSM and you want the OUD status section as well, some adjustments have to be done.

 


12c NSSn process for Data Guard SYNC transport

Thu, 2017-06-15 10:15

In a previous post https://blog.dbi-services.com/dataguard-wait-events-have-changed-in-12c/ I mentioned the new processes NSA for ASYNC transport and NSS for SYNC transport. I’m answering a bit late to a comment about the number of processes: yes there is one NSSn process per LOG_ARCHIVE_DEST_n destination in SYNC and the numbers match.

Here is my configuration with two physical standby:
DGMGRL> show configuration
 
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)

Both are in SYNC:
DGMGRL> show database orclb logxptmode;
LogXptMode = 'sync'
DGMGRL> show database orclc logxptmode;
LogXptMode = 'sync'

I can see two NSS processes:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 4952 1 0 16:05 ? 00:00:00 ora_nss3_ORCLA
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

Here are the two log archive dest:
SQL> select name,value from v$parameter where regexp_like(name,'^log_archive_dest_[23]$');
NAME VALUE
---- -----
log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)

I set the 3rd one in ASYNC:
DGMGRL> edit database orclc set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS3 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

I set the 2nd destination to ASYNC:
DGMGRL> edit database orclb set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS2 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"

Now starting the 3rd destination first:
DGMGRL> edit database orclc set property logxptmode=SYNC;
Property "logxptmode" updated

I can see that nss3 has started as it is the log_archive_dest_3 which is in SYNC now:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA

Then starting the second one:
DGMGRL> edit database orclb set property logxptmode=SYNC;
Property "logxptmode" updated

Here are the two processes:

DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA
oracle 5393 1 0 16:20 ? 00:00:00 ora_nss2_ORCLA

So if you see some SYNC Remote Write events in ASH, look at the program name to know which destination it is.
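For example, a query like the following groups the samples by the NSS process that reported them (just a sketch, and it assumes ASH/Diagnostics Pack is available):

SQL> select program, event, count(*)
  2  from v$active_session_history
  3  where event like '%SYNC Remote Write%'
  4  group by program, event;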

 


A first look at EDB Postgres Enterprise Manager 7 beta – Connecting a PostgreSQL instance

Thu, 2017-06-15 10:14

In the last post we did a click/click/click setup of the PEM server. What we want to do now is to attach a PostgreSQL instance to the PEM server, so that we are able to monitor and administer it. For that, we need to install the PEM agent on a host where we have a PostgreSQL instance running (192.168.22.249 in my case, which runs PostgreSQL 10 Beta1). Let’s go…

As usual, when you want to have the systemd services generated automatically you should run the installation as root:

[root@pemclient postgres]# ls
pem_agent-7.0.0-beta1-1-linux-x64.run
[root@pemclient postgres]# chmod +x pem_agent-7.0.0-beta1-1-linux-x64.run 
[root@pemclient postgres]# ./pem_agent-7.0.0-beta1-1-linux-x64.run 

The installation itself is not a big deal, just follow the screens:

(Screenshots pem_agent1 to pem_agent9: PEM agent installation wizard)

Once done we have a new systemd service:

[root@pemclient postgres]# systemctl list-unit-files | grep pem
pemagent.service                              enabled 

… and the processes that make up the PEM agent:

[root@pemclient postgres]# ps -ef | grep pem
root      3454     1  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg
root      3455  3454  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg --pid 3454
root      3515  2741  0 16:43 pts/0    00:00:00 grep --color=auto pem

Heading back to the PEM web interface, the new agent is visible immediately:
pem_agent10

So, let’s add the instance:
(Screenshots pem_agent11 to pem_agent17: adding the PostgreSQL instance in PEM)

Of course, we need to allow connections to our PostgreSQL instance from the PEM server. Adding this to the pg_hba.conf and reloading the instance fixes the issue:

host    all             all             192.168.22.248/32       md5
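The reload itself can be done without restarting the instance, for example with one of these standard commands (assuming PGDATA points to the data directory):

$ pg_ctl -D $PGDATA reload
$ psql -c "select pg_reload_conf();"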

Once done:
pem_agent16

… and the instance is there.

In the next post we’ll set up some monitoring for our newly added PostgreSQL instance.

 


A first look at EDB Postgres Enterprise Manager 7 beta

Thu, 2017-06-15 08:09

In case you missed it: EnterpriseDB has released the beta of Postgres Enterprise Manager 7. If installation is as easy as for the current version, it should just be a matter of clicking next; let’s see.

Because the installer will create the systemd services, installation should be done as root:

[root@edbpem tmp]$ ls -l
total 289076
-rw-r--r--. 1 root root 296009946 Jun  1 18:58 pem_server-7.0.0-beta1-1-linux-x64.run
[root@edbpem tmp]$ chmod +x pem_server-7.0.0-beta1-1-linux-x64.run 
[root@edbpem tmp]$ ./pem_server-7.0.0-beta1-1-linux-x64.run 

From now on everything is graphical and straightforward:

(Screenshots pem7_1 to pem7_32: PEM server installation wizard)

What you get from a process perspective is this:

[root@edbpem tmp]$ ps -ef | grep pem
postgres 13462     1  0 19:17 ?        00:00:00 /u01/app/postgres/product/96/db_2/bin/postgres -D /u02/pgdata/pem
root     13869     1  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg
root     13870 13869  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg --pid 13869
postgres 13883 13462  1 19:18 ?        00:00:02 postgres: agent1 pem 127.0.0.1(53232) idle
postgres 13885 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53234) idle
postgres 13886 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53236) idle
postgres 13887 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53238) idle
postgres 13888 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53240) idle
pem      13938 13937  0 19:18 ?        00:00:00 EDBPEM                                                              -k start -f /u01/app/postgres/product/EnterpriseDB-ApacheHTTPD/apache/conf/httpd.conf
root     14301 11016  0 19:20 pts/0    00:00:00 grep --color=auto pem

Two new systemd services have been created, so PEM should start up and shut down automatically when the server reboots:

[root@edbpem tmp]# systemctl list-unit-files | egrep "pem|postgres"
pemagent.service                              enabled 
postgresql-9.6.service                        enabled 
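To double-check that both services are enabled and running, something like this will do (a quick sketch):

[root@edbpem tmp]# systemctl is-enabled pemagent.service postgresql-9.6.service
[root@edbpem tmp]# systemctl status pemagent.service postgresql-9.6.service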

Let’s connect to PEM: https://192.168.22.248:8443/pem

pem7_33

If you have an EDB subscription now it is the time to enter the product key:

pem7_34

What you can see immediately is that the look and feel has changed to that of pgAdmin 4 (a fat client for PEM, as in the current version, is no longer available):

pem7_35
pem7_36
pem7_37

In a next post we’ll add a PostgreSQL instance and start to monitor it.

 


Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) – Follow Up

Wed, 2017-06-14 02:14

My blog about listener rotation caused some discussion, which is great. :-) It sounds like easy stuff, but in the case of listener logging it isn’t.

https://blog.dbi-services.com/oracle-12-2-how-to-rotate-the-12-2-listener-log-diag_adr_enabled_listener-off/

Many ways exist to rotate the listener log, but I was trying to point out some issues, because there are a few, e.g.

  1. What happens when the log file reaches 4G?
  2. What are the performance implications?
  3. What happens if I move the listener.log while the listener is running (Open file descriptor)?
  4. And how to rotate the listener log with minimal impact?

The first two points have been discussed already in the previous post, so I’m not going deeper into those. Let’s take a look at the other ones, and start with the file descriptor issue. In the output below, you can see that the listener has an open file descriptor “3w” which is pointing to “/u01/app/oracle/network/log/listener.log”. It can be quite easily identified by using the lsof utility (list open files).

oracle@dbidg01:/u01/app/oracle/ [DBIT122] ps -ef | grep tnslsnr | grep -v grep
oracle    4686     1  0 07:59 ?        00:00:00 /u01/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr LISTENER -inherit

oracle@dbidg01:/u01/app/oracle/ [DBIT122] lsof -p 4686 | grep listener.log
COMMAND  PID   USER   FD      TYPE             DEVICE SIZE/OFF      NODE NAME
tnslsnr 4686 oracle    3w      REG              251,4    55239 141607653 /u01/app/oracle/network/log/listener.log

So… what happens if I move the listener log while the listener is running?

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] mv listener.log listener.log.1
oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122]

And to have some data, I’m doing some sqlplus sessions here to generate more log entries …

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] sqlplus hr/hr@DBIT122_SITE1
... 

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] sqlplus hr/hr@DBIT122_SITE1
... 

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] ls -l listener.log
ls: cannot access listener.log: No such file or directory

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] lsof -p 4686 | grep listener.log
COMMAND  PID   USER   FD      TYPE             DEVICE SIZE/OFF      NODE NAME
tnslsnr 4686 oracle    3w      REG              251,4    56143 141607653 /u01/app/oracle/network/log/listener.log.1

As you can see, a new listener.log is not automatically created, and the file descriptor “3w” now points to /u01/app/oracle/network/log/listener.log.1.

That’s why it is not a good idea to move the listener log without stopping the listener logging first. And please don’t try to send a hang-up signal to the listener. The listener does not understand the hang-up signal (kill -HUP) and you would kill it immediately. ;-)

Ok. Let’s get back to the question about how to rotate the listener log with minimal impact?

If you have activated the listener logging, then there is a reason for doing that and you don’t want to lose data out of that log file. If losing any listener log data is not acceptable, you have to stop the listener, rotate the log and start the listener again. There is no way around that. Of course it has the disadvantage that new sessions cannot be established during that time.

However, if you want the listener to be up and running, and rotate the listener log with an absolute minimum of log data loss (talking here about milliseconds), then I would use the approach which I have already described in my previous post.

I have quickly written a listener rotate script which demonstrates what it can look like. I know that the script is not foolproof, and a lot of extra checks and tweaks could be built in, but you will get the idea. The script takes two parameters. The first one is the listener name, and the second one is the number of days for which the listener logs should be kept on disk. Everything older than that will be removed.

oracle@dbidg01:/home/oracle/ [DBIT122] ./rotate_listener.sh LISTENER 7
INFO: Check if Listener LISTENER is running
INFO: Start Listener Rotating for LISTENER
INFO: Current Listener Logging for LISTENER is: ON
INFO: Tuning Listener Logging for LISTENER OFF
INFO: Current Listener Logging for LISTENER is: OFF
INFO: Rotating Listener Log /u01/app/oracle/network/log/listener.log to /u01/app/oracle/network/log/listener.log.1497363899
INFO: Turning on Listener Logging for LISTENER
INFO: Current Listener Logging for LISTENER is: ON
INFO: Rotated successfully Listener Log for LISTENER
INFO: Starting cleanup of old listener log files
INFO: Will delete the following log files
/u01/app/oracle/network/log/listener.log.1497354123
/u01/app/oracle/network/log/listener.log.1497354122
/u01/app/oracle/network/log/listener.log.1497354121
INFO: Finished cleanup

And here are the contents of the script.

oracle@dbidg01:/home/oracle/ [DBIT122] cat rotate_listener.sh
#!/bin/bash
#
#-- Listener Name => parameter 1
#-- Delete listener log files older than number of days => parameter 2
#

#-- set -x

ListenerName=$1
NumberOfDays=$2
#-- %s  seconds since 1970-01-01 00:00:00 UTC
Date=`date +%s`

#-- Check if variable $1 and $2 are empty
if [ -z ${ListenerName} ]; then
        echo ""
        echo "INFO: Please specify the listener name"
        echo "INFO: e.g. ./rotate_listener.sh LISTENER 7"
        echo ""
        exit
fi

if [ -z ${NumberOfDays} ]; then
        echo ""
        echo "INFO: Please specify the number of days"
        echo "INFO: e.g. ./rotate_listener.sh LISTENER 7"
        echo ""
        exit
fi

echo "INFO: Check if Listener ${ListenerName} is running"

ps -ef | grep "tnslsnr ${ListenerName} -inherit" | grep -v grep >/dev/null 2>&1
if [ $? != 0 ]; then
        echo "INFO: Listener ${ListenerName} is not running ... will exit here"
        exit
fi

#-- Set the listener log file
ListenerLogFile=`lsnrctl status ${ListenerName} | grep "Listener Log File" | awk '{ print $4 }'`


echo "INFO: Start Listener Rotating for ${ListenerName}"

#-- Check listener log status
ListenerLogStatus=`lsnrctl <<-EOF  | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
show log_status
EOF`

if [ ${ListenerLogStatus} = "ON" ]; then
echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
echo "INFO: Tuning Listener Logging for ${ListenerName} OFF"

ListenerLogStatus=`lsnrctl <<-EOF | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
set log_status off
EOF`

echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
 if [ ${ListenerLogStatus} = "OFF" ]; then
   echo "INFO: Rotating Listener Log ${ListenerLogFile} to ${ListenerLogFile}.${Date}"
        mv ${ListenerLogFile} ${ListenerLogFile}.${Date}
    echo "INFO: Turning on Listener Logging for ${ListenerName}"

ListenerLogStatus=`lsnrctl <<-EOF | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
set log_status on
EOF`

echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
echo "INFO: Rotated successfully Listener Log for ${ListenerName}"
 fi
fi

echo "INFO: Starting cleanup of old listener log files"
echo "INFO: Will delete the following log files"
ListenerLogDirectory=`dirname $ListenerLogFile`
find ${ListenerLogDirectory} -name "listener.log.*" -mtime +${2} -print
find ${ListenerLogDirectory} -name "listener.log.*" -mtime +${2} -print | xargs rm -f

echo "INFO: Finished cleanup"

#-- EOF
Conclusion

If you run this rotate listener log script at a time (e.g. in the middle of the night) when you expect minimal activity on the DB, you can reduce the chance of losing listener log entries even further.
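
If you want to schedule the rotation, a simple cron entry is enough. Here is a minimal sketch of what it could look like, assuming the script is stored under /home/oracle/scripts (path, listener name and retention are examples and have to be adapted to your environment):

#-- Hypothetical crontab entry for the oracle user:
#-- rotate the LISTENER log every night at 02:30 and keep 7 days of history
30 2 * * * /home/oracle/scripts/rotate_listener.sh LISTENER 7 >> /home/oracle/scripts/rotate_listener.cron.log 2>&1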

 

Cet article Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) – Follow Up est apparu en premier sur Blog dbi services.

Oracle 12.2 – How to run a consistent full database export with dbms_datapump and parallel degree of 8

Tue, 2017-06-13 03:58

Nowadays, since the cloud is becoming more and more important, the PL/SQL APIs become more and more important too. Fortunately, Oracle has quite a lot of them. For example, how do you run a Data Pump export if you have no ssh connectivity to the server? You could use the old exp tool, which is still available even with Oracle 12.2, or you can use DBMS_DATAPUMP. The Data Pump API is quite well documented in the following books:

Database Utilities
https://docs.oracle.com/database/122/SUTIL/using-ORACLE_DATAPUMP-api.htm#SUTIL600

Database PL/SQL Packages and Types Reference
https://docs.oracle.com/database/122/ARPLS/DBMS_DATAPUMP.htm#ARPLS66050

But you might find some useful stuff in $ORACLE_HOME/rdbms/admin/dbmsdp.sql as well.

In this little how-to, I would like to show how to create a consistent full database export (parallel degree 8) with the Data Pump API.

There are only a few steps involved to get the job done.

1. Create a directory and grant the necessary privileges to user HR
2. Grant the DATAPUMP_EXP_FULL_DATABASE role to user HR
3. Execute the Data Pump job with DBMS_DATAPUMP
4. Monitor the Data Pump job
5. Optionally, do some cleanup

 

1.) Let’s start with the directory.

SQL> CREATE OR REPLACE DIRECTORY DATAPUMP_DIR AS '/u01/app/oracle/admin/DBIT122/dpdump';

Directory created.

SQL> GRANT READ, WRITE ON DIRECTORY DATAPUMP_DIR TO HR;

Grant succeeded.

2.) Now we grant the DATAPUMP_EXP_FULL_DATABASE role to the HR user

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE TO HR;

Grant succeeded.

Please be aware that the DATAPUMP_EXP_FULL_DATABASE role affects only export operations. It allows the user HR to run these operations:

  • Perform the operation outside of the scope of their schema
  • Monitor jobs that were initiated by another user
  • Export objects (for example, TABLESPACE definitions) that unprivileged users cannot reference

Without this role, you might run into the following error when doing a full export:

ERROR at line 1:
ORA-31631: privileges are required
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6844
ORA-06512: at line 6

 

3.) Now it’s time to run the Data Pump job. Be aware, that for a consistent export, you need to specify the FLASHBACK_TIME or FLASHBACK_SCN. In my case, I use the FLASHBACK_TIME and set it to the current SYSTIMESTAMP.

Ok. Let’s give it a try.

SQL> connect hr/hr
Connected.

SQL> @exp_datapump.sql
SQL> declare
  2        l_datapump_handle    NUMBER;  -- Data Pump job handle
  3        l_datapump_dir       VARCHAR2(20) := 'DATAPUMP_DIR';  -- Data Pump Directory
  4        l_status             varchar2(200); -- Data Pump Status
  5    BEGIN
  6        l_datapump_handle := dbms_datapump.open(operation => 'EXPORT',  -- operation = EXPORT, IMPORT, SQL_FILE
  7                                                job_mode =>'FULL',  -- job_mode = FULL, SCHEMA, TABLE, TABLESPACE, TRANSPORTABLE
  8                                                job_name => 'DBIT122 EXPORT JOB RUN 003',  -- job_name = NULL (default) or: job name (max 30 chars)
  9                                                version => '12'); -- version = COMPATIBLE (default), LATEST (dbversion), a value (11.0.0 or 12)
 10
 11            dbms_datapump.add_file(handle    => l_datapump_handle,
 12                               filename  => 'exp_DBIT122_%U.dmp',
 13                               directory => l_datapump_dir);
 14
 15        dbms_datapump.add_file(handle    => l_datapump_handle,
 16                               filename  => 'exp_DBIT122.log' ,
 17                               directory => l_datapump_dir ,
 18                               filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);
 19
 20        dbms_datapump.set_parameter(l_datapump_handle,'CLIENT_COMMAND','Full Consistent Data Pump Export of DBIT122 with PARALLEL 8');
 21
 22            dbms_datapump.set_parameter(l_datapump_handle,'FLASHBACK_TIME','SYSTIMESTAMP');
 23
 24        dbms_datapump.set_parallel(l_datapump_handle,8);
 25
 26        dbms_datapump.start_job(handle => l_datapump_handle);
 27
 28        dbms_datapump.wait_for_job(handle => l_datapump_handle,
 29                                   job_state => l_status );
 30
 31        dbms_output.put_line( l_status );
 32
 33        end;
 34  /

PL/SQL procedure successfully completed.

SQL>

4.) In another window, you might want to monitor the status of your export job.

SQL> r
  1  select owner_name, job_name, rtrim(operation) "OPERATION",
  2         rtrim(job_mode) "JOB_MODE", state, attached_sessions
  3    from dba_datapump_jobs
  4   where job_name not like 'BIN$%'
  5*  order by 1,2

OWNER_NAME JOB_NAME                         OPERATION    JOB_MODE     STATE        ATTACHED_SESSIONS
---------- -------------------------------- ------------ ------------ ------------ -----------------
HR         DBIT122 EXPORT JOB RUN 003       EXPORT       FULL         EXECUTING                    1

Cool. If the job finished successfully, you will see 8 dump files (because we specified exp_DBIT122_%U.dmp as the file name) and one log file.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/dpdump/ [DBIT122] ls -l
total 4752
-rw-r----- 1 oracle oinstall  630784 Jun 13 10:29 exp_DBIT122_01.dmp
-rw-r----- 1 oracle oinstall 3321856 Jun 13 10:29 exp_DBIT122_02.dmp
-rw-r----- 1 oracle oinstall  180224 Jun 13 10:29 exp_DBIT122_03.dmp
-rw-r----- 1 oracle oinstall   57344 Jun 13 10:29 exp_DBIT122_04.dmp
-rw-r----- 1 oracle oinstall  430080 Jun 13 10:28 exp_DBIT122_05.dmp
-rw-r----- 1 oracle oinstall   20480 Jun 13 10:29 exp_DBIT122_06.dmp
-rw-r----- 1 oracle oinstall   28672 Jun 13 10:29 exp_DBIT122_07.dmp
-rw-r----- 1 oracle oinstall  176128 Jun 13 10:28 exp_DBIT122_08.dmp
-rw-r--r-- 1 oracle oinstall   11966 Jun 13 10:29 exp_DBIT122.log

5.) Finally, you might want to do some cleanup in case you don't need the dump files and the log file anymore. Alternatively, you can start your export job with the REUSE_DUMPFILES=YES option, which overwrites the destination dump files if they exist. In case you want to do the cleanup manually, you can use the UTL_FILE package.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122.log' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_01.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_02.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_03.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_04.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_05.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_06.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_07.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_08.dmp' );

PL/SQL procedure successfully completed.

SQL>
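
Instead of removing every file one by one, a small anonymous block does the same job. This is just a sketch which assumes that the dump files follow the exp_DBIT122_%U.dmp naming with the two-digit suffixes 01 to 08 generated by the parallel 8 export:

begin
  -- remove the log file
  utl_file.fremove('DATAPUMP_DIR', 'exp_DBIT122.log');
  -- remove the 8 dump files produced by the parallel 8 export
  for i in 1 .. 8 loop
    utl_file.fremove('DATAPUMP_DIR', 'exp_DBIT122_' || lpad(to_char(i), 2, '0') || '.dmp');
  end loop;
end;
/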
Conclusion

The PL/SQL API’s become more and more important, especially in cloud environments. It makes quite a lot of sense, from my point of view to look closer into the one or the other. Especially the DBMS_DATAPUMP is an important one for moving data around.

 

 

 

Cet article Oracle 12.2 – How to run a consistent full database export with dbms_datapump and parallel degree of 8 est apparu en premier sur Blog dbi services.

12cR2 PDB refresh as a poor-man standby?

Mon, 2017-06-12 15:04
Disclaimer

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and then can synchronize a copy of the datafiles. I do not recommend to use it for disaster recovery in any production environment yet. Even if I’m using only supported features, those features were not designed for this usage, and are quite new and not stable yet. Disaster Recovery must use safe and proven technologies and this is why I’ll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet:
CapturePoorManSBY

Primary PRDPDB

On my primary server, I have a CDB1 container database in Standard Edition with one Pluggable Database: PRDPDB:

21:36:45 SQL> connect sys/oracle@//192.168.78.105/CDB1 as sysdba
Connected.
 
21:36:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO

I need a user there to be able to remote clone from it:

21:36:46 SQL> grant create session, sysoper, dba to C##DBA identified by oracle container=all;
Grant succeeded.

Standby server

On my standby server, I have a CDB1 container database in Standard Edition, where I create a database link to the production CDB using the user created above to connect to it:

21:36:46 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:36:46 SQL> create database link CDB1A connect to C##DBA identified by oracle using '//192.168.78.105/CDB1A';
Database link created.

My standby server runs Grid Infrastructure and has its database created on /acfs, which is an ACFS filesystem. We will see the reason later, when we need to create a PDB snapshot copy. Any filesystem where you can use PDB snapshot copy would be fine.

Standby SBYPDB

The creation of the ‘standby’ pluggable database is done with a simple remote clone command and can be run in 12cR2 with the source PRDPDB still opened read write:


21:36:46 SQL> create pluggable database SBYPDB from PRDPDB@CDB1A
21:36:46 2 file_name_convert=('/u01/oradata/CDB1A/PRDPDB','/acfs/oradata/CDB1/SBYPDB')
21:36:46 3 refresh mode every 1 minutes;
 
Pluggable database created.

The REFRESH MODE is a 12cR2 feature whose primary goal is to maintain and refresh a master clone for further provisioning of thin clones. This clone is refreshed every minute, which means that I expect at most a one minute gap between PRDPDB and SBYPDB data, plus the additional time needed to apply that minute of redo, of course.
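
To verify that the clone is indeed a refreshable one, and to trigger an extra refresh manually between two automatic runs, something like the following should do. This is a sketch: it assumes the 12.2 columns refresh_mode and refresh_interval in DBA_PDBS, and a manual refresh requires the PDB to be closed (mounted):

-- on the standby CDB: check the refresh settings of the clone
select pdb_name, refresh_mode, refresh_interval from dba_pdbs where pdb_name='SBYPDB';
 
-- trigger an additional refresh on demand (the PDB must be closed)
alter pluggable database SBYPDB refresh;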

Activity on the source

I will simulate a crash of the primary server and a failover to the standby while transactions are running. I'll run this activity on the SCOTT.EMP table:

21:39:03 SQL> connect scott/tiger@//192.168.78.105/PRDPDB;
Connected.
 
21:39:04 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 17-nov-1981 00:00:00 5000

I’m now updating the date and incrementing the number each second.

21:39:09 SQL> exec for i in 1..150 loop update emp set hiredate=sysdate, sal=sal+1; dbms_lock.sleep(1); commit; end loop
 
PL/SQL procedure successfully completed.

Here is the latest data on the primary server:

21:41:39 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

Crash the primary

The primary server is not supposed to be accessible in case of Disaster Recovery, so I’m crashing it:

21:41:39 SQL> disconnect
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
21:41:39 SQL> connect / as sysdba
Connected.
21:41:39 SQL> shutdown abort
ORACLE instance shut down.

Activate the standby

The datafiles are up to date, with a maximum 1 minute gap, and all I want is to open the PDB and have the application re-connect to it. However, a refreshable clone can only be opened read-only. This makes sense: you cannot apply more redo from the source once it is opened read-write. So my first idea was to stop the refresh mode:

21:41:45 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:41:45 SQL> alter session set container=SBYPDB;
Session altered.
 
21:41:45 SQL> alter pluggable database SBYPDB refresh mode none;
alter pluggable database SBYPDB refresh mode none
*
ERROR at line 1:
ORA-17627: ORA-12514: TNS:listener does not currently know of service requested
in connect descriptor
ORA-17629: Cannot connect to the remote database server

It seems that Oracle tries to do one last refresh when you stop the refresh mode, but this fails here because the source is not accessible. I think that it should be possible to open read-write without applying more redo. However, these refreshable clones were not designed for failover.

I hope that one day we will just be able to end refresh mode without connecting to source, accepting to lose the latest transactions.

Open Read Only

Without access to the source, I stay in refresh mode and can only open the PDB read-only:
21:41:45 SQL> alter pluggable database SBYPDB open read only;
Pluggable database altered.
 
21:41:47 SQL> alter session set container=SBYPDB;
Session altered.
 
21:41:47 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:41:47 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

My data is there, with my less than one minute gap, but that’s not sufficient for me. I want to run my application on it.

Snapshot Clone

My first idea to get the PDB read write on the standby server is to clone it. Of course, the failover time should not depend on the size of the database, so my idea is to do a snapshot copy, and this is why I've set up my standby CDB on ACFS. Here I'm cloning SBYPDB to the same name as the primary: PRDPDB

21:41:47 SQL> alter session set container=CDB$ROOT;
Session altered.
 
21:41:47 SQL> create pluggable database PRDPDB from SBYPDB file_name_convert=('SBYPDB','PRDPDB') snapshot copy;
Pluggable database created.
 
21:42:03 SQL> alter pluggable database PRDPDB open;
Pluggable database altered.

I have now my new PRDPDB opened read write with the latest data that was refreshed:

21:42:26 SQL> alter session set container=PRDPDB;
Session altered.
 
21:42:26 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:42:26 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

I’m running on a snapshot here. I can stay like that, or plan to move it out of the snapshot in the future. There is no online datafile move in Standard Edition, but there is the online pluggable database relocate. Anyway, running the database in a snapshot is sufficient to run a production after a Disaster Recovery and I can remove the SBYPRD so that there is no need to copy the ACFS extents on future writes.

Keep the snapshot

At this point, you might tell me that I cannot snapshot copy a PDB within the same CDB because I'm in Standard Edition. And that's right: you can create only one user PDB there and you are supposed to get a 'feature not enabled' error. But I was able to do it in my lab, with a small trick to inverse the CON_ID sequence:

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO
4 SBYPDB MOUNTED

Remote snapshot clone should be possible as well. But there's another licensing issue here: using ACFS snapshots for the database is not allowed in Standard Edition. This means that this solution probably requires a snapshot technology other than the one I'm using here in my lab.

If you don’t fear to violate the single-tenant rules, you may prefer to keep the SBYPRD for a while. Imagine that you are able to restart the crashed server for a few minutes, then you can do the last refresh of SBYPRD to have a look at the transactions that were lost in the 1 minute window.

I re-start the crashed CDB:

21:42:26 SQL> connect / as sysdba
Connected to an idle instance.
21:42:27 SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 356519592 bytes
Database Buffers 486539264 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.

and now, on my standby server, I can finally stop the refresh mode:

21:42:51 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:42:52 SQL> alter pluggable database SBYPDB close;
Pluggable database altered.
 
21:42:52 SQL> alter session set container=SBYPDB;
Session altered.
 
21:42:52 SQL> alter pluggable database SBYPDB refresh mode none;
Pluggable database altered.

Be careful not to have jobs or services starting here because your production is now on the snapshot clone PRDPDB running on the same server. Let’s open it:

21:43:02 SQL> alter pluggable database SBYPDB open restricted;
Pluggable database altered.
 
21:43:24 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

And here we are with the data at the moment of the crash. Then, the application owner can manually check what was missed between the last refresh (which made its way to PRDPDB) and the crash (visible in SBYPDB).

Unplug/Plug

I was not very satisfied with the snapshot clone because of the limitations in Standard Edition, which is where this solution may be interesting. I have the datafiles but cannot open SBYPDB read write. I tried to unplug it but cannot because of the refresh mode:

SQL> alter pluggable database SBYPDB unplug into '/tmp/tmp.xml';
 
Error starting at line : 1 in command -
alter pluggable database SBYPDB unplug into '/tmp/tmp.xml'
Error report -
ORA-01113: file 23 needs media recovery
ORA-01110: data file 23: '/acfs/oradata/CDB1/SBYPDB/undotbs01.dbf'
01113. 00000 - "file %s needs media recovery"
*Cause: An attempt was made to online or open a database with a file that
is in need of media recovery.
*Action: First apply media recovery to the file.

I know that I don’t need more recovery. So let’s unplug it in another way:

SQL> alter pluggable database SBYPDB open read only;
Pluggable database SBYPDB altered.
 
SQL> exec dbms_pdb.describe('/tmp/tmp.xml','SBYPDB');
PL/SQL procedure successfully completed.

Then drop it but keep the datafiles:

SQL> alter pluggable database SBYPDB close;
Pluggable database SBYPDB altered.
 
SQL> drop pluggable database SBYPDB;
Pluggable database SBYPDB dropped.

And plug it back:

SQL> create pluggable database SBYPDB using '/tmp/tmp.xml';
Pluggable database SBYPDB created.
 
SQL> alter pluggable database SBYPDB open;
Pluggable database SBYPDB altered.

Here it is. This takes a bit longer than the snapshot solution, but it is still a way to activate the 'standby' PDB without copying datafiles.

So what?

All the new 12cR2 multitenant features are available in all Editions, which is very good. Here, with ALTER PLUGGABLE DATABASE … REFRESH, we have log shipping and apply, for free in Standard Edition, at PDB level. And I've tested two ways to open this standby PDB in case of disaster recovery. I'm using only supported features here, but be careful that those features were not designed for this goal. The normal operations on a refreshable clone require that the remote CDB is accessible. But there are workarounds here because you can describe/drop/plug or snapshot clone from a PDB that you can open read only.

 

Cet article 12cR2 PDB refresh as a poor-man standby? est apparu en premier sur Blog dbi services.

Install Apache Kafka on Linux

Fri, 2017-06-09 06:37

download

What is Apache Kafka?

No, Kafka is not only the famous author (en.wikipedia.org/wiki/Franz_Kafka), it's also an open-source distributed pub-sub messaging system with powerful capabilities such as scalability and fault tolerance. It is a (near real-time) stream processing platform for streaming data sources as well. The design of Apache Kafka is strongly influenced by commit logs. Apache Kafka was originally developed by LinkedIn and was subsequently open sourced in early 2011.

The installation is pretty simple but needs to be done rigorously.

Binaries installation

    • Prerequisites
      Get a Linux server (I have chosen CentOS 7.3.1611); it can run on a small configuration (1 GB of memory minimum)
      Connect as a sudo user or root
    • Update your system and reboot
      [root@osboxes ~]# yum update
      Loaded plugins: fastestmirror, langpacks
      Loading mirror speeds from cached hostfile
       * base: mirror.switch.ch
       * epel: mirror.uni-trier.de
       * extras: mirror.switch.ch
       * updates: mirror.switch.ch
      No packages marked for update
  • Install the latest openjdk and set your environment
    [root@osboxes ~]# yum install java-1.8.0-openjdk
    Loaded plugins: fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
     * base: mirror.switch.ch
     * epel: mirror.imt-systems.com
     * extras: mirror.switch.ch
     * updates: mirror.switch.ch
    Package 1:java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64 already installed and latest version
    Nothing to do
    
    #Check it:
    [root@osboxes ~]# java -version
    openjdk version "1.8.0_131"
    OpenJDK Runtime Environment (build 1.8.0_131-b12)
    OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
    
    
    #Update your bash_profile:
    export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
    export JRE_HOME=/usr/lib/jvm/jre
    # and source your profile:
    [root@osboxes ~]# . ./.bash_profile
    [root@osboxes ~]# echo $JAVA_HOME
    /usr/lib/jvm/jre-1.8.0-openjdk
    [root@osboxes ~]# echo $JRE_HOME
    /usr/lib/jvm/jre
    
  • The Confluent Platform is an open source platform that contains all the components you need
    to create a scalable data platform built around Apache Kafka.
    Confluent Open Source is freely downloadable.
    Install the public key from Confluent

    rpm --import http://packages.confluent.io/rpm/3.2/archive.key
  • Add the confluent.repo to your /etc/yum.repos.d with this content
    [Confluent.dist]
    name=Confluent repository (dist)
    baseurl=http://packages.confluent.io/rpm/3.2/7
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/3.2/archive.key
    enabled=1
    
    [Confluent]
    name=Confluent repository
    baseurl=http://packages.confluent.io/rpm/3.2
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/3.2/archive.key
    enabled=1
  • Clean your yum caches
    yum clean all
  • And finally install the open source version of Confluent
    yum install confluent-platform-oss-2.11
    Transaction Summary
    ============================================================================================================================================================================
    Install  1 Package (+11 Dependent packages)
    
    Total download size: 391 M
    Installed size: 446 M
    Is this ok [y/d/N]: y
    Downloading packages:
    (1/12): confluent-common-3.2.1-1.noarch.rpm                                                                                                          | 2.0 MB  00:00:06
    (2/12): confluent-camus-3.2.1-1.noarch.rpm                                                                                                           |  20 MB  00:00:28
    (3/12): confluent-kafka-connect-elasticsearch-3.2.1-1.noarch.rpm                                                                                     | 4.3 MB  00:00:06
    (4/12): confluent-kafka-2.11-0.10.2.1-1.noarch.rpm                                                                                                   |  38 MB  00:00:28
    (5/12): confluent-kafka-connect-jdbc-3.2.1-1.noarch.rpm                                                                                              | 6.0 MB  00:00:07
    (6/12): confluent-kafka-connect-hdfs-3.2.1-1.noarch.rpm                                                                                              |  91 MB  00:01:17
    (7/12): confluent-kafka-connect-s3-3.2.1-1.noarch.rpm                                                                                                |  92 MB  00:01:18
    (8/12): confluent-kafka-rest-3.2.1-1.noarch.rpm                                                                                                      |  16 MB  00:00:16
    (9/12): confluent-platform-oss-2.11-3.2.1-1.noarch.rpm                                                                                               | 6.7 kB  00:00:00
    (10/12): confluent-rest-utils-3.2.1-1.noarch.rpm                                                                                                     | 7.1 MB  00:00:06
    (11/12): confluent-schema-registry-3.2.1-1.noarch.rpm                                                                                                |  27 MB  00:00:23
    (12/12): confluent-kafka-connect-storage-common-3.2.1-1.noarch.rpm                                                                                   |  89 MB  00:01:08
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                                                                       2.2 MB/s | 391 MB  00:03:00
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : confluent-common-3.2.1-1.noarch                                                                                                                         1/12
      Installing : confluent-kafka-connect-storage-common-3.2.1-1.noarch                                                                                                   2/12
      Installing : confluent-rest-utils-3.2.1-1.noarch                                                                                                                     3/12
      Installing : confluent-kafka-rest-3.2.1-1.noarch                                                                                                                     4/12
      Installing : confluent-schema-registry-3.2.1-1.noarch                                                                                                                5/12
      Installing : confluent-kafka-connect-s3-3.2.1-1.noarch                                                                                                               6/12
      Installing : confluent-kafka-connect-elasticsearch-3.2.1-1.noarch                                                                                                    7/12
      Installing : confluent-kafka-connect-jdbc-3.2.1-1.noarch                                                                                                             8/12
      Installing : confluent-kafka-connect-hdfs-3.2.1-1.noarch                                                                                                             9/12
      Installing : confluent-kafka-2.11-0.10.2.1-1.noarch                                                                                                                 10/12
      Installing : confluent-camus-3.2.1-1.noarch                                                                                                                         11/12
      Installing : confluent-platform-oss-2.11-3.2.1-1.noarch                                                                                                             12/12
      Verifying  : confluent-kafka-connect-storage-common-3.2.1-1.noarch                                                                                                   1/12
      Verifying  : confluent-platform-oss-2.11-3.2.1-1.noarch                                                                                                              2/12
      Verifying  : confluent-rest-utils-3.2.1-1.noarch                                                                                                                     3/12
      Verifying  : confluent-kafka-connect-elasticsearch-3.2.1-1.noarch                                                                                                    4/12
      Verifying  : confluent-kafka-connect-s3-3.2.1-1.noarch                                                                                                               5/12
      Verifying  : confluent-kafka-rest-3.2.1-1.noarch                                                                                                                     6/12
      Verifying  : confluent-camus-3.2.1-1.noarch                                                                                                                          7/12
      Verifying  : confluent-kafka-connect-jdbc-3.2.1-1.noarch                                                                                                             8/12
      Verifying  : confluent-schema-registry-3.2.1-1.noarch                                                                                                                9/12
      Verifying  : confluent-kafka-2.11-0.10.2.1-1.noarch                                                                                                                 10/12
      Verifying  : confluent-kafka-connect-hdfs-3.2.1-1.noarch                                                                                                            11/12
      Verifying  : confluent-common-3.2.1-1.noarch                                                                                                                        12/12
    
    Installed:
      confluent-platform-oss-2.11.noarch 0:3.2.1-1
    
    Dependency Installed:
      confluent-camus.noarch 0:3.2.1-1                           confluent-common.noarch 0:3.2.1-1                           confluent-kafka-2.11.noarch 0:0.10.2.1-1
      confluent-kafka-connect-elasticsearch.noarch 0:3.2.1-1     confluent-kafka-connect-hdfs.noarch 0:3.2.1-1               confluent-kafka-connect-jdbc.noarch 0:3.2.1-1
      confluent-kafka-connect-s3.noarch 0:3.2.1-1                confluent-kafka-connect-storage-common.noarch 0:3.2.1-1     confluent-kafka-rest.noarch 0:3.2.1-1
      confluent-rest-utils.noarch 0:3.2.1-1                      confluent-schema-registry.noarch 0:3.2.1-1
    
    Complete!

Ok, the binaries are installed now. The next step is to configure and launch Zookeeper and Kafka itself!

    • First, take a look at the Zookeeper configuration:
      [root@osboxes kafka]# cat /etc/kafka/zookeeper.properties
      # Licensed to the Apache Software Foundation (ASF) under one or more
      # contributor license agreements.  See the NOTICE file distributed with
      # this work for additional information regarding copyright ownership.
      # The ASF licenses this file to You under the Apache License, Version 2.0
      # (the "License"); you may not use this file except in compliance with
      # the License.  You may obtain a copy of the License at
      #
      #    http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
      # the directory where the snapshot is stored.
      dataDir=/var/lib/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      # disable the per-ip limit on the number of connections since this is a non-production config
      maxClientCnxns=0
    • Don’t change the configuration file (the default values are okay to start with)  and launch Zookeeper
      /usr/bin/zookeeper-server-start /etc/kafka/zookeeper.properties
      ...
      [2017-06-08 14:05:02,051] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
    • Keep the session with Zookeeper and open a new terminal for the Kafka part
      /usr/bin/kafka-server-start /etc/kafka/server.properties
      ...
      [2017-06-08 14:11:31,333] INFO Kafka version : 0.10.2.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
      [2017-06-08 14:11:31,334] INFO Kafka commitId : 80ff5014b9e74a45 (org.apache.kafka.common.utils.AppInfoParser)
      [2017-06-08 14:11:31,335] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
      [2017-06-08 14:11:31,350] INFO Waiting 10062 ms for the monitored broker to finish starting up... (io.confluent.support.
      metrics.MetricsReporter)
      [2017-06-08 14:11:41,413] INFO Monitored broker is now ready (io.confluent.support.metrics.MetricsReporter)
      [2017-06-08 14:11:41,413] INFO Starting metrics collection from monitored broker... (io.confluent.support.metrics.Metric
      sReporter)
    • As with Zookeeper, leave the Kafka terminal open and launch a new session for the topic creation.

 Topic creation

  • Messages in Kafka are categorized into topics; a topic is like a database table or a directory in a file system.
    First, we are going to create a new topic.

    [root@osboxes ~]# /usr/bin/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic dbi
    Created topic "dbi".
  • Check if the topic has been effectively created
    [root@osboxes ~]# /usr/bin/kafka-topics --list --zookeeper localhost:2181
    dbi
  • Nice, we can now produce some messages using the topic “dbi”

    [root@osboxes ~]# kafka-console-producer --broker-list localhost:9092 --topic dbi
    be passionate
    be successful
    be responsible
    be sharing
  • Open a new terminal and act as a consumer with the console
    /usr/bin/kafka-console-consumer --zookeeper localhost:2181 --topic dbi --from-beginning
    be passionate
    be successful
    be responsible
    be sharing
  • Et voilà! The messages produced with the producer now appear in the consumer window. You can type a new message in the producer console and it will be displayed immediately in the other terminal. If you want to stop the consoles, you can press Ctrl-C.

Now the most difficult part is still to be done: configuring Kafka with multiple producers/consumers within a complex broker topology.
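
To give an idea of the broker side of such a topology, adding a second broker on the same machine is basically a copy of the configuration file with a different id, port and log directory. This is only a sketch based on the standard Kafka broker properties; the paths and ports are examples:

# create a configuration file for a second broker
cp /etc/kafka/server.properties /etc/kafka/server-1.properties

# adjust the following properties in /etc/kafka/server-1.properties
# broker.id must be unique within the cluster
broker.id=1
# listen on another port than the first broker (9092)
listeners=PLAINTEXT://:9093
# use a dedicated log directory
log.dirs=/var/lib/kafka-1
# all brokers register themselves in the same Zookeeper ensemble
zookeeper.connect=localhost:2181

# and start it in its own terminal
/usr/bin/kafka-server-start /etc/kafka/server-1.properties

With two brokers up, topics can then be created with --replication-factor 2 to get fault tolerance for the partitions.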

cluster_architecture

 

Cet article Install Apache Kafka on Linux est apparu en premier sur Blog dbi services.

SQL Server 2017 high availability and Resumable Online Indexes

Mon, 2017-06-05 13:19

Ensuring data availability is an important part of a high availability design. SQL Server AlwaysOn features, including SQL Server FCIs and availability groups, address some aspects of the problem, but we may also rely on online operations features to maximize data availability. Indeed, for some highly critical workloads, offline operations during the maintenance phase are not permitted and may even call the entire architecture into question.

But have you ever faced an outage of your primary replica leading to a failover to the secondary replica during the database maintenance window? Yes, your architecture is designed to behave this way for sure, but let's say you were in the middle of an index rebuild operation on a big table and everything is now rolled back. In this case, you will probably have to wait for the next maintenance window to initiate another rebuild operation, starting again from the beginning…

Fortunately, the new Resumable Online Indexes feature in SQL Server 2017 comes to the rescue and seems promising to address such a situation. Some limitations still exist and are well documented in the Microsoft documentation, but I guess Microsoft will work hard to remove them in the future.

Let’s play a little bit with the Resumable Online Indexes on my AG environment that includes two replicas with cluster_type = NONE meaning we don’t rely on an underlying orchestrator to handle automatic failover of the AG resources. In this context, it will be sufficient to demonstrate how Resumable Online Indexes work.

blog 123 - 01 - resumable onlline index - scenario_

My initial configuration is as follows:

blog 123 - 02 - resumable onlline index - ag win

I used the AdventureWorks2016 database with a bigTransactionHistory table generated from the Adam Machanic script.

select 
	o.name as table_name,
	i.name as index_name,
	p.index_id,
	au.type_desc,
	au.used_pages,
	p.rows
from 
	sys.allocation_units as au
join 
	sys.partitions as p on au.container_id = p.hobt_id
join 
	sys.objects as o on o.object_id = p.object_id
left join 
	sys.indexes as i on i.object_id = p.object_id
						and i.index_id = p.index_id
where p.object_id = object_id('bigTransactionHistory');

 

blog 123 - 03 - resumable onlline index - bigTransactionHistory

Let’s then initiate a rebuild of the pk_bigTransactionHistory index by using the new RESUMABLE option as follows:

ALTER INDEX pk_bigTransactionHistory 
ON bigTransactionHistory 
REBUILD
WITH (ONLINE = ON, RESUMABLE = ON);
GO

 

By using this option, I made the rebuild phase “resumable” and I'm now able to perform additional actions like pausing or aborting the index rebuild operation. I may also use the MAX_DURATION parameter to set the duration (in minutes) of the operation before it is paused. We may easily identify resumable sessions running on the SQL Server instance by using the sys.dm_exec_requests DMV and the new is_resumable column.

blog 123 - 04 - resumable onlline index - sys_dm_exec_requests_is_resumable
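
For reference, the operation can also be paused or aborted manually from another session with the same ALTER INDEX statement. This is just a sketch based on the documented resumable options (PAUSE and ABORT, with RESUME shown later in this post):

-- pause the running resumable rebuild (it can be resumed later)
ALTER INDEX pk_bigTransactionHistory ON bigTransactionHistory PAUSE;
GO
-- or abort it completely (the operation cannot be resumed afterwards)
ALTER INDEX pk_bigTransactionHistory ON bigTransactionHistory ABORT;
GO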

Let’s initiate a manual failover while the index is rebuilt by SQL Server on the primary replica. Then during the outage of the first primary replica, I faced the following error message:

blog 123 - 05 - resumable onlline index - error during failover

A typical error message for this situation… but the most interesting part comes now. If we query the new sys.index_resumable_operations DMV, we may notice that our index rebuild operation is paused.

SELECT 
	iro.last_max_dop_used,
	iro.sql_text,
	iro.start_time,
	iro.last_pause_time,
	iro.total_execution_time,
	iro.page_count, 
	iro.percent_complete
FROM  
	sys.index_resumable_operations as iro
JOIN
	sys.objects AS o ON o.object_id = iro.object_id

 

blog 123 - 06 - resumable onlline index - resume op dmv

From the above output, we may extract interesting column values:

  • percent_complete : the percent complete value at the moment of pausing the operation
  • page_count = size of the second index structure at the moment of pausing the operation
  • last_pause_time = it is self-explanatory
  • last_max_dop_used = max dop value used during the last rebuild operation. It is interesting to notice we may change this value for the same operation between pausing sessions.

Before resuming the index operation, let's have a brief look at the transaction log space used. I asked myself some questions about the transaction behavior when the index operation is paused: Does a paused state have an impact on the transaction log? Do the VLFs touched by this operation remain active until the index is completely rebuilt? Let's answer these questions with the following test.

A first look at the transaction log space used shows that the transaction log contains records that must be backed up.

use AdventureWorks2016;

select 
	total_log_size_in_bytes / 1024 / 1024 / 1024 as total_log_size_GB,
	used_log_space_in_bytes / 1024 / 1024  as used_log_space_MB,
	log_space_in_bytes_since_last_backup / 1024 / 1024 as log_space_MB_since_last_backup,
	used_log_space_in_percent
from sys.dm_db_log_space_usage;
go

 

blog 123 - 07 - resumable onlline index - tlog

Based on the previous result, let's perform a log backup…

backup log AdventureWorks2016
to disk = 'NUL' 
with stats = 10;
go

blog 123 - 08 - resumable onlline index - tlog

Well, pausing an index operation does not leave an active transaction behind that would impact the transaction log retention; after a few thoughts, it seems obvious that it has to work this way.

What about storage? The resumable option is available only with online index operations, meaning SQL Server must maintain the corresponding hidden structures as long as necessary to rebuild the underlying index. According to the BOL:

No extra resources are required for resumable index rebuild except for:
  • Additional space required to keep the index being built, including the time when index is being paused

Obviously, maintaining such a structure may have a huge impact on your workload, depending on your context. Out of curiosity, I went through the system views to see those hidden structures as follows:

select 
	o.name as table_name,
	i.name as index_name,
	p.index_id,
	au.type_desc,
	SUM(au.used_pages) as total_used_pages,
	SUM(p.rows) as total_rows,
	COUNT(*) as nb_partitions
from 
	sys.allocation_units as au
join 
	sys.partitions as p on au.container_id = p.hobt_id
join 
	sys.objects as o on o.object_id = p.object_id
left join 
	sys.indexes as i on i.object_id = p.object_id
						and i.index_id = p.index_id
where 
	p.object_id = object_id('bigTransactionHistory')
group by 
	o.name, i.name, p.index_id, au.type_desc
order by 
	o.name, p.index_id;
go

 

blog 123 - 09 - resumable onlline index - hidden index structure

Regarding the second record, simple math (9261 * 100. / 190150) confirms that this structure is only about 4.8 percent of the size of the underlying clustered index. We may retrieve this value from the first output of the new sys.index_resumable_operations system view. For the third one, my guess is that it corresponds to the temporary mapping index used by SQL Server to determine which records to delete in the new indexes that are being built when rows in the underlying table are updated or deleted.

Here is an execution plan sample of an update query against the bigTransactionHistory table. We may notice DELETE / UPDATE operations to a “hidden” nonclustered index related to this special index highlighted in red.

blog 123 - 10 - resumable onlline index - execution plan

Go ahead and let’s resume the pending index operation by using the new RESUME option. We may also add the MAX_DURATION option to guarantee the rebuild operation will go back to a pause state if we consider that reaching the maintenance windows limit cannot hurt the daily business workload in terms of resource for instance. An option we will definitely consider to add it with the next version of our DMK maintenance tool for SQL Server.

ALTER INDEX pk_bigTransactionHistory 
ON bigTransactionHistory RESUME WITH (MAX_DURATION = 120 MINUTES);
GO

In this blog, we have just scratched the surface of this new maintenance option and its capabilities. Other articles will probably come in the future.

Stay tuned!

 

 

Cet article SQL Server 2017 high availability and Resumable Online Indexes est apparu en premier sur Blog dbi services.

Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF)

Mon, 2017-06-05 06:49

The listener log file contains a lot of very useful information, like the program which was used for the connection, the IP address the connection is coming from, the OS user which was used on the client, and many more.

05-JUN-2017 12:36:19 * service_update * DBIT122 * 0
05-JUN-2017 12:36:19 * (CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(CID=(PROGRAM=sqlplus@dbidg01)(HOST=dbidg01)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.201)(PORT=42619)) * establish * DBIT122_SITE1_DGMGRL * 0
05-JUN-2017 12:36:19 * (CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(CID=(PROGRAM=sqlplus@dbidg01)(HOST=dbidg01)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.201)(PORT=42620)) * establish * DBIT122_SITE1_DGMGRL * 0

However, it does not only contain information about successful connections; it also shows when connections have been rejected because of TCP.VALIDNODE_CHECKING or any type of TNS error. So, why not use it for auditing? And what about the performance overhead of listener logging?

Let’s start first with the performance overhead. I am doing 4 types of tests here.

1. Performance with no listener logging (100 connections)
2. Performance with a small listener.log (100 connections)
3. Performance with big listener.log, close to 4G (100 connections)
4. Performance with a full listener.log, exactly 4G (100 connections)
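
The test script itself is nothing fancy: it simply opens 100 sqlplus connections through the listener in a loop. Here is a minimal sketch of what it could look like; the demo/demo account and the connect string are assumptions and have to be adapted to your environment:

#!/bin/bash
#-- performance_listener.sh (sketch): open 100 sqlplus connections through the listener
for i in {1..100}
do
  echo "SQLPlus count ${i}"
  echo "exit" | sqlplus -s demo/demo@//localhost:1521/DBIT122 >/dev/null 2>&1
done
#-- EOF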

 

Test 1: Performance with no listener logging (100 connections)
oracle@dbidg01:/home/oracle/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.360s
user    0m1.065s
sys     0m0.495s
Test 2: Performance with a small listener.log (100 connections)
oracle@dbidg01:/home/oracle/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.401s
user    0m1.049s
sys     0m0.511s
Test 3: Performance with big listener.log, close to 4G (100 connections)
oracle@dbidg01:/home/oracle/tmp/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
SQLPlus count 3
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.766s
user    0m1.110s
sys     0m0.522s
Test 4: Performance with a full listener.log, exactly 4G (100 connections)
oracle@dbidg01:/home/oracle/tmp/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
SQLPlus count 3
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.430s
user    0m1.068s
sys     0m0.501s

As you can see in the results, sqlplus connections without listener logging are the fastest ones, and the bigger the file gets, the slower the connections become. But wait a second. What's going on with test 4? As soon as the listener log is full, connections are faster again. The reason test 4 is faster again is that the listener is not logging anymore. When it reaches the 4G limit, which still exists with Oracle 12.2, the listener does not crash like in some older versions, but the log entries go to /dev/null. So I lose all my auditing information once the 4G limit is reached.

How do we overcome this issue? The answer is to rotate the listener log. But how do we do it? We can't simply move the old listener log away and create a new empty one, because the file descriptor is still open and we would create a huge mess.

We could stop the listener, rotate the listener log and start the listener again. A little better, but while the listener is stopped, no connection is possible, which is also not a good idea.

From my point of view, the best solution is to stop listener logging online as soon as it hits 1G, rotate the listener log and start listener logging again, like in the following example:

lsnrctl <<-EOF
set current_listener $ListenerName
set log_status off
EOF

mv ${ListenerLogFile} ${ListenerLogFile}.${Date}

lsnrctl <<-EOF
set current_listener $ListenerName
set log_status on
EOF

Now you have the advantage that the log rotation is an online operation and you don't create any mess with open file descriptors. And if you rotate the listener log before it reaches 1G, you are also less likely to run into performance issues.

Conclusion

Take care of your listener log so that it does not hit the 4G file size limit; otherwise you might lose very important information which is simply not logged anymore. And do the listener log rotation correctly. ;-)

 

Cet article Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) est apparu en premier sur Blog dbi services.

12cR2 auditing all users with a role granted

Sun, 2017-06-04 11:03

12.1 introduced Unified Auditing where you define policies and then enable them. As with the traditional audit, you enable them for all users or for specific users. The unified auditing adds a syntax to audit all users except some listed ones. 12.2 adds a syntax to audit a group of users, defined by the role granted. This is the best way to enable a policy for a group of users, including those created later.

I create a simple policy, to audit logon and DBA role usage:

SQL> create audit policy DEMO_POLICY actions logon, roles DBA;
Audit POLICY created.

I create a new DBA user, USER1

SQL> create user USER1 identified by covfefe quota unlimited on USERS;
User USER1 created.
SQL> grant DBA to USER1;
Grant succeeded.

I want to enable the policy for this user because I want to audit all DBAs

SQL> audit policy DEMO_POLICY by USER1;
Audit succeeded.

I remove Audit records for this demo

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified,use_last_arch_timestamp=>false);
PL/SQL procedure successfully completed.

Let’s connect with this user and see what is audited:

SQL> connect USER1/covfefe@//localhost/PDB1
Connected.
 
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM

The logon and the select on a dictionary table (possible here thanks to the DBA role) have been audited because the policy is enabled for this user.

We have a new DBA and we create a new user for him:

SQL> create user USER2 identified by covfefe quota unlimited on USERS;
User USER2 created.
SQL> grant DBA to USER2;
Grant succeeded.

He connects and check what is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM

Nothing is audited for this user. The DBA role usage is audited, but only for USER1.

Of course, we can add an audit statement for each user created for a DBA:

SQL> audit policy DEMO_POLICY by USER2;
Audit succeeded.

Then his new activity is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.52.338928000 PM

But for security reasons, we would like to be sure that any new user having the DBA role granted is audited.
Let’s try something else

SQL> noaudit policy DEMO_POLICY by USER1,USER2;
Noaudit succeeded.

We can simply audit all users:

SQL> audit policy DEMO_POLICY;
Audit succeeded.

But this is too much. Some applications constantly log on and log off, and we don't want to have that in the audit trail.

SQL> noaudit policy DEMO_POLICY;
Noaudit succeeded.

We can still enable the policy for all users, and exempt those users we don’t want:

SQL> audit policy DEMO_POLICY except DEMO;
Audit succeeded.

Here is what is enabled, and this will audit all new users:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO DEMO_POLICY EXCEPT EXCEPT USER DEMO USER YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

But once again, this is not what we want.

SQL> noaudit policy DEMO_POLICY by DEMO;
Noaudit succeeded.
 
SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

Audit all users to whom roles are granted directly

In 12cR2 we have the possibility to do exactly what we want: audit all users having the DBA role granted:

SQL> audit policy DEMO_POLICY by users with granted roles DBA;
Audit succeeded.

This enables the audit for all users to whom the DBA role has been directly granted:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO_POLICY INVALID BY GRANTED ROLE DBA ROLE YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

The important thing is that a newly created user will be audited as long as he has the DBA role directly granted:

SQL> create user USER3 identified by covfefe quota unlimited on USERS;
User USER3 created.
SQL> grant DBA to USER3;
Grant succeeded.
 
SQL> connect USER3/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- -------------------------------
Standard   oracle      VM104    pts/0    USER1      LOGON       DEMO_POLICY            CREATE SESSION        04-JUN-17 04.29.17.915217000 PM
Standard   oracle      VM104    pts/0    USER1      SELECT      DEMO_POLICY            SELECT ANY DICTIONARY 04-JUN-17 04.29.17.988151000 PM
Standard   oracle      VM104    pts/0    USER1      SELECT      DEMO_POLICY            SELECT ANY DICTIONARY 04-JUN-17 04.29.18.117258000 PM
Standard   oracle      VM104    pts/0    USER2      LOGON       DEMO_POLICY            CREATE SESSION        04-JUN-17 04.29.18.322716000 PM
Standard   oracle      VM104    pts/0    USER2      SELECT      DEMO_POLICY            SELECT ANY DICTIONARY 04-JUN-17 04.29.18.345351000 PM
Standard   oracle      VM104    pts/0    USER2      SELECT      DEMO_POLICY            SELECT ANY DICTIONARY 04-JUN-17 04.29.18.415117000 PM
Standard   oracle      VM104    pts/0    USER2      SELECT      DEMO_POLICY            SELECT ANY DICTIONARY 04-JUN-17 04.29.18.439656000 PM
Standard   oracle      VM104    pts/0    USER2      SELECT      DEMO_POLICY            SELECT ANY DICTIONARY 04-JUN-17 04.29.18.455274000 PM
Standard   oracle      VM104    pts/0    USER3      LOGON       DEMO_POLICY            CREATE SESSION        04-JUN-17 04.29.18.507496000 PM

This policy applies to all users having the DBA role directly granted, and it can audit more than their use of DBA privileges: here I audit all logins from users having the DBA role.
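Note that, as far as I understand this role-based enablement, it follows the grant: if the DBA role is later revoked, new sessions of that user should no longer be audited by this policy. A sketch with the USER3 created above:

-- Sketch: after this revoke, new sessions of USER3 should no longer match
-- the "by users with granted roles DBA" enablement
revoke DBA from USER3;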

So what?

We don’t use roles only to group the privileges to be granted. A role is usually granted to define a group of users: DBAs, application users, read-only application users, etc. Unified Auditing can define complex policies, combining the audit of actions, privileges, and roles. The 12.2 syntax makes it possible to enable a policy for a specific group of users.
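As an illustration, here is a minimal sketch of such a combination. The APP_ADMIN role and the audited actions are hypothetical, not part of the demo above:

-- Hypothetical policy: audit a few sensitive privileges and actions,
-- enabled only for the users directly granted the APP_ADMIN role
create audit policy app_admin_policy
  privileges alter user, grant any privilege
  actions drop table, truncate table;

audit policy app_admin_policy by users with granted roles APP_ADMIN;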

 

Cet article 12cR2 auditing all users with a role granted est apparu en premier sur Blog dbi services.

Prochaine édition des 24 HOP 2017 francophone

Tue, 2017-05-30 06:28

The next edition of the French-speaking 24 Hours of PASS 2017 will take place on June 29-30.

As a reminder, the format is simple: 24 free webinars in French, spread over 2 days from 07:00 to 18:00 GMT. The only requirement is to register for the sessions you will attend. This also lets you get the video recording if you want to watch it again later.

This year there will again be something for every taste: monitoring, performance, Azure, BI, Big Data and machine learning, data modeling, high availability, open source, and what is coming in the next version of SQL Server!

For my part, I will have the privilege of presenting a session about the new high availability possibilities with SQL Server in a mixed world (Windows and Linux) and in a “full Linux” world.

 


Looking forward to seeing you there!

 

 

Cet article Prochaine édition des 24 HOP 2017 francophone est apparu en premier sur Blog dbi services.

New release model for JUL17 (or Oracle 17.3): RU and RUR

Tue, 2017-05-30 04:58
Updated June 5th

When reading about the new version numbering in SQL Developer, I took the risk of changing the title and guessing the future version number for Oracle Database: Oracle 17.3 for July (Q3) 2017. Of course, this is just a guess.

Updated June 10th

Confirming the previous guess, we are starting to see bugs planned to be fixed in version 18.1, which is probably the January 2018 Release Update.
[Screenshot: bugs with a target fix version of 18.1]

News from DOAGDB17 – May 30th

Oracle Database software comes in versions: 10g in 2004, 11g in 2007, 12c in 2013
In between, there are releases: 10gR2 in 2005, 11gR2 in 2009, 12cR2 in 2017
In between, there are Patch Sets: the latest 11gR2 is 11.2.0.4 (2 years after 11.2.0.3) and 12cR1 is 12.1.0.2 (one year after 12.1.0.1)
Those are full installs, with a complete Oracle Home. The installation can be done in place or into a new Oracle Home, but it is a full install either way. There are a lot of changes in the system dictionary, and you run catupgrd.sql to update it, which takes from 30 minutes to 1 hour on average depending on the components and the system.

Their primary goal is to release features, so you should test them carefully. For example, the In-Memory option came in the first Patch Set of 12cR1.

This will change in 2017 with annual feature releases. Well, this looks like a rename of Patch Set.

All releases are supported for several years, with fixes (patches) provided for encountered issues: security, wrong results, hangs, crashes, etc. Rather than installing one-off patches and requesting merges for them, Oracle supplies bundle patches: merged together, tested as a whole, cumulative, and released quarterly. Depending on the content and the platform, they are called Bundle Patches (on Windows), CPU (security fixes only), SPU (the new name for the CPU), PSU (Patch Set Update), Proactive Bundle Patch (a bit more than the PSU)…
The names have changed, and so has the version numbering, which now includes the release date.
You apply patches into the Oracle Home with OPatch utility and into the database dictionary with the new datapatch utility.
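As an example, once datapatch has run, you can check what has been applied to the dictionary by querying DBA_REGISTRY_SQLPATCH. A minimal query sketch (the exact column list varies slightly between releases):

-- Sketch: patches registered by datapatch in this database
select patch_id, action, status, description, action_time
from dba_registry_sqlpatch
order by action_time;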

Their primary goal is stability: fixing issues with a low risk of regression. The recommended minimum is the PSU; the Proactive Bundle Patch contains more fixes for known bugs.

This will change in 2017 with quarterly Release Updates. Well, this looks like a rename of the PSU to RUR (Release Update Revision) and of the Proactive Bundle Patch to RU (Release Update).

The goal is to reduce the need to apply one-off patches on top of PSUs.

Here is all I know about it, as presented by Andy Mendelsohn in the DOAG Datenbank 2017 keynote:

[Photo: slide from the DOAG Datenbank 2017 keynote]

It is not common to have new announcements in the last quarter of Oracle’s fiscal year. Thanks to DOAG for this keynote.

 

Cet article New release model for JUL17 (or Oracle 17.3): RU and RUR est apparu en premier sur Blog dbi services.
