Yann Neuhaus

dbi services technical blog

12cR2 Single-Tenant: Multitenant Features for All Editions

Mon, 2016-11-14 09:00

Now that 12.2 is there, in the Oracle Public Cloud Service, I can share the slides of the presentation I made for Oracle Open World:

I’ll give the same session in French, in Geneva on November 23rd at Oracle Switzerland. Ask me if you want an invitation.

The basic idea is that the non-CDB architecture is deprecated, and not available in the Oracle Public Cloud. If you don’t purchase the Multitenant option, then you will use ‘Single-Tenant’, and in 12.2 there are interesting features coming with it. Don’t fear it. Learn it and benefit from it.


In addition to that, I’ll detail:

  • The 12.2 new security feature coming with multitenant: at DOAG 2016
  • The internals of multitenant architecture: at UKOUG TECH16

And don’t hesitate to come to the dbi services booth for questions and/or demos about Multitenant.
There’s also the book I co-authored: Oracle Database 12c Release 2 Multitenant (Oracle Press) which should be available within a few weeks.

 


12cR2 new index usage tracking

Sun, 2016-11-13 14:15

A common question is: how do I know which indexes are not used, so that I can drop them? If you have tried index monitoring, you have probably seen its limits, which make it difficult to use. It has been improved in 12.2, so let’s check whether it helps to release the stress of performance regression when we drop an index… or not.

I’ll check two views here. Here is what documentation says about them:

  • DBA_INDEX_USAGE displays cumulative statistics for each index.
  • V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes.
    After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

The documentation about V$INDEX_USAGE_INFO shows a column INDEX_STATS_COLLECTION_TYPE whose description explains that, by default, the statistics are collected based on sampling (only a few of the executions are considered when collecting the statistics). The collection type that gathers statistics for every execution may have a performance overhead.
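
To see which mode is currently in effect, you can simply look at those columns of V$INDEX_USAGE_INFO (they also appear in the full output further below):

select index_stats_enabled, index_stats_collection_type from v$index_usage_info;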

SAMPLED

I’ve found an undocumented parameter to control this collection. It defaults to ‘SAMPLED’ and I’ll set it to ‘ALL’ to get a deterministic test case:
17:53:51 SQL> alter session set "_iut_stat_collection_type"=ALL;
Session altered.

So this is the first problem with how reliable index usage tracking is. If your boss runs a report once a month which needs an index, you may miss this execution, think that the index is unused, and decide to drop it. And you will have a regression. Do you want to take that risk with sampled monitoring?

Execution using index

On the SCOTT schema I’m running a query that uses the index PK_DEPT

17:53:51 SQL> set autotrace on explain
Autotrace Enabled
Displays the execution plan only.
 
17:53:51 SQL> select * from emp join dept using(deptno) where ename like 'K%';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM DNAME LOC
10 7839 KING PRESIDENT 17-nov 00:00:00 5000 ACCOUNTING NEW YORK
 
Explain Plan
-----------------------------------------------------------
Plan hash value: 3625962092
 
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 117 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMP | 1 | 87 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 30 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
3 - filter("EMP"."ENAME" LIKE 'K%')
4 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- this is an adaptive plan
 
17:53:52 SQL> set autotrace off
Autotrace Disabled

When I look at the index usage tracking views, I don’t see this usage, and the reason is that the last flush happened before the execution:

17:53:52 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 8 30790 13-NOV-16 05.48.12.218000000 PM 3
 
17:53:52 SQL> select * from dba_index_usage where owner='SCOTT';
 
no rows selected

The statistics are gathered in memory and are flushed to the dictionary every 15 minutes. For the moment, I’ve not found how to flush them manually, so I just wait 900 seconds:


17:53:52 SQL> host sleep 900
 
18:10:32 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 9 45898 13-NOV-16 06.03.13.344000000 PM 3
 
18:10:32 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 1 1 1 0 1 0 0 0 0 0 0 0 0 13-nov 18:03:13

Here is my index usage, now recorded: one execution, one row returned from the index.

DBMS_STATS

One drawback of the old index monitoring was that statistics gathering set the monitoring flag to ‘YES’. Let’s see if it’s better in 12.2:


18:10:32 SQL> exec dbms_stats.gather_index_stats('SCOTT','PK_DEPT');
PL/SQL procedure successfully completed.

Again, waiting 15 minutes to get it flushed (and check LAST_FLUSH_TIME):


18:10:32 SQL> host sleep 900
 
18:27:12 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 1 3 30000 10 48136 13-NOV-16 06.18.13.748000000 PM 3
 
18:27:12 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

It seems that the index usage tracking has been incremented here. Total rows returned is incremented by 4, which is the number of rows in DEPT, read by dbms_stats.
This makes it very difficult to use for detecting unused indexes, because we can expect that even unused indexes have statistics gathered on them.
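
As a starting point only, here is a sketch of a query you could use to list the SCOTT indexes that have no recorded usage at all, keeping in mind the sampling and flush caveats above (the owner filter is to be adapted to your schemas):

select i.owner, i.index_name
from dba_indexes i left outer join dba_index_usage u
  on u.owner=i.owner and u.name=i.index_name
where i.owner='SCOTT' and u.object_id is null
order by i.index_name;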

Index on Foreign Key to avoid table locks

There’s another risk when we drop an index. It may not be used for access, but to avoid a TM Share lock on a child table when deleting rows from the referenced table. This is again something that is not monitored. When the parent table has few rows, like some lookup tables, the index on the foreign key will probably not be used to access the child rows, or to check that there are no child rows when you delete a parent one: a full scan will be faster. But an index on it is still required to avoid locking the whole child table when we delete rows from the parent.
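
As a side note, here is a simplified sketch of how you could list foreign keys that are not covered by any index (it only handles single-column foreign keys; multi-column constraints need a more careful comparison):

select c.owner, c.table_name, c.constraint_name, cc.column_name
from dba_constraints c
join dba_cons_columns cc
  on cc.owner=c.owner and cc.constraint_name=c.constraint_name
where c.constraint_type='R' and c.owner='SCOTT'
and not exists (
  select 1 from dba_ind_columns ic
  where ic.table_owner=c.owner and ic.table_name=c.table_name
  and ic.column_name=cc.column_name and ic.column_position=1
);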

Let’s create such an index.


18:27:12 SQL> create index FK_EMP on EMP(DEPTNO);
Index FK_EMP created.

I’ll delete DEPTNO=50, and I can verify that checking for the absence of child rows is done without needing the index:


SQL_ID 1v3zkdftt0vv7, child number 0
-------------------------------------
select * from emp where deptno=50
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.01 | 3 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 1 | 0 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DEPTNO"=50)

Let’s delete the parent row and see if the index is used or not.


19:19:47 SQL> delete from DEPT where deptno='50';
0 rows deleted.
19:19:47 SQL> commit;
Commit complete.

This does not lock the EMP table because of the presence of the index FK_EMP. If the index were not there, a TM Share lock would have been acquired, which prevents concurrent DML on the EMP table (at least).


19:19:48 SQL> host sleep 900
 
19:34:48 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 0 3 30000 12 48152 13-NOV-16 07.24.11.086000000 PM 3
 
19:34:48 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

No additional index usage has been detected. Do you take the risk of dropping the index? Probably not. Even making the index invisible does not lower the risk. You may check DBA_TAB_MODIFICATIONS to know whether the parent table is subject to deletes, but what if some transactions are updating the referenced key? This is also a case of TM Share lock, and it happens more often than we think (for example when Hibernate updates all columns, even those that do not change).
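
If you go the DBA_TAB_MODIFICATIONS route, remember that it is only refreshed when the monitoring info is flushed, so a check could look like this (just a sketch):

exec dbms_stats.flush_database_monitoring_info
select table_owner,table_name,inserts,updates,deletes,timestamp
from dba_tab_modifications
where table_owner='SCOTT' and table_name='DEPT';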

So what?

The new index usage tracking in 12.2 is very nice for getting statistics on index usage, better than the simple ‘YES/NO’ flag we had before. But detecting which index is not used and can be safely dropped is still complex, and it requires application knowledge and comprehensive non-regression testing.
There is nothing yet that can tell you that everything would have been the same if an index were not there.

 


12cR2 has new SQL*Plus features

Fri, 2016-11-11 15:30

12cR2 is there. What’s new in SQL*Plus? For sure, you can’t expect a lot from it: the new command line is SQLcl, the SQL Developer command line, which aims to be 100% compatible with SQL*Plus and has a lot more features. However, a few little things came here: default editor, command-line history, and easy row/LOB prefetch and statement caching.

_EDITOR

Yes, it seems that the default editor is finally ‘vi’ instead of ‘ed’. This is a great improvement. Of course, you can set the VISUAL environment variable in your system, but when you come to another environment (which consultants do), this default will save a lot of “define _editor=vi” keystrokes.
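
If you are not on 12.2 yet, the usual workaround is to put the define in the site profile so that you never have to type it again, for example:

-- in $ORACLE_HOME/sqlplus/admin/glogin.sql (or in your own login.sql)
define _editor=vi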

The environment variables EDITOR and VISUAL are not set:

SQL> host set | grep -E "(^EDITOR|^VISUAL)"
 
SQL>

but the _EDITOR in sqlplus is set to ‘vi’:

SQL> define
DEFINE _DATE = "11-NOV-16" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)
DEFINE _USER = "SYS" (CHAR)
DEFINE _PRIVILEGE = "AS SYSDBA" (CHAR)
DEFINE _SQLPLUS_RELEASE = "1202000100" (CHAR)
DEFINE _EDITOR = "vi" (CHAR)
DEFINE _O_VERSION = "Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production" (CHAR)
DEFINE _O_RELEASE = "1202000100" (CHAR)
DEFINE 1 = "sqlplus" (CHAR)
DEFINE _RC = "1" (CHAR)

Here is the default. For sure, vi is better than ‘ed’, the line editor from the era of 2400-baud networks.

Command history

Yes. In 2016 SQL*Plus has a command line history. Do you need it? Probably not. If you are on Windows, you can navigate with arrow-up and arrow-down in any command line program. If you are on Linux, you have probably installed rlwrap. And finally, if you want to do something friendly on command line, you probably use sqlcl.

However, in 12cR2 a very basic history has been introduced to SQL*Plus.
You have to enable it:

SQL> show HISTORY
history is OFF
SQL> set HISTORY on
SQL> show history
history is ON and set to "100"

so the default is 100 lines, but you can increase it:

SQL> set HISTORY 999999
SP2-0267: history option 999999 out of range (0 through 100000)
SQL> set HISTORY 100000

what can you do with it?

SQL> help HISTORY
 
HISTORY
-------
 
Stores, lists, executes, edits of the commands
entered during the current SQL*Plus session.
 
HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]  
N is the entry number listed in the history list.
Use this number to recall, edit or delete the command.
 
Example:
HIST 3 RUN - will run the 3rd entry from the list.
 
HIST[ORY] without any option will list all entries in the list.

Here are some examples:

SQL> show history
history is OFF
SQL> set history on
SQL> show history
history is ON and set to "100"
SQL> prompt 1
1
SQL> prompt 2
2
SQL> history
1 show history
2 prompt 1
3 prompt 2
 
SQL> history list
1 show history
2 prompt 1
3 prompt 2
 
SQL> history 2 run
1
SQL> history 2 edit
 
SQL> history 2 delete
SQL> history
1 show history
2 prompt 2
3 prompt 1
 
SQL> history clear
SQL> history
SP2-1651: History list is empty.

As you see, it’s not the most user-friendly. But for the basic DBA tasks that you do on a server, you may find it safer than up-arrow. Imagine that a ‘shutdown immediate’ is in the history. Do you want to take the risk of running it because some network latency made you recall the line above the one you wanted? Or do you prefer to be sure to have read the command before running it?

SET LOBPREF[ETCH], SET ROWPREF[ETCH], and SET STATEMENTC[ACHE].

Here are important performance improvements:

SQL> show lobprefetch
lobprefetch 0
SQL> show rowprefetch
rowprefetch 1
SQL> show statementcache
statementcache is 0

Those are things that you could already do with OCI or JDBC, and that you can now easily do in SQL*Plus: prefetch rows and LOBs to avoid fetch roundtrips, and use statement caching to avoid parse calls.
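
For example, you could add something like this to your login.sql; the values here are only an illustration and should be tuned for your workload:

set rowprefetch 100
set lobprefetch 16384
set statementcache 20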

I’ll probably blog about prefetch in a future blog, so for the moment, here is a quick demo of statement caching.

By default, statement caching is off. I run the same query 3 times:

select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache hits 15
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.370333 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.370333 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (total) 6
11-NOV-16 05.00.41.370333 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache hits 22
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.373429 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.373429 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (total) 7
11-NOV-16 05.00.41.373429 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache hits 29
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.375993 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.375993 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (total) 8
11-NOV-16 05.00.41.375993 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (describe) 0
 
8 rows selected.

You can see that each one had its own parse call. Of course, it’s not a hard parse because the cursor is shared. It’s not even a soft parse, thanks to the session cursor cache. But it’s still a parse call.

Let’s set statement caching to one and run the query again 3 times:

set statementcache 1
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache hits 36
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.378937 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.378937 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.378937 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache hits 42
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.381403 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.381403 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.381403 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache hits 48
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.383844 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.383844 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.383844 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (describe) 0
 
8 rows selected.

Only one more parse call: the cursor was cached on the client side.

How many statements can you cache?

SQL> set statementcache 999999
SP2-0267: statementcache option 999999 out of range (0 through 32767)

From 1 to 32767. The value 0 disables statement caching.

set statementcache 32767

Not yet in 12.2?

If you have not yet upgraded to 12.2, you still have a way to use statement caching: you can set it in oraaccess.xml, which can enable those optimizations for all OCI clients.
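
As a rough sketch, an oraaccess.xml in $TNS_ADMIN could look like the following; check the exact element names and schema header against the OCI documentation of your client version before using it:

<?xml version="1.0" encoding="ASCII"?>
<oraaccess xmlns="http://xmlns.oracle.com/oci/oraaccess">
  <default_parameters>
    <!-- row prefetching for all OCI clients using this TNS_ADMIN -->
    <prefetch>
      <rows>100</rows>
    </prefetch>
    <!-- client-side statement caching -->
    <statement_cache>
      <size>20</size>
    </statement_cache>
  </default_parameters>
</oraaccess>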

sqlplus -F

Those performance settings can be set to default values with the ‘-F’ argument.
Let’s see which settings are different:

[oracle@OPC122 ~]$ sqlplus -s / as sysdba <<< "store set a.txt replace"
Wrote file a.txt
[oracle@OPC122 ~]$ sqlplus -s -F / as sysdba <<< "store set b.txt replace"
Wrote file b.txt
[oracle@OPC122 ~]$ diff a.txt b.txt
3c3
< set arraysize 15
---
> set arraysize 100
31c31
< set lobprefetch 0
---
> set lobprefetch 16384
46c46
< set rowprefetch 1
---
> set rowprefetch 2
59c59
< set statementcache 0
---
> set statementcache 20

Those settings avoid roundtrips and unnecessary work. The documentation says that PAGESIZE is also set to a higher value, but I don’t see it here, and anyway it’s about formatting the output, not about performance.

VARIABLE

You may use SQL*Plus to test queries with bind variables. Here is what you had to do before 12.2:

SQL> variable text char
SQL> exec :text:='X'
 
PL/SQL procedure successfully completed.
 
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

You can now simply:

SQL> variable text char='X'
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

SQLcl, the SQL Developer command line

Since 11g, SQL Developer is shipped in ORACLE_HOME, and in 12.2 it includes SQLcl, the SQL Developer command line that aims to be fully compatible with SQL*Plus scripts.
The version we have on the DBCS lacks the executable flag and the right JAVA_HOME:

[oracle@SE222 ~]$ /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
-bash: /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql: Permission denied
[oracle@SE222 ~]$
[oracle@SE222 ~]$ bash /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:16:48 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
USER =
URL = jdbc:oracle:oci8:@
Error Message = No suitable driver found for jdbc:oracle:oci8:@
USER =
URL = jdbc:oracle:thin:@127.0.0.1:1521:CDB2
Error Message = No suitable driver found for jdbc:oracle:thin:@127.0.0.1:1521:CDB2
USER =
URL = jdbc:oracle:thin:@localhost:1521/orcl
Error Message = No suitable driver found for jdbc:oracle:thin:@localhost:1521/orcl
Username? (RETRYING) ('/ as sysdba'?)

I’ve defined the following alias:

alias sql='JAVA_HOME=$ORACLE_HOME/jdk bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'

and I’m ready to run it:

[oracle@SE222 ~]$ sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:20:15 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL>

I like SQLcl except for one thing: it’s written in Java and takes long to start:

[oracle@SE222 ~]$ time sql /nolog
real 0m2.184s
user 0m3.054s
sys 0m0.149s

2 seconds is long when you run it frequently. Compare with sqlplus:

[oracle@SE222 ~]$ time sqlplus /nolog
real 0m0.015s
user 0m0.008s
sys 0m0.006s

 


Documentum – Cannot delete document due to VD/assembly references

Fri, 2016-11-11 02:46

I ran into an error that you might have already seen, so here is a quick guide on how to fix it.

When deleting a document from D2, I had a popup with an error like: “Cannot delete file because it is still referenced in a virtual document or an assembly”. Well, indeed, the document that I wanted to delete was previously in a VD, but for some reason I couldn’t find the VD with the search engine: it had been deleted, but not its old versions.

To summarize, my document was still referenced by the old versions of a deleted VD. So here’s the way I fixed it:

First, I needed to know which VD referenced it; for this, simply query the dmr_containment table:

select * from dmr_containment where component_id='<DOCUMENT_ID>';

You should see a list of ids like:

r_object_id: ID of the current row (we will need it)
parent_id: ID of the VD that references our document
component_id: ID of our document
order_no: Index order of all documents in this VD

The problem with old VD versions is that they are set as immutable, so you can’t remove the document from the VD through DQL or API. You first have to make the VD mutable.

So here is the condensed API sequence to remove a component from an immutable VD:

fetch,c,<PARENT_ID>
set,c,l,r_immutable_flag
F
save,c,l
removepart,c,l,<R_OBJECT_ID>
save,c,l
fetch,c,l
set,c,l,r_immutable_flag
T
save,c,l

If you have a lot of entries to remove, you may want a scripted version in DQMan:

First select all entries:

select * from dmr_containment where component_id='<DOCUMENT_ID>';

Select all rows, right-click, and generate the script:

fetch,c,{parent_id}
set,c,l,r_immutable_flag
F
save,c,l
removepart,c,l,{r_object_id}
save,c,l
set,c,l,r_immutable_flag
T
save,c,l

Verify that all entries have been deleted; now you should be able to remove your document without errors.

 


Oracle 12cR2: MAX_PDBS

Fri, 2016-11-11 00:43

Oracle Database 12.2 is there on the Database Cloud Service, in multitenant. In EE High Performance or Extreme Performance, you have the multitenant option: you can create 4096 pluggable databases (instead of 252 in 12.1). If you are on lower services, you can create only one user PDB (not counting application root and proxy PDB). If you are in Standard Edition, it’s simple: it is a hard limit. If you are in plain Enterprise Edition without the option, then you have a way to be sure you stay under the limit: the MAX_PDBS parameter.

Containers and Pluggable Databases

A CDB is a container (CON_ID=0) that contains containers:

  • CDB$ROOT (CON_ID=1)
  • PDB$SEED (CON_ID=2)
  • User created PDBs (CON_ID between 3 and 4098)

Here is how I show it:
(Figure: CaptureCON_ID)

MAX_PDBS

In 12.1 you had no supported way to prevent creating more than one PDB. In 12.2 you have a parameter, MAX_PDBS, which is documented as the maximum number of user-created pluggable databases. You could expect its maximum to be 4096, but it is actually 4098, and this is the default value:

SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 4098

So to be sure, let’s create many pluggable databases.

I have one pluggable database, PDB1, opened in read-only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ ONLY NO

And use the following script to clone them:

for i in {1..5000} ; do
  echo "connect / as sysdba"
  echo "create pluggable database pdb$i$RANDOM from pdb1 snapshot copy;"
  echo 'select max(con_id),count(*) from dba_pdbs;'
  echo "host df -h /u01 /u02"
done | sqlplus / as sysdba

until it fails with:

SQL> create pluggable database pdb49613971 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Note that I use clonedb=true snapshot copy because I don’t want to fill up my filesystem:

SQL> show parameter clonedb
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
clonedb boolean TRUE
clonedb_dir string /u02/oradata/CDB1
 
SQL> host du -h /u02/oradata/CDB1/CDB1_bitmap.dbf
31M /u02/oradata/CDB1/CDB1_bitmap.dbf

As you see I’ve put the bitmap file outside of $ORACLE_HOME/dbs because in 12.2 we have a parameter for that. So many new features…
In addition to that I had to increase sga, processes and db_files.

Here I have my 4097 PDBs

SQL> select max(con_id),count(*) from dba_pdbs;
 
MAX(CON_ID) COUNT(*)
----------- ----------
4098 4097

which includes PDB$SEED. This means 4098 containers inside of my CDB:

SQL> select max(con_id),count(*) from v$containers;
 
MAX(CON_ID) COUNT(*)
----------- ----------
4098 4098

SQL> set pagesize 1000 linesize 1000
SQL> select min(con_id),max(con_id),count(*),substr(listagg(name,',' on overflow truncate)within group(order by con_id),1,30) from v$containers;
 
MIN(CON_ID) MAX(CON_ID) COUNT(*) SUBSTR(LISTAGG(NAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYCON_ID),1,30)
----------- ----------- ---------- ------------------------------------------------------------------------------------------------------------------------
1 4098 4098 CDB$ROOT,PDB$SEED,PDB1,PDB2105

So basically you can’t reach the MAX_PDBS default with user created PDBs.

But…

What is really cool with ‘cloud first’ is that we can test it, all on the same platform, and probably hit bugs that will be fixed before the on-premises release. This is a great way to ensure that the version is stable by the time we put production on it.

I have one PDB:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 12:12:25 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
12:12:25 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED

I drop it:

12:12:26 SQL> drop pluggable database pdb1 including datafiles;
Pluggable database dropped.

I set MAX_PDBS to one:

 
12:12:44 SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 4098
 
12:13:24 SQL> alter system set max_pdbs=1 scope=memory;
System altered.
 
12:13:45 SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 1

And then try to re-create my PDB:

12:13:54 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

This is not what I expected. Let’s try to increase MAX_PDBS to two, even though I’m sure to have only one user PDB:


12:14:07 SQL> alter system set max_pdbs=2 scope=memory;
System altered.
 
12:14:18 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
Pluggable database created.

Ok. Let’s drop it and re-create it again:

12:15:20 SQL> drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
 
12:16:02 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

That’s bad. It seems that the previously dropped PDBs are still counted:

12:16:07 SQL> alter system set max_pdbs=3 scope=memory;
System altered.
 
12:16:17 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
Pluggable database created.
 
12:17:10 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
12:18:14 SQL> drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
 
12:18:28 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Probably a small bug there. Some counters not reset maybe.

I’ve dropped one PDB from the CDB where I reached the limit of 4096:


SQL> select count(*) from dba_pdbs where con_id>2;
 
COUNT(*)
----------
4095

I can set MAX_PDBS to 4095 if I want to prevent creating a new one:

SQL> alter system set max_pdbs=4095;
System altered.

What if I want to set it lower than the number of PDBs I have? An error message would be nice, but probably not this one:

SQL> alter system set max_pdbs=4094;
alter system set max_pdbs=4094
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65331: DDL on a data link table is outside an application action.

Anyway, now that MAX_PDBS is set to 4095 I can’t create another one:

SQL> create pluggable database PDB2 from PDB1;
create pluggable database PDB2 from PDB1
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

which is the goal of this parameter, and confirms that it counts the user-created PDBs and not all the containers.

Here it seems that I can re-create my last PDB when I increase the MAX_PDBS:

SQL> alter system set max_pdbs=4096;
System altered.
SQL> create pluggable database PDB2 from PDB1;
Pluggable database created.

By the way, here is how the multitenant feature usage is detected:

SQL> select name feature_name,version,detected_usages,aux_count
from dba_feature_usage_statistics
where name like '%Pluggable%' or name like '%Multitenant%';
 
FEATURE_NAME
--------------------------------------------------------------------------------
VERSION DETECTED_USAGES AUX_COUNT
----------------- --------------- ----------
Oracle Multitenant
12.2.0.1.0 3 4096

The detected usage just means that I’m in a CDB. The AUX_COUNT tells me whether I require the multitenant option. But that’s for a future blog post.

 


Oracle 12cR2: Pluggable database relocation

Thu, 2016-11-10 23:19

Here is, in my opinion, the most beautiful feature of the multitenant architecture. You know how I love Transportable Tablespaces. But here:

  • No need to put the source in read-only
  • No need to export/import the metadata logically
  • No need for any option: available even in Standard Edition

Standard Edition

I am in Standard Edition here in both source and target, no option required for this:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 13:40:05 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

Source: PDB1 on CDB1

On server opc1 I have a container database CDB1 with one pluggable database PDB1 where I create a new table:

23:40:20 (opc1)CDB1 SQL>alter session set container=PDB1;
Session altered.
23:40:20 (opc1)CDB1 SQL>create table DEMO as select current_timestamp insert_timestamp,instance_name from v$instance;
Table created.
23:40:21 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:21 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1

Export encryption key

I’m in Oracle Public Cloud where tablespaces are encrypted. To ship a pluggable database I must export the keys. Here is the query to get them:

23:40:23 (opc1)CDB1 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AWlnBaUXG0/gv4evS9Ywu8EAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

And I can filter with this query to export it:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

I can’t do that with auto-login wallet.

23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ AUTOLOGIN

Let’s open the wallet with password:

23:40:23 (opc1)CDB1 SQL>administer key management set keystore close;
keystore altered.
23:40:23 (opc1)CDB1 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ PASSWORD

and re-try my export:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

This file must be copied to the destination server. I did it with scp. You can also use dbms_file_transfer, as you will need a database link anyway for the remote clone.
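
For example, with a directory object created on both databases (I call it KEY_DIR here, pointing to /tmp, which is my assumption) and the CDB1 database link defined below, a pull from the destination could look like this; keep in mind that dbms_file_transfer is designed for database files and expects the file size to be a multiple of 512 bytes, so scp may remain the simplest option:

create directory KEY_DIR as '/tmp';
exec dbms_file_transfer.get_file('KEY_DIR','cdb2pdb1.p12','CDB1','KEY_DIR','cdb2pdb1.p12')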

Import encryption key

On the destination server, where I have no user PDB yet (I’m limited to one PDB here without the multitenant option):

23:40:31 (opc2)CDB2 SQL>show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO

I have to import the encryption key:

23:40:31 (opc2)CDB2 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
23:40:31 (opc2)CDB2 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

I’m now ready to relocate my PDB, as I’m now sure that I will be able to open it on the destination.

Database link

The remote clone is done through a DB link. I have a TNS entry named CDB1:

23:40:31 (opc2)CDB2 SQL>select dbms_tns.resolve_tnsname('CDB1') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('CDB1')
--------------------------------------------------------------------------------
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=opc1)(PORT=1521))(CONNECT_DAT
A=(SERVER=DEDICATED)(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM
=oracle)(HOST=SE222.compute-opcoct.oraclecloud.internal)(USER=oracle))))
 
23:40:31 (opc2)CDB2 SQL>create database link CDB1 connect to C##DBA identified by oracle using 'CDB1';
Database link created.

DML on source

In order to show that the source doesn’t have to be read-only as in the previous release, I’m running the following inserts every 5 seconds:

23:40:44 (opc1)CDB1 SQL>commit;
Commit complete.
23:40:44 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:44 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
6 rows selected.

PDB remote clone

Here is the syntax.
I need to provide the master key of the source wallet.
RELOCATE is the new feature: the source PDB will be relocated to the destination when the clone is opened.

23:40:48 (opc2)CDB2 SQL>create pluggable database PDB1 from PDB1@CDB1 keystore identified by "Ach1z0#d" relocate;
Pluggable database created.
23:41:08 (opc2)CDB2 SQL>

It took some time, shipping the datafiles through the DB link, but this is online.
I was still inserting during this time:

23:41:04 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

Note that you need to be in ARCHIVELOG mode and LOCAL UNDO mode to be able to do this, because synchronisation will be done by media recovery when we open the clone.
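
A quick way to check both prerequisites on the source (LOCAL_UNDO_ENABLED is a database property in 12.2):

select log_mode from v$database;
select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';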

Open the clone

Now, the theory is that when we open the clone, DML is quiesced on the source during the recovery of the target, and sessions can continue on the target once it is opened.

23:41:09 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
alter pluggable database PDB1 open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
23:41:26 (opc2)CDB2 SQL>

Bad luck. Every time I tested this scenario, the first open after the relocate failed with a deadlock, and the session on the source crashed:

23:41:09 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
10-NOV-16 11.41.09.773300 PM +00:00 CDB1
 
11 rows selected.
 
23:41:14 (opc1)CDB1 SQL> commit;
ERROR:
ORA-03114: not connected to ORACLE

It’s a good occasion to look at the traces.
We can see some messages about the recovery:

*** 2016-11-10T23:41:12.660402+00:00 (PDB1(3))
Media Recovery Log /u03/app/oracle/fast_recovery_area/CDB1/foreign_archivelog/PDB1/2016_11_10/o1_mf_1_24_2025109931_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

Those foreign archived logs are a new type of file that you will see in the FRA in 12.2.

So I lost my session on the source, and now if I try again it works:

23:42:20 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
Pluggable database altered.
23:42:24 (opc2)CDB2 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

All the inserts that were committed on the source are there.
Even with this deadlock bug (SR 3-13618219421), it’s the easiest and fastest way to migrate a database, with minimal downtime, especially in Standard Edition where transportable tablespace import is not available.
Without the deadlock bug, the sessions on the source are supposed to keep running, only paused during the recovery, and then continue on the destination.

 


Documentum story – Disable PDF automatic generation but allow jpeg

Thu, 2016-11-10 00:00

We had a request from a customer who wanted the automatic PDF rendition to be disabled while still allowing the jpeg renditions to be used as thumbnails for D2. The PDF is generated by an external tool, so it wasn’t needed here. Here is what we did:

Login to the ADTS server and edit the following file:
<CTS_HOME>\config\CTSServerService.xml

For the repository named “DOCBASE1”, comment out the QueueProcessorContext tag as follows, and only if it includes the dm_autorender_win31 value:

<!-- Start of comment to be added
<QueueProcessorContext DocbaseName="DOCBASE1">
    <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
    ...
</QueueProcessorContext>
       End of comment to be added -->

Then restart the services

After the restart of the Content Transformation Services, wait for a few minutes and open the CTS log file <CTS_HOME>\logs\CTS_log.txt.

On the CTS log file:
– Ensure that there are no important errors and/or exceptions
– Search for the message “Product ADTS version <CTS_VERSION> is installed on repository: DOCBASE1”

Open the file <CTS_HOME>\logs\Polling_log.txt in order to verify that the changes have been applied successfully.

In this file, find all “DOCBASE1”-related CTSLegacyQueueProcessor log entries by searching for the following text:

DOCBASE1 CTSLegacyQueueProcessor

There should not be any matching text in the log file.

 

Note that you can use the jpeg renditions in the D2 4.5 Preview widget in order to display a kind of preview directly in the D2 interface.

 


Oracle 12c – DB_UNKNOWN in ASM

Wed, 2016-11-09 04:06

Have you ever noticed a DB_UNKNOWN directory in your ASM structure? It usually happens in combination with spfile creation in ASM, or with RMAN spfile restores to ASM.

The correct location is +DATA/<SID>/PARAMETERFILE/SPFILE.<#>.<#>, and an ASM alias from +DATA/<SID>/ pointing to it.

But sometimes, the spfile ends up in +DATA/DB_UNKNOWN/PARAMETERFILE/SPFILE.<#>.<#>

Technically, this is not an issue: the spfile in the DB_UNKNOWN directory is perfectly ok and can be used. However, you might need to adjust your init<SID>.ora in case you have a config like the following:

oracle@oel001:/u00/app/oracle/product/12.1.0.2/dbs/ [OCM121] cat initOCM121.ora
SPFILE='+data/DB_UNKNOWN/PARAMETERFILE/SPFILE.293.927371209'

Maybe you have a 4-node RAC; then you need to adjust it on every node. Maybe you have a cluster resource with an spfile entry; then you need to adjust that one as well. And besides that, which database does DB_UNKNOWN belong to? Imagine you have 20 DBs running and you need to find out which database has something in the DB_UNKNOWN directory, in case there are more entries.

No … it is not a good situation. It has to be corrected. But how?

First of all, let’s create a situation that ends up with a DB_UNKNOWN directory.

It is quite easy to do, typically with spfile restores or with a “create spfile from pfile”:

  1. Shutdown the DB
  2. Startup RMAN dummy instance
  3. Restore the spfile to pfile
  4. Shutdown the Instance
  5. Adjust the pfile
  6. Create the spfile from pfile while the DB is shutdown

Here is an example with 12cR1 (12.1.0.2). I am jumping directly to the RMAN restore, because RMAN dummy instance was already explained in http://blog.dbi-services.com/oracle-12c-when-the-rman-dummy-instance-does-not-start-up/

Ok. Let’s check the current location of the spfile of the cluster resource.

oracle@oel001:/home/oracle/ [OCM121] srvctl config database -d OCM121 | grep -i spfile
Spfile: +DATA/OCM121/spfileOCM121.ora

Now we can run the RMAN restore of the spfile to a pfile. Restoring it to a pfile first has the advantage that we can take a look at all settings and maybe adjust them before we put it back into production.

run {
restore spfile to pfile '/tmp/initOCM121.ora' for db_unique_name='OCM121' from
'+fra/OCM121/AUTOBACKUP/2016_10_29/s_926511850.517.926511853';
}

Starting restore at 08-NOV-2016 11:01:04
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=364 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=12 device type=DISK
channel ORA_DISK_2: skipped, AUTOBACKUP already found
channel ORA_DISK_1: restoring spfile from AUTOBACKUP +fra/OCM121/AUTOBACKUP/2016_10_29/s_926511850.517.926511853
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 08-NOV-2016 11:01:14

The pfile was successfully created. Now we can correct some settings in the pfile if we want and then create a spfile again.

oracle@oel001:/home/oracle/ [OCM121] ls -l /tmp/initOCM121.ora
-rw-r--r-- 1 oracle asmadmin 1777 Nov  8 11:01 /tmp/initOCM121.ora

Ok. Let’s create the new spfile while the DB is shutdown.

oracle@oel001:/home/oracle/ [OCM121] sqh

SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 8 11:03:56 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile='+DATA' from pfile='/tmp/initOCM121.ora';

File created.

Oppssss … and now it happened: the DB_UNKNOWN directory is created. While the database is shut down, Oracle does not know the DB_NAME, so it has to create a placeholder directory to save the spfile.

ASMCMD> pwd
+data
ASMCMD> ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    CDB121/
                                        Y    DB_UNKNOWN/
                                        Y    OCM121/

ASMCMD> pwd
+data/DB_UNKNOWN/PARAMETERFILE
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 08 11:00:00  Y    SPFILE.293.927371209

However, this is not the configuration that we want. To correct it, clean up the DB_UNKNOWN entries, start your DB in the nomount state, and then execute the ‘create spfile from pfile’ command again.
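
The cleanup itself can be done from asmcmd, along these lines (double-check that nothing else references the wrongly placed spfile before removing it):

ASMCMD> rm -rf +data/DB_UNKNOWN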

SQL> startup nomount pfile=/tmp/initOCM121.ora
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             436211328 bytes
Database Buffers         1157627904 bytes
Redo Buffers               13848576 bytes

SQL> create spfile='+DATA' from pfile='/tmp/initOCM121.ora';

File created.

And here we go. The spfile is in the correct location.

ASMCMD> pwd
+data/OCM121/PARAMETERFILE
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 08 11:00:00  Y    spfile.291.927372029

The only thing missing is the ASM alias. That one has to be created manually afterwards.

ASMCMD> cd +data/OCM121
ASMCMD> mkalias +data/OCM121/PARAMETERFILE/spfile.291.927372029 spfileOCM121.ora
ASMCMD>
Conclusion

It makes a big difference whether you create your spfile in the nomount state or while the database is shut down: you might end up with a totally different directory structure in ASM. With 12.1.0.2 and 11.2.0.4, the nomount state is enough to end up in the correct location. In earlier versions you might need to start up in mount state to have the same effect.

Cheers,
William

 

 

 


Oracle 12cR2 multitenant containers in SQL_TRACE

Tue, 2016-11-08 14:21

In multitenant, your session can switch between containers. For example, since 12.1, a common user can switch explicitly between CDB$ROOT and any PDB with ‘ALTER SESSION SET CONTAINER’. Any user connected to a PDB will also have its session switching implicitly when querying through metadata links and data links (the new name for object links). In 12.1 there was no way to trace this. This is fixed in 12.2.

I set sql_trace and get the tracefile name:

SQL> select value tracefile from v$diag_info where name='Default Trace File';
 
TRACEFILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_6307.trc
 
SQL> alter session set sql_trace=true;
Session altered.

The container ID is CON_ID=1 because I’m connected to the root:


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:08:17.968360+02:00 (CDB$ROOT(1))
*** SESSION ID:(14.25101) 2016-09-04T16:08:17.968399+02:00
*** CLIENT ID:() 2016-09-04T16:08:17.968405+02:00
*** SERVICE NAME:(SYS$USERS) 2016-09-04T16:08:17.968410+02:00
*** MODULE NAME:(sqlplus@VM115 (TNS V1-V3)) 2016-09-04T16:08:17.968415+02:00
*** ACTION NAME:() 2016-09-04T16:08:17.968420+02:00
*** CLIENT DRIVER:(SQL*PLUS) 2016-09-04T16:08:17.968425+02:00
*** CONTAINER ID:(1) 2016-09-04T16:08:17.968430+02:00

In 12.1 you had no further information about the container in the trace file. This is improved in 12.2.

Explicit ALTER SESSION SET CONTAINER

I’ll run a simple query, then change to container PDB (which is CON_ID=3 here) and run again a query:

SQL> select * from dual;
 
D
-
X
 
SQL> alter session set container=PDB;
Session altered.
 
SQL> select * from dual;
 
D
-
X

The lines starting with ‘***’ followed by a timestamp are not new. But now we also have the container name (here CON_NAME=PDB) and container ID (CON_ID=3):

SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.397448+02:00 (PDB(3))
*** CONTAINER ID:(3) 2016-09-04T16:09:54.397527+02:00

You get those lines for each ALTER SESSION SET CONTAINER, and you have the CON_NAME and CON_ID of the PDB: (PDB(3))

Implicit switch through data links

I’m still in PDB and I’ll query a data link view: DBA_PDBS. Data link views (previously called ‘object link’ views) query data from CDB$ROOT even when you are in a PDB. DBA_PDBS shows information about pluggable databases, which is stored in CDB$ROOT (because it must be available before the PDB is opened).

SQL> select count(*) from dba_pdbs;
 
COUNT(*)
----------
1
 

The execution of the query had to switch to CDB$ROOT (CON_ID=1) to get the rows and switch back to PDB (CON_ID=3):


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))
*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

If you look at the details, you will see that my query is parsed in my container:

=====================
PARSING IN CURSOR #139807307349184 len=29 dep=0 uid=0 oct=3 lid=0 tim=203051393258 hv=2380449338 ad='896cae38' sqlid='3cngtnf6y5jju'
select count(*) from dba_pdbs
END OF STMT
PARSE #139807307349184:c=0,e=53,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393256

I think the following is there to check that the tables behind the data link view are valid in the PDB, even if we don’t want to query them. This is only a parse call:

=====================
PARSING IN CURSOR #139807307295488 len=46 dep=1 uid=0 oct=3 lid=0 tim=203051393450 hv=1756598280 ad='7b5dfd58' sqlid='5ucyn75nb7408'
SELECT * FROM NO_OBJECT_LINK("SYS"."DBA_PDBS")
END OF STMT
PARSE #139807307295488:c=0,e=26,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=810534000,tim=203051393449
CLOSE #139807307295488:c=0,e=7,dep=1,type=1,tim=203051393490

Then when I execute my query:

EXEC #139807307349184:c=0,e=246,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393539

my session switches to root:

*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))

and the recursive query is parsed and executed in CDB$ROOT:
=====================
PARSING IN CURSOR #139807307379504 len=170 dep=1 uid=0 oct=3 lid=0 tim=203051393687 hv=1291428476 ad='895c6940' sqlid='g34kja56gm8mw'
SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) */ CON_ID FROM NO_OBJECT_LINK("SYS"."DBA_PDBS") "DBA_PDBS" WHERE "DBA_PDBS"."CON_ID"=0 OR "DBA_PDBS"."CON_ID"=3
END OF STMT
PARSE #139807307379504:c=0,e=44,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393685
EXEC #139807307379504:c=0,e=48,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393790
FETCH #139807307379504:c=0,e=20,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=2042216988,tim=203051393826
STAT #139807307379504 id=1 cnt=1 pid=0 pos=1 obj=0 op='RESULT CACHE 8p3h095ufc042f32tf05b23qf3 (cr=0 pr=0 pw=0 str=1 time=18 us)'
STAT #139807307379504 id=2 cnt=0 pid=1 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 str=0 time=0 us cost=2 size=16 card=1)'
STAT #139807307379504 id=3 cnt=0 pid=2 pos=1 obj=161 op='TABLE ACCESS BY INDEX ROWID CONTAINER$ (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=11 card=1)'
STAT #139807307379504 id=4 cnt=0 pid=3 pos=1 obj=163 op='INDEX UNIQUE SCAN I_CONTAINER2 (cr=0 pr=0 pw=0 str=0 time=0 us cost=0 size=0 card=1)'
STAT #139807307379504 id=5 cnt=0 pid=2 pos=2 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=5 card=1)'
CLOSE #139807307379504:c=0,e=4,dep=1,type=1,tim=203051393959

Note that the result cache is used for optimization, and the query is run with NO_OBJECT_LINK() to prevent further data link recursion, if any.

Then, my session switches back to my PDB:

*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

and execution of my query finishes:

FETCH #139807307349184:c=0,e=375,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=2,plh=1333657383,tim=203051393981
STAT #139807307349184 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=0 pr=0 pw=0 str=1 time=544 us)'
STAT #139807307349184 id=2 cnt=1 pid=1 pos=1 obj=0 op='DATA LINK FULL DBA_PDBS (cr=0 pr=0 pw=0 str=1 time=525 us cost=1 size=1300 card=100)'
FETCH #139807307349184:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1333657383,tim=203051394259
CLOSE #139807307349184:c=0,e=10,dep=0,type=1,tim=203051397922

You see that the execution plan is explicit: ‘DATA LINK FULL’ in 12.2 (it was FIXED TABLE FULL X$OBLNK$ in 12.1).

_diag_cdb_logging

This new behaviour is controlled by an underscore parameter:

SQL> alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp;
alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp
*
ERROR at line 1:
ORA-00096: invalid value THISISMYWAYTOGETHELP for parameter _diag_cdb_logging,
must be from among long, short, off

By default in 12.2 the parameter is set to SHORT and the traces are written as above.
SQL> alter system set "_diag_cdb_logging"=SHORT;

If you set it to OFF, you have the same behavior as in 12.1: a ‘*** CONTAINER ID:’ line is displayed for an explicit SET CONTAINER but no more information.

When set to LONG you get the CON_UID which may be useful for traces that cover plug/unplug operations:

SQL> select con_id,name,dbid,con_uid,guid from v$containers;

CON_ID NAME DBID CON_UID GUID
---------- -------- ---------- ---------- --------------------------------
1 CDB$ROOT 893728006 1 3817ED090B9766FDE0534440E40ABD67
2 PDB$SEED 1943618461 1943618461 3A29D20830E760B7E053734EA8C047BB
3 PDB 4128224117 4128224117 3A2C965DE81E15A8E053734EA8C023AC
 
SQL> host grep "^\*\*\*" &tracefile
*** 2016-09-04T16:50:43.462870+02:00 (PDB(3/4128224117))
*** CONTAINER ID:(3) 2016-09-04T16:50:43.463067+02:00
*** 2016-09-04T16:50:43.493035+02:00 (CDB$ROOT(1/1))
*** 2016-09-04T16:50:43.495053+02:00 (PDB(3/4128224117))
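
If you want to reproduce this analysis yourself, a minimal sketch is to enable SQL trace in your session, run a query on a data link view, and locate the trace file (the trace file name and location will of course differ on your system):

SQL> alter session set sql_trace=true;
SQL> select count(*) from dba_pdbs;
SQL> alter session set sql_trace=false;
SQL> select value from v$diag_info where name='Default Trace File';

Then grep the ‘***’ lines in that file as I did above.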

If you want more information about CON_ID, CON_UID, GUID, and a lot more about multitenant, the Oracle Database 12c Release 2 Multitenant (Oracle Press) book by Anton Els, Vit Spinka and Franck Pachot goes into all the details.

 

Cet article Oracle 12cR2 multitenant containers in SQL_TRACE est apparu en premier sur Blog dbi services.

Oracle 12cR2 multitenant: Local UNDO

Tue, 2016-11-08 12:58

Pluggable Databases are supposed to be isolated, containing the whole of user data and metadata. This is the definition of dictionary separation coming with the multitenant architecture: only system data and metadata are at CDB level. User data and metadata are in separate tablespaces belonging to the PDB. And this is what makes unplug/plug possible: because PDB tablespaces contain everything, you can transport their datafiles from one CDB to another.
However, if they are so isolated, can you explain why

  • You cannot flashback a PDB?
  • You need an auxiliary instance for PDB Point-In-Time recovery?
  • You need to put the PDB read-only before cloning it?


There is something that is not contained in your PDB but is at CDB level, and which contains user data. The UNDO tablespace is shared:

CaptureLocalUndo001

You cannot flashback a PDB because doing so requires rolling back the transactions that were ongoing at the point in time you flash back to. That information was in the UNDO tablespace at that time, but it is not there anymore.

It’s the same idea with Point-In-Time recovery of a PDB. You need to restore the UNDO tablespace to get the UNDO records from that point in time. But you cannot restore it in place because it’s shared with other PDBs that need the current information. This is why you need an auxiliary instance for PDB PITR in 12.1.

Cloning a PDB cannot be done with ongoing transactions because their UNDO is not in the PDB. This is why it can be done only when the PDB is read-only.

12.2 Local UNDO

In 12.2 you can choose to have one UNDO tablespace per PDB, in local undo mode, which is the default in DBCA:

CaptureLocalUndo000

With local undo, PDBs are truly isolated even when opened with ongoing transactions:

CaptureLocalUndo002

Look at the ‘RB segs’ column from RMAN report schema:

[oracle@OPC122 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Tue Nov 8 18:53:46 2016
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=901060295)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 880 SYSTEM YES /u02/app/oracle/oradata/CDB1/system01.dbf
3 710 SYSAUX NO /u02/app/oracle/oradata/CDB1/sysaux01.dbf
4 215 UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB1/pdbseed/system01.dbf
6 560 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB1/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/app/oracle/oradata/CDB1/users01.dbf
8 180 PDB$SEED:UNDOTBS1 NO /u02/app/oracle/oradata/CDB1/pdbseed/undotbs01.dbf
9 270 PDB1:SYSTEM YES /u02/app/oracle/oradata/CDB1/PDB1/system01.dbf
10 590 PDB1:SYSAUX NO /u02/app/oracle/oradata/CDB1/PDB1/sysaux01.dbf
11 180 PDB1:UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/CDB1/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /u04/app/oracle/oradata/temp/temp01.dbf
2 64 PDB$SEED:TEMP 32767 /u04/app/oracle/oradata/temp/temp012016-10-04_11-34-07-330-AM.dbf
3 100 PDB1:TEMP 100 /u04/app/oracle/oradata/CDB1/PDB1/temp012016-10-04_11-34-07-330-AM.dbf

You have an UNDO tablespace in ROOT, in PDB$SEED and in each user PDB.

If you have a database in shared undo mode, you can move to local undo mode while in ‘startup migrate’. When the PDBs are then opened, an UNDO tablespace is created for each of them. You can also create an UNDO tablespace in PDB$SEED.

Yes, in 12.2, you can open the PDB$SEED read/write for this purpose:


18:55:59 SQL> alter pluggable database PDB$SEED open read write force;
 
Pluggable database altered.
 
18:56:18 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ WRITE NO
3 PDB1 READ WRITE NO
18:56:23 SQL> alter pluggable database PDB$SEED open read only force;
 
Pluggable database altered.

But remember this is only allowed for local undo migration.
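
For reference, here is a minimal sketch of the migration sequence from shared to local undo mode (run it in a maintenance window and check the documentation for your own environment first — the ‘upgrade’ startup here is the ‘migrate’ mode mentioned above):

SQL> select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup

After that, each PDB gets its own UNDO tablespace the first time it is opened.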

The recommendation is to run in local undo mode, even in Single-Tenant.

More about it in the 12cR2 Multitenant book:

 

Cet article Oracle 12cR2 multitenant: Local UNDO est apparu en premier sur Blog dbi services.

Oracle 12cR2 PL/SQL new feature: TNSPING from the database

Tue, 2016-11-08 11:55

Database links are resolved with the server TNS_ADMIN configuration (sqlnet.ora and tnsnames.ora). You can use tnsping to check the resolution, but that supposes you are on the server and have set the same environment as the one which started the database.
In 12.2 you have a new package to check that: DBMS_TNS. It’s the kind of little new feature that makes our life easier.

The easy way to verify a connection string is to use tnsping. Here is an example with an EZCONNECT resolution:

[oracle@SE122 ~]$ tnsping //10.196.234.38/CDB1.opcoct.oraclecloud.internal
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 08-NOV-2016 17:45:34
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))
OK (0 msec)

The full connection description is displayed here before contacting the listener.

This resolution is valid only with a specific TNS configuration (which is here /u01/app/oracle/product/12.2.0/dbhome_1/network/admin). However, you may have different configurations (using the TNS_ADMIN environment variable) and if it’s not set consistently, you may have different results.
Basically:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database
DBMS_TNS

So here is this new package:

SQL> desc dbms_tns
FUNCTION RESOLVE_TNSNAME RETURNS VARCHAR2
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
TNS_NAME VARCHAR2 IN

And you can run it when connected to the database to see how the name is resolved:

SQL> select dbms_tns.resolve_tnsname('&_connect_identifier') from dual;
old 1: select dbms_tns.resolve_tnsname('&_connect_identifier') from dual
new 1: select dbms_tns.resolve_tnsname('//10.196.234.38/CDB1.opcoct.oraclecloud.internal') from dual
 
DBMS_TNS.RESOLVE_TNSNAME('//10.196.234.38/CDB1.OPCOCT.ORACLECLOUD.INTERNAL')
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))

The resolution is done without attempting to contact the listener. This IP address does not exist on my network:

select dbms_tns.resolve_tnsname('//10.1.1.1/XX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//10.1.1.1/XX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XX)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.1.1)(PORT=1521)))

As you can see, the client identification is sent here (PROGRAM and HOST).

Demo

I’ll use this new feature to prove my assumption above about which environment is used when connecting locally or through dynamic or static service.

I create 3 directories with different names for the SERVICE_NAME in order to see which one is used:


mkdir -p /tmp/tns_lsnr ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_lsnr/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_lsnr/tnsnames.ora
mkdir -p /tmp/tns_sess ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_sess/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_sess/tnsnames.ora
mkdir -p /tmp/tns_inst; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_inst/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_inst/tnsnames.ora

In addition, I’ll need a listener configuration with a static service, let’s call it STATIC:


cat > /tmp/tns_lsnr/listener.ora <<END
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOSTNAME)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=$ORACLE_HOME)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
END

Here’s a summary of the different configurations:


$ tail /tmp/tns*/*
 
==> /tmp/tns_inst/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_inst/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
==> /tmp/tns_lsnr/listener.ora <==
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SE122.compute-opcoct.oraclecloud.internal)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=/u01/app/oracle/product/122EE)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
==> /tmp/tns_lsnr/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_lsnr/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
==> /tmp/tns_sess/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_sess/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

I start the listener and the instance each with their own environment, and set the session environment to the third one:


export TNS_ADMIN=/tmp/tns_lsnr ; lsnrctl start
export TNS_ADMIN=/tmp/tns_inst ; sqlplus / as sysdba <<< startup
export TNS_ADMIN=/tmp/tns_sess

Now it’s time to use this new DBMS_TNS when connecting locally, through the dynamic service (CDB1) and through the static service (STATIC):


SQL> connect system/oracle
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected locally, the TNS_ADMIN from the shell environment running sqlplus is used.


SQL> connect system/oracle@//localhost/CDB1
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected to the dynamic service, the TNS_ADMIN used to start the instance is used.


SQL> connect system/oracle@//localhost/STATIC
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected to the static service, the TNS_ADMIN used to start the listener is used.

So what?

You should use a consistent environment setting in order to be sure that all sessions will use the same name resolution. But if you have a doubt about it, DBMS_TNS can help to troubleshoot. It’s better than DBMS_SYSTEM.GET_ENV as it does the name resolution rather than just showing the environment variables.
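
For comparison, here is a quick sketch of the DBMS_SYSTEM.GET_ENV approach (you need execute privilege on DBMS_SYSTEM), which only returns the environment variable seen by the server process, without doing any name resolution:

SQL> variable tns_admin varchar2(200)
SQL> exec dbms_system.get_env('TNS_ADMIN',:tns_admin);
SQL> print tns_admin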

Want to know quickly where all database links are going? Here it is:

SQL> select username,dbms_tns.resolve_tnsname(host) from cdb_db_links;

 

Cet article Oracle 12cR2 PL/SQL new feature: TNSPING from the database est apparu en premier sur Blog dbi services.

Oracle 12cR2 : Partitioning improvements – auto-list partitioning

Tue, 2016-11-08 09:16

The second release of Oracle Database 12c (12cR2) comes with new improvements regarding partitioning.
In this blog post I’ll show one of them: auto-list partitioning.
Auto-list partitioning is an extension of list partitioning. It enables the automatic creation of partitions for new values inserted into the partitioned table.
auto-list-part
Image : “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär

Let’s have a look at the syntax. As you can see below, you only have to specify the new “AUTOMATIC” keyword after the “PARTITION BY LIST” clause:
SQL> CREATE TABLE city(
city_zip NUMBER(5),
city_name VARCHAR2(30)
)
PARTITION BY LIST (city_name) AUTOMATIC
(PARTITION p_delemont VALUES ('Delemont'));


Table created.
SQL>

The auto-list partitioned table is created with only one partition, explicitly created at table creation:
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- ------------------------------ ------------------
P_DELEMONT 'Delemont' 1

Let’s insert new cities into the table:
SQL> INSERT INTO city VALUES (2800, 'Delemont');
1 row created.

SQL> INSERT INTO city VALUES (4001, 'Basel');
1 row created.

SQL> INSERT INTO city VALUES (8001, 'Zurich');
1 row created.

SQL> INSERT INTO city VALUES (1000, null);
1 row created.

Great! Every insertion succeeded without the famous partitioning error message: “ORA-14400: inserted partition key does not map to any partition”.
And if we check the partitions…
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- -------------------- ------------------
P_DELEMONT 'Delemont' 1
SYS_P5004 'Basel' 2
SYS_P5005 'Zurich' 3
SYS_P5006 '' 4

…one partition has been automatically created for each value. Take note that a partition has also been created to store the “null” values.
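
If the system-generated names (SYS_Pxxxx) bother you, note that you can rename the automatically created partitions afterwards. A quick sketch based on the output above (the generated names will differ on your system):

SQL> ALTER TABLE city RENAME PARTITION SYS_P5004 TO p_basel;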

Now, if you want to evolve a list-partitioned table into an auto-list partitioned table, you can do it easily and in one shot:
SQL> ALTER TABLE cars SET AUTOMATIC;
ALTER TABLE cars SET AUTOMATIC
*
ERROR at line 1:
ORA-14852: SET [SUB]PARTITIONING AUTOMATIC is not legal on this table.

Oops, this error occurs because the table contains a DEFAULT partition. As you can imagine, it’s no longer necessary to have one, so you MUST drop it before moving to auto-list partitioning (take care of the data stored inside it! ;-) ):
SQL> ALTER TABLE cars DROP PARTITION (P_DEFAULT);

Table altered.

SQL> ALTER TABLE cars SET AUTOMATIC;

Table altered.

The table is now auto-list partitioned:
SQL> SELECT partitioning_type, autolist, partition_count FROM dba_part_tables WHERE table_name = 'CITY';

PARTITIONING_TYPE AUT PARTITION_COUNT
------------------------------ --- ---------------
LIST YES 4

Conclusion:
From my point of view, this partitioning improvement is a really good one. Indeed, if your list-partitioned table has a large number of distinct values, creating and managing partitions for each of them could become tedious work. But now with 12cR2 you can forget this task: partition creation for new values is automated!

 

Cet article Oracle 12cR2 : Partitioning improvements – auto-list partitioning est apparu en premier sur Blog dbi services.

EDB Postgres Advanced Server 9.6 BETA released

Tue, 2016-11-08 05:25

Yesterday EnterpriseDB released the beta of the next version of Postgres Advanced Server. You can sign up for the beta program here and download the software here.

As usual this release includes all the features of the community release of PostgreSQL (version 9.6) plus the EDB additions such as Oracle compatibility. The release notes are here.

 

Cet article EDB Postgres Advanced Server 9.6 BETA released est apparu en premier sur Blog dbi services.

Oracle Public Cloud: create a database from command line

Mon, 2016-11-07 15:11

You love the Oracle Public Cloud with its simple Web interface? Great. But what if you want to automate a database creation from the command line?
Easy with curl and the REST API.

JSON

First, you need to create a JSON file with all the information for your service.
Everything is documented: https://apicatalog.oraclecloud.com/ui/views/apicollection/oracle-public/database/1.1/serviceinstances

$ cat createrequestbody.json
 
{
"description": "Mon Nov 7 21:03:39 CET 2016",
"edition": "EE_HP",
"level": "PAAS",
"serviceName": "CDB122",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"version": "12.2.0.1",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"parameters": [
{
"type": "db",
"usableStorage": "15",
"adminPassword": "P4ss#ord",
"sid": "CDB122",
"pdbName": "PDB1",
"failoverDatabase": "no",
"backupDestination": "NONE"
}
] }

You can see that you have exactly the same information as from the GUI.

Create Instance

Then, you run the following curl command (with the cacert.pem certificate in the current directory):

$ curl --include --request POST --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" --header "Content-Type:application/json" --data @createrequestbody.json https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:03:59 GMT

Here “opcoct” is my identity domain id. You find it in the header X-ID-TENANT-NAME and the URL.
The myuser@oracle.com:P4ss#ord is the user and password in the domain.

From the GUI you can see that the creation has started:

CaptureDBAASREST01

DBaaS instance information

Here is the information for the database service:


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:07:52 GMT
content-length: 1244
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"num_nodes": 1,
"level": "PAAS",
"edition": "EE_HP",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"creation_job_id": "2738110",
"num_ip_reservations": 1,
"backup_destination": "NONE",
"failover_database": false,
"rac_database": false,
"sid": "CDB122",
"pdbName": "PDB1",
"demoPdb": "",
"listenerPort": 1521,
"timezone": "UTC",
"is_clone": false,
"clone_supported_version": "16.3.1",
"active_jobs": [
{
"active_job_operation": "start-db-compute-resources",
"active_job_id": 2738113,
"active_job_messages": [] },
{
"active_job_operation": "create-dbaas-service",
"active_job_id": 2738110,
"active_job_messages": [] }
],
"compute_site_name": "EM003_Z19",
"jaas_instances_using_service": ""
}

The status is ‘in progress’. Let’s look at the compute service.

Compute instance information

From the compute service, you can see if there is already an IP address assigned:


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122/servers
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:08:35 GMT
content-length: 430
 
[{
"status": "Running",
"creation_job_id": "2738110",
"creation_time": "2016-11-07T20:03:59.524+0000",
"created_by": "myuser@oracle.com",
"shape": "oc3",
"sid": "CDB122",
"pdbName": "PDB1",
"listenerPort": 1521,
"connect_descriptor": "CDB122:1521\/PDB1",
"connect_descriptor_with_public_ip": "null:1521\/PDB1",
"initialPrimary": true,
"storageAllocated": 142336,
"reservedIP": "",
"hostname": "CDB122"
}]

No IP address yet. I have the job id (2738110) so that I can check it later.

Job information


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:09:08 GMT
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"message": ["Starting Compute resources..."],
"job_start_date": "Mon Nov 07 20:04:01 GMT 2016",
"job_status": "InProgress",
"job_operation": "create-dbaas-service",
"job_request_params": {
"edition": "EE_HP",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"count": "2",
"provisioningTimeout": "180",
"subscriptionType": "MONTHLY",
"createStorageContainerIfMissing": "false",
"dbConsolePort": "1158",
"listenerPort": "1521",
"serviceName": "CDB122",
"namespace": "dbaas",
"version": "12.2.0.1",
"timezone": "UTC",
"pdbName": "PDB1",
"level": "PAAS",
"tenant": "opcoct",
"serviceInstance": "CDB122",
"description": "Mon Nov 7 21:03:39 CET 2016",
"failoverDatabase": "false",
"emExpressPort": "5500",
"ncharset": "AL16UTF16",
"trial": "false",
"sid": "CDB122",
"noRollback": "false",
"operationName": "create-dbaas-service",
"goldenGate": "false",
"backupDestination": "NONE",
"ibkup": "false",
"charset": "AL32UTF8",
"serviceVersion": "12.2.0.1",
"shape": "oc3",
"identity_domain_id": "opcoct",
"serviceType": "dbaas",
"usableStorage": "15",
"disasterRecovery": "false",
"server_base_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/",
"computeSiteName": "EM003_Z19",
"isRac": "false"
}
}
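
Since the provisioning takes a while, you can poll this job status from a small script until it is no longer “InProgress”. Here is a minimal sketch with curl and grep (the 60-second sleep matches the “retry-after: 60” header returned by the service):

while true
do
  job_status=$(curl --silent --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord \
   --header "X-ID-TENANT-NAME:opcoct" \
   https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110 \
   | grep -o '"job_status": *"[^"]*"')
  echo "$(date) $job_status"
  case "$job_status" in *InProgress*) sleep 60 ;; *) break ;; esac
done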

REST Endpoint

Here, my test is on the EMEA datacenter and this is why the URL starts with https://dbcs.emea.oraclecloud.com
If you don’t know, you can check on My Cloud Services where you have all information:
CaptureDBAASREST02

 

Cet article Oracle Public Cloud: create a database from command line est apparu en premier sur Blog dbi services.

Documentum story – Migrate repository cryptography from crypto key to the crypto keystore

Fri, 2016-11-04 11:45

In this blog post I will explain the migration of the crypto key (aek.key) to the crypto keystore (lockbox) for an upgraded repository. In fact, during customer activities, I have been involved in the migration of a repository hosted on a Content Server 6.7 SP2 to a new infrastructure. A Content Server 7.2 P05 was on the new infrastructure, where the lockbox was already used by other repositories. The prerequisite to achieve this migration is to have an already configured repository using the lockbox, which will help to re-encrypt the password of the database owner stored later in the dbpasswd.txt file. The repository for which we wanted to migrate the crypto key was properly upgraded to Content Server 7.2 P05 using the Content Server Configuration Program, keeping the existing aek.key from the old system. Our goal was to align the cryptography mechanism to use the lockbox.

Please note that the above instructions cannot be used if the file store is encrypted with the Trusted Content Services.

We also recommend removing the LDAP objects and re-creating them later so that the password can be properly re-encrypted and the DER and PEM files re-created under $DOCUMENTUM/dba/secure/ldapdb.

First of all we need to check which encryption algorithm is currently used, by retrieving the r_crypto_mode from the docbase config:

1> select r_crypto_mode from dm_docbase_config
2> GO
r_crypto_mode
----------------------------------------------------------------
3DES_RSA1024_SHA256
(1 row affected)

Before shutting down the repository we will retrieve a list of all users that are configured with an inline password. We will have to reset their password as the encryption mechanism will be changed.

Query to retrieve inline password users:

select user_login_name from dm_user where user_source = 'inline password';

We can now shutdown the repository:

$DOCUMENTUM/dba/dm_shutdown_repo1

It’s a best practice to back up the aek.key and the lockbox files located under the $DOCUMENTUM/dba/secure folder:

cd $DOCUMENTUM/dba
tar -czvf secure-bck-MigAek-$(date +%Y%m%d).tgz secure

It’s also a good practice to back up all files that will be updated by this procedure:

cd $DOCUMENTUM/dba/config/repo1
cp server.ini server.ini.bck-MigAek-$(date +%Y%m%d)
cp dbpasswd.txt dbpasswd.txt.bck-MigAek-$(date +%Y%m%d)

We will now connect to the Database schema to remove all encryption values. During the next startup of the repository, encryption values will be properly regenerated automatically.

sqlplus repo1/passwd@db1

Once connected you can issue the following SQL instructions:

update dm_docbase_config_s set i_crypto_key = ' ', i_ticket_crypto_key = ' ';

delete from dmi_vstamp_s where i_application = 'dm_docbase_config_crypto_key_init';
delete from dmi_vstamp_s where i_application = 'dm_docbase_config_ticket_crypto_key_init';

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_public_key_certificate_s where key_type = 1;

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_cryptographic_key_s where key_type = 1;
commit;

The repository initialization file can now be updated to move from the crypto key (aek.key) to the crypto keystore (lockbox):

vi $DOCUMENTUM/dba/config/repo1/server.ini

Inside this file, you should comment the following lines:

crypto_keystore=Local
crypto_mode=3DES_RSA1024_SHA256
crypto_keyname=aek.key

Then uncomment/add the following lines:

crypto_mode = AES256_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek

Please make sure that the crypto_mode in the first section is the same as the one retrieved before stopping the repository. The crypto_mode of the second section (the new one) can contain the value that you want. The strongest mode available is "AES256_RSA1024_SHA256" so that's probably the value that you will want to set. In our case, we faced an issue because of the old repository and we had to keep the same crypto_mode as the original, which was "3DES_RSA1024_SHA256". Therefore at the end, our server.ini file contained the following:

crypto_mode = 3DES_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek

Once this is done, we now need to encrypt the password of the database owner of the repository repo1. But how can we do this since the repository isn’t started? Simple: we need an already available repository on this content server (it’s a prerequisite actually) which is already configured to use the lockbox. We used the global registry for that purpose:

> iapi gr_dbi

Please enter a user (dmadmin):
Please enter password for dmadmin:


EMC Documentum iapi - Interactive API interface
(c) Copyright EMC Corp., 1992 - 2015
All rights reserved.
Client Library Release 7.2.0050.0084


Connecting to Server using docbase gr_dbi
[DM_SESSION_I_SESSION_START]info: "Session 010f583d80095f8a started for user dmadmin."


Connected to Documentum Server running Release 7.2.0050.0214 Linux64.Oracle
Session id is s0
API> initcrypto,c
...
OK
API> encrypttext,c,xxxx
...
DM_ENCR_TEXT_V2=AAAAEJpJA5bVkJGghYFqrik3kxJ0gaWIRNvhVmZA586sBuFx7NqKnbKSpdUYf/BvJgn10OQpoZXL1T7Y2L+RmwZRUPkWqsv139zbU7u2vw7UYKX3

We can now set the encrypted password in the dbpasswd.txt file as follows:

vi /app/dctm/server/dba/config/repo1/dbpasswd.txt

Add the following content inside this file:

DM_ENCR_TEXT_V2=AAAAEPAcQcFYDlYJ26cqrQ17oC0yXek7E+1g83e069TNkYXjwoRd5zXpZmCm4NdrNYAJDLJ4iEKEzrzFlAuGUWw25KXs/98L0AI7VJx2DLUZNkjX

When this has been done, the repository is now ready to start:

$DOCUMENTUM/dba/dm_start_repo1

Check the log files and validate that the repository has been properly started. Something similar should be displayed:

2016-09-05T09:11:28.907641 21003[21003] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase repo1 opened"

2016-09-05T09:11:28.907790 21003[21003] 0000000000000000 [DM_SERVER_I_SERVER]info: "Setting exception handlers to catch all interrupts"

2016-09-05T09:11:28.907820 21003[21003] 0000000000000000 [DM_SERVER_I_START]info: "Starting server using service name: repo1"

2016-09-05T09:11:29.402450 21003[21003] 0000000000000000 [DM_SERVER_I_LAUNCH_MTHDSVR]info: "Launching Method Server succeeded."

2016-09-05T09:11:29.420505 21003[21003] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: repo1, Host Name: content_server_01 :V4 IP)"

2016-09-05T09:11:29.420579 21003[21003] 0000000000000000 [DM_SERVER_I_IPV6_DISABLED]info: "The server can not listen on IPv6 address because the operating system does not support IPv6"

2016-09-05T09:11:29.426518 21003[21003] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: repo1_s, Host Name: content_server_01:V4 IP)"

2016-09-05T09:11:29.429969 21003[21003] 0000000000000000 [DM_SERVER_I_IPV6_DISABLED]info: "The server can not listen on IPv6 address because the operating system does not support IPv6"

2016-09-05T09:11:31.162134 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent master (pid : 21060, session 010095b980000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:31.163095 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21061, session 010095b98000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:32.165584 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21074, session 010095b98000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:33.167476 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21087, session 010095b98000000c) is started sucessfully."
2016-09-05T09:11:34.167673 21003[21003] 0000000000000000 [DM_SERVER_I_START]info: "Sending Initial Docbroker check-point "

2016-09-05T09:11:34.188560 21003[21003] 0000000000000000 [DM_MQ_I_DAEMON_START]info: "Message queue daemon (pid : 21105, session 010095b980000456) is started sucessfully."
2016-09-05T09:11:34.488373 21102[21102] 010095b980000003 [DM_DOCBROKER_I_PROJECTING]info: "Sending information to Docbroker located on host (content_server_01) with port (1490). Information: (Config(repo1), Proximity(1), Status(Open), Dormancy Status(Active))."
Mon Sep 05 09:12:28 2016 [INFORMATION] [AGENTEXEC 21546] Detected during program initialization: Version: 7.2.0050.0214 Linux64
Mon Sep 05 09:12:32 2016 [INFORMATION] [AGENTEXEC 21546] Detected during program initialization: Agent Exec connected to server repo1: [DM_SESSION_I_SESSION_START]info: "Session 010095b980502d02 started for user dmadmin."

 

The migration of the encryption key to the encryption keystore is nearly completed. Two tasks remain: resetting the passwords of the internal Documentum users such as dm_bof_registry, dmc_wdk_preferences_owner and dmc_wdk_presets_owner, and resetting the passwords of the application-specific users (the list created just before stopping the repository), but this will not be described in this blog.

As said before, if the LDAP configuration is required, it is better to remove the LDAP objects before the migration and recreate them now that the repository is running properly using the new encryption mechanisms.

All the steps and information to migrate the encryption mechanisms to the lockbox have been provided and I hope that they can be useful for you.

 

Cet article Documentum story – Migrate repository cryptography from crypto key to the crypto keystore est apparu en premier sur Blog dbi services.

SQL Server AlwaysOn availability groups and statistic issues on secondaries

Fri, 2016-11-04 11:23

I would like to share with you an interesting issue you may face while using SQL Server AlwaysOn availability groups and secondary read-only replicas. For those who use secondary read-only replicas as reporting servers, keep reading this blog post because it is about update statistics behavior on the secondary replicas and, as you may know, cardinality estimation accuracy is an important part of query performance in this case.

So a couple of days ago, I had an interesting discussion with one of my French MVP friends about an availability group issue he faced at a customer site and related to the above topic. Without going into details now, he was surprised to see a difference between the primary replica and one secondary about the last update statistic dates as well as the rows sampled value for specific statistics. The concerned environment runs SQL Server 2014 SP2.

First of all, let’s say that having different update statistic dates between a primary and a secondary is part of a normal process. Indeed, changes related to statistic objects are replicated over time from the primary to the secondaries, and it is not uncommon to encounter situations where data on the primary is updated until the update statistic threshold is reached and an auto update statistics is then triggered by SQL Server on the secondary replica by the Reporting queries. But what is more surprising is that this behavior noticed by my friend was exacerbated in his specific context with large tables.

Let’s demonstrate with an example. I was able to easily reproduce the issue on my environment but I was also interested in testing behaviors from different versions of SQL Server in response to this specific issue. In fact, I tried to figure out if the problem concerned only a specific build of SQL Server – SQL Server 2014 SP2 in this case – or if the problem concerns all the SQL Server versions.

 

blog 108 - 0 - demo archi

 

Let’s use the AdventureWorks2012 database with the bigTransactionHistory table that contains roughly 34 million rows (343910073 rows).

The statistic information you will see later in this blog post comes from each secondary, each running a specific SQL Server version (respectively 2012, 2014 and 2016), and was gathered by using the following T-SQL script on each secondary replica.

use AdventureWorks2012;

select 
	object_name(s.object_id) as table_name,
	s.name as stat_name,
	s.is_temporary,
	ds.last_updated,
	ds.modification_counter,
	ds.rows,
	ds.rows_sampled,
	CAST(ds.modification_counter * 100. / ds.rows AS DECIMAL(5,2)) AS modif_ratio,
	ds.steps
from sys.stats as s (nolock)
cross apply sys.dm_db_stats_properties(s.object_id, s.stats_id) as ds
where s.object_id = object_id('dbo.bigTransactionHistory');
go

 

Let’s begin with the starting scenario where I inserted approximately 20% of the initial data into the bigTransactionHistory table, as you may notice below. During the test we will focus only on the idx_cl_big_TransactionHistoryTransactionDate statistic related to the clustered index on the bigTransactionHistory table.

blog 108 - 1 - stats after modif and before update fullscan

Let’s then update the idx_cl_big_TransactionHistoryTransactionDate statistic with FULLSCAN from the primary replica. This operation may be part of a maintenance plan on a daily/monthly basis depending on your scenario. Here is a picture of the statistic information from each secondary:

blog 108 - 2 - stats after modif and after update fullscan

Regarding this first output, we may notice that the modification counter from the sys.dm_db_stats_properties DMF did not drop to zero. To be honest I expected a value equal to 0 here. In addition, executing a Reporting query from each secondary did not have any effect in this case. The Reporting query is as follows and it is designed to specifically use the idx_cl_big_TransactionHistoryTransactionDate statistic.

use AdventureWorks2012;


select count(*)
from dbo.bigTransactionHistory
where TransactionDate between '20060101' and '20080101';
go

 

Let’s keep going and continue by inserting a new bunch of data (approximately 10% more). After running a new update statistics operation with FULLSCAN from the primary (let’s say we are again in the situation where a maintenance plan comes into play), here is the corresponding statistic information output from each secondary:

blog 108 - 3 - stats after insert data and after update fullscan

As expected, the modification counter value increased up to 24% of the rows but, once again, we may notice that running update statistics with FULLSCAN on the primary doesn’t correctly reset the modification counter on the secondary, regardless of the SQL Server version. Let’s run the Reporting query from each secondary and have a look at the statistic information output:

blog 108 - 4 - stats after insert data and after update fullscan and read data from secondary

Well, it seems that some update statistics work came into play but surprisingly we get different results between versions. Let’s focus first on SQL Server 2012 (first line) where an auto update statistics operation was triggered by SQL Server. Thus the last_updated column value was updated, the concerned statistic is now defined as temporary on the concerned secondary replica and the rows sampled value is different from the previous step (423928 vs 45774517). This is because SQL Server used a default sampling algorithm in this case, which does not correspond to the one used in the previous step with the FULLSCAN method.

Then if we move quickly to the last two lines (respectively SQL Server 2014 and 2016), we may notice only one change, which concerns the is_temporary column, and no changes concerning either the last update statistic date, the modification counter value or the sampled rows. At this point, I’m not sure I understand the reason. Is it a metadata issue? Is it normal behavior? Well, I will come back to update this section if I get further information.

Let’s continue by performing the same previous tasks (insert a bunch of data and then update statistics with FULLSCAN from the primary). The statistic output from each secondary is as follows:

 

blog 108 - 7 - stats after update fullscan and 2 insert

I’m sure you are beginning to understand what’s happening here. The update statistics with FULLSCAN from the primary replica seems to never reset the modification counter on each secondary. This is a big problem because if we execute the Reporting query on each secondary, we now have a good chance of quickly invalidating what has been done by the update statistics operation with FULLSCAN from the primary. In our context, the main concern is the sampled rows value that can lead to inaccurate cardinality estimations. Let’s run the Reporting query again from each secondary and have a look at the corresponding statistics information output:

blog 108 - 8 - stats after update fullscan and 2 insert and read data from secondary

Got it! This time, each concerned statistic switched to temporary and the modification counter was reset to 0. Note also that a default sampling was used in this case, in contrast to the previous picture that concerned update statistics with FULLSCAN. And if I keep going this way, I will face the same behavior every time, for all versions of SQL Server since 2012.

Let’s summarize the situation: in common scenarios, we would probably not pay attention to this specific issue because cardinality estimation will not be affected and the statistics concerned by Reporting queries on the secondaries are smaller by orders of magnitude. But my friend encountered a specific scenario with large tables where the issue is magnified.

As a workaround, you may consider excluding the concerned statistic(s) from the auto update statistics algorithm (NORECOMPUTE option). You may also vote for the following connect item opened by my friend if it makes sense for you!
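
For example, something like this from the primary excludes the statistic from the auto update statistics mechanism — keep in mind that you then become responsible for refreshing it in your own maintenance plan:

USE AdventureWorks2012;
GO

UPDATE STATISTICS dbo.bigTransactionHistory (idx_cl_big_TransactionHistoryTransactionDate)
WITH FULLSCAN, NORECOMPUTE;
GO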

Hope it helps!

 

 

 

Cet article SQL Server AlwaysOn availability groups and statistic issues on secondaries est apparu en premier sur Blog dbi services.

Dbvisit replicate when you flashback the target

Thu, 2016-11-03 16:00

I had a quick question today about Dbvisit replicate as a solution to feed an Operational Data Store. The need is to run some end-of-month jobs on data from a specific state. The best way is to stop the APPLY at that fixed state for the duration of the job. Then no need for flashback query and no risk of ORA-1555. And what if we know this state only afterwards? Easy if the target is in Enterprise Edition, running in FLASHBACK ON. Then how to continue the APPLY? It’s easy with Dbvisit replicate. Here is an example.

One solution would be to create a restore point before doing the flashback, in order to be able to return to the state prior to the flashback and restart the APPLY where it was stopped. But if we do that, we need to open the database READ ONLY for running the job on it.
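As a sketch, this first approach could look like the following on the target (the timestamp and the restore point name are just examples):

SQL> create restore point before_job_flashback guarantee flashback database;
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to timestamp timestamp'2016-11-04 09:00:00';
SQL> alter database open read only;
-- run the end-of-month job here, then roll the database forward again
-- (recover database and open it read write) before restarting the APPLY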
The other solution is to be able to re-start the apply at the point where the database has been flashed back. Do you think we have to get the right SCN, reset some configuration with it, etc? No. There’s something very simple and practical with Dbvisit replicate:

  • The MINE configuration is stored in the source database
  • The APPLY configuration is stored in the destination database

When you install Dbvisit for the first time, you may wonder why there is not only one repository. But this architecture makes it easy as the metadata is stored with the related data.

In my case, if I flashback the target database, the APPLY repository is flashed back as well, so the APPLY continues exactly at the right point.

But just in case let’s test it.
I use my old #repattack VMs, which explains why the dbvrep version is old, and why the time is in the New Zealand time zone. Here is a running replication:

/ Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6648759 (11/04/2016 09:14:08).
APPLY IS running. Currently at plog 392 and SCN 6648742 (11/04/2016 09:14:06).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:210/210 Unrecov:0/0 Applied:210/210 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ADDRESSES: 100% Mine:210/210 Unrecov:0/0 Applied:210/210 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.CARD_DETAILS: 100% Mine:195/195 Unrecov:0/0 Applied:195/195 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ORDER_ITEMS: 99% Mine:1553/1553 Unrecov:0/0 Applied:1550/1550 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ORDERS: 99% Mine:1245/1245 Unrecov:0/0 Applied:1243/1243 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.INVENTORIES: 99% Mine:1523/1523 Unrecov:0/0 Applied:1521/1521 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.LOGON: 99% Mine:1493/1493 Unrecov:0/0 Applied:1491/1491 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
--------------------------------------------------------------------------------------------------------------------------------------------

I want to flash back to a quarter of an hour earlier, at 09:00, and because I’m in Oracle XE I’ll restore rather than flashback:

[oracle@target ~]$ rman target /
 
Recovery Manager: Release 11.2.0.2.0 - Production on Fri Nov 4 09:10:40 2016
 
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: XE (DBID=2736105154)
 
RMAN> startup force mount;
 
Oracle instance started
database mounted
...

RMAN> restore database until time "timestamp'2016-11-04 09:00:00'";
 
Starting restore at 04-NOV-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=10 device type=DISK

RMAN> recover database until time "timestamp'2016-11-04 09:00:00'";
 
Starting recover at 04-NOV-16
using channel ORA_DISK_1
 
starting media recovery
...
media recovery complete, elapsed time: 00:02:17
Finished recover at 04-NOV-16
 
RMAN> sql "alter database open resetlogs";
sql statement: alter database open resetlogs

So the APPLY stopped because the destination was down, but the MINE continues:

- Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6654476 (11/04/2016 09:27:12).
Could not connect to APPLY process. Process not started or connection refused.
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: ---% Mine:300/300 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ADDRESSES: ---% Mine:300/300 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.CARD_DETAILS: ---% Mine:277/277 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ORDER_ITEMS: ---% Mine:2178/2178 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ORDERS: ---% Mine:1735/1735 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.INVENTORIES: ---% Mine:2129/2129 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.LOGON: ---% Mine:2109/2109 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
--------------------------------------------------------------------------------------------------------------------------------------------

My database has been flashed back to its state at 09:00 and I can do what I want. Then I just re-start the APPLY:

[oracle@target dbvrep_XE]$ sh dbvrep_XE-run-target.sh
Initializing......done
DDC loaded from database (352 variables).
Dbvisit Replicate version 2.7.06.4485
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /u01/app/oracle/dbvrep_XE/dbvrep_XE-APPLY.ddc loaded.
Starting process APPLY...started

and there is nothing to do manually: it restarts from where it needs to, applying the changes from 09:00

/ Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6655130 (11/04/2016 09:28:33).
APPLY IS running. Currently at plog 392 and SCN 6645037 (11/04/2016 09:05:53).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 46% Mine:312/312 Unrecov:0/0 Applied:145/12 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ADDRESSES: 46% Mine:312/312 Unrecov:0/0 Applied:145/12 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.CARD_DETAILS: 46% Mine:289/289 Unrecov:0/0 Applied:135/11 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ORDER_ITEMS: 49% Mine:2247/2247 Unrecov:0/0 Applied:1105/80 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ORDERS: 49% Mine:1793/1793 Unrecov:0/0 Applied:890/60 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.INVENTORIES: 49% Mine:2199/2199 Unrecov:0/0 Applied:1083/77 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.LOGON: 48% Mine:2183/2183 Unrecov:0/0 Applied:1055/75 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
--------------------------------------------------------------------------------------------------------------------------------------------

This is quick as it has already been mined. The APPLY quickly resolved the gap:

- Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6655589 (11/04/2016 09:29:36).
APPLY IS running. Currently at plog 392 and SCN 6655567 (11/04/2016 09:29:34).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 99% Mine:319/319 Unrecov:0/0 Applied:317/184 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ADDRESSES: 99% Mine:319/319 Unrecov:0/0 Applied:317/184 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.CARD_DETAILS: 99% Mine:296/296 Unrecov:0/0 Applied:294/170 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ORDER_ITEMS: 99% Mine:2291/2291 Unrecov:0/0 Applied:2289/1264 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ORDERS: 100% Mine:1828/1828 Unrecov:0/0 Applied:1828/998 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.INVENTORIES: 100% Mine:2238/2238 Unrecov:0/0 Applied:2238/1232 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.LOGON: 99% Mine:2235/2235 Unrecov:0/0 Applied:2232/1252 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
--------------------------------------------------------------------------------------------------------------------------------------------

Things are simple when they are well designed. The APPLY has to synchronize with the destination, so they have put the APPLY repository into the destination. No problem if the replication went further than the point in time you need to freeze. Any solution is possible as long as the whole database is consistent: flashback database, PITR, snapshots, etc.

 

Cet article Dbvisit replicate when you flashback the target est apparu en premier sur Blog dbi services.

Momentum16 – Day 3 – Documentum with Docker

Thu, 2016-11-03 13:42

The last day of the Momentum 16 in Barcelona has been a short one… Indeed there were presentations only in the morning and the event ended at 13:00 with the farewell lunch. So this will be a short blog too!

 

So first thing in the morning, I had the opportunity to attend a very good presentation about containerizing your on-premise environment with Docker. This was actually the main purpose of this third day for me: learn more about Documentum with Docker. Dennis Buis (ING) and Mike Mohen (Dell EMC) presented to us what has been done at ING and what is coming soon with the Documentum 7.3 release around Docker. Dell EMC will soon provide Docker images, each containing one of the numerous components that will be available: Content Server + DocBroker, JMS, D2 Client, D2 Config, DA, BPM, and so on… The purpose of Documentum with Docker is to accelerate your deployment/upgrade processes by separating all components into simple containers. This is really a huge challenge because there are a lot of open questions BUT once it is mature enough and once the processes and documentation are complete enough, it will be really great. We will be able to build complete environments within a few days instead of weeks, with the assurance that everything will always be set up in the same way because images are images! You can use them as many times as you want and you will always have the same result.

Mike Mohen presenting Docker @ Momentum 16
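To make the idea a bit more concrete, here is what deploying two of those containers could look like. This is only a sketch: the image names and the web port are assumptions, not the actual Dell EMC deliverables (1489 is simply the usual DocBroker port):

# pull and start a (hypothetical) Content Server + DocBroker image
docker pull myregistry/dctm-contentserver:7.3
docker run -d --name cs1 -p 1489:1489 myregistry/dctm-contentserver:7.3
# start a (hypothetical) Documentum Administrator image next to it
docker run -d --name da1 -p 8080:8080 --link cs1 myregistry/dctm-da:7.3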

 

After this first session, I wanted to practice/play a little bit with Docker and therefore I participated in a hackathon simply named "Using Docker with Documentum". In this hackathon, I was able to play with the deployment and upgrade processes of Documentum with Docker, as well as creating a custom DA image from a Dockerfile. In addition to that, I also had to look at the deployment models, the security and the data management/storage. For the latter, the point is simply that the data should not be stored inside the containers: if a container is upgraded or replaced, everything stored inside it is lost, and that's not what you want ;). A minimal volume-based sketch of that idea follows.
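Here is a minimal sketch of that last point, with hypothetical image names and paths: the repository data lives on a named Docker volume, so upgrading means replacing the container while the volume, and therefore the data, stays in place:

# create a named volume and mount it into the (hypothetical) Content Server container
docker volume create dctm_data
docker run -d --name cs1 -v dctm_data:/data myregistry/dctm-contentserver:7.3
# upgrading replaces the container, not the volume
docker rm -f cs1
docker run -d --name cs1 -v dctm_data:/data myregistry/dctm-contentserver:7.3.1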

 

The morning ended with a closing keynote by Beau Lotto (neuroscientist and founder of Lotto Lab), which was actually very funny. He explained and showed us, among other things, why the brain is an extraordinary thing and why information, by itself, is meaningless! I will let you think about that ;).

 

So in conclusion, the first day of the Momentum 16 was a little bit boring, the second one was a little bit better and finally the third one was too short but really interesting.

 

Cet article Momentum16 – Day 3 – Documentum with Docker est apparu en premier sur Blog dbi services.

12c online move datafile in same filesystem.

Wed, 2016-11-02 16:03

On Linux, when you move a datafile with "mv" within the same filesystem, it's just a rename: there is no copy. In 12c you can move a datafile online, where Oracle takes care of the move at OS level. But be careful: even if you stay in the same filesystem, moving a datafile online does a copy of the file.

Linux mv

I have a file, /u01/oradata/test1.txt and I move it to /u01/oradata/test2.txt within the same filesystem:

mv /u01/oradata/test1.txt /u01/oradata/test2.txt

Actually, I’m running it with strace, tracing file operations for these files:

strace -e trace=file mv /u01/oradata/test1.txt /u01/oradata/test2.txt

Then I can see clearly that there is no open() call but just a rename():

execve("/usr/bin/mv", ["mv", "/u01/oradata/test1.txt", "/u01/oradata/test2.txt"], [/* 29 vars */]) = 0
stat("/u01/oradata/test2.txt", 0x7ffcfa624270) = -1 ENOENT (No such file or directory)
lstat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
lstat("/u01/oradata/test2.txt", 0x7ffcfa623f20) = -1 ENOENT (No such file or directory)
rename("/u01/oradata/test1.txt", "/u01/oradata/test2.txt") = 0

If I do the same to another filesystem:

strace -e trace=file mv /u01/oradata/test2.txt /u02/oradata/test2.txt

a rename() is attempted:

execve("/usr/bin/mv", ["mv", "/u01/oradata/test2.txt", "/u02/oradata/test2.txt"], [/* 29 vars */]) = 0
stat("/u02/oradata/test2.txt", 0x7fff1e2b3340) = -1 ENOENT (No such file or directory)
lstat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
lstat("/u02/oradata/test2.txt", 0x7fff1e2b2ff0) = -1 ENOENT (No such file or directory)
rename("/u01/oradata/test2.txt", "/u02/oradata/test2.txt") = -1 EXDEV (Invalid cross-device link)

but because it’s another filesystem, the “mv” command will do like a “cp”, open the source in read and the target in write, creating it if not exist:

unlink("/u02/oradata/test2.txt") = -1 ENOENT (No such file or directory)
lgetxattr("/u01/oradata/test2.txt", "security.selinux", "unconfined_u:object_r:unlabeled_t:s0", 255) = 37
open("/u01/oradata/test2.txt", O_RDONLY|O_NOFOLLOW) = 3
open("/u02/oradata/test2.txt", O_WRONLY|O_CREAT|O_EXCL, 0600) = 4
newfstatat(AT_FDCWD, "/u01/oradata/test2.txt", {st_mode=S_IFREG|0644, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
unlinkat(AT_FDCWD, "/u01/oradata/test2.txt", 0) = 0

Move datafile

Let’s do the same from the database where I’ve created the same datafile:

SQL> create tablespace DEMO datafile '/u01/oradata/test1.txt' size 5M;
Tablespace created.

I get my shadow process PID:

SQL> set define %
SQL> column spid new_value spid
SQL> select spid from v$process join v$session on paddr=addr where sid=sys_context('userenv','sid');
 
SPID
------------------------
7257
 
SQL> host ps -fp %spid
UID PID PPID C STIME TTY TIME CMD
oracle 7257 7256 93 21:35 ? 00:00:03 oracleCDB1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

and run strace on it:

SQL> host strace -o /tmp/tmp.log -e trace=file -p %spid &

I move the file online to the same filesystem:

SQL> alter database move datafile '/u01/oradata/test1.txt' to '/u01/oradata/test2.txt';
Database altered.

and read the trace:
grep /test /tmp/tmp.log


stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
statfs("/u01/oradata/test1.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014477, f_bavail=853095, f_files=786432, f_ffree=672545, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
open("/u01/oradata/test1.txt", O_RDWR|O_DSYNC) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
statfs("/u01/oradata/test1.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014477, f_bavail=853095, f_files=786432, f_ffree=672545, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
statfs("/u01/oradata/test1.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014477, f_bavail=853095, f_files=786432, f_ffree=672545, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11

At this point the source datafile is opened for reading. It continues with:

open("/u01/oradata/test2.txt", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/u01/oradata/test2.txt", 0x7ffd0201e5d8) = -1 ENOENT (No such file or directory)
open("/u01/oradata/test2.txt", O_RDWR|O_CREAT|O_EXCL|O_SYNC, 0660) = 11

and now the destination datafile is opened for writing, created if it does not exist (existence had been tested just before). Then the copy starts:

stat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test2.txt", O_RDONLY) = 11
stat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test2.txt", O_RDONLY) = 11
statfs("/u01/oradata/test2.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014475, f_bavail=853093, f_files=786432, f_ffree=672544, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test2.txt", O_RDONLY) = 11
open("/u01/oradata/test2.txt", O_RDWR) = 11
open("/u01/oradata/test2.txt", O_RDWR|O_DSYNC) = 11

at the end, because I didn’t use the ‘KEEP’ option, so the source file is deleted:

stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
unlink("/u01/oradata/test1.txt") = 0
stat("/u01/app/oracle/diag/rdbms/cdb1/CDB1/log/test", {st_mode=S_IFDIR|0750, st_size=4096, ...}) = 0

So what?

As with any online move operation, you need twice the space during the copy. Here, the source file is still used for reads and writes until the move finishes, so it cannot be a simple rename. A rename would require putting the datafile offline, even for a short time. If you can afford a small downtime, you may prefer the offline rename ("mv" at OS level, ALTER DATABASE RENAME FILE at database level), as sketched below.
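For completeness, here is a minimal sketch of that offline alternative on the same example, assuming the DEMO tablespace can be taken offline for the duration of the operation:

SQL> alter tablespace DEMO offline normal;
SQL> host mv /u01/oradata/test1.txt /u01/oradata/test2.txt
SQL> alter database rename file '/u01/oradata/test1.txt' to '/u01/oradata/test2.txt';
SQL> alter tablespace DEMO online;

Because the tablespace is offline during the "mv", this is a true rename on the filesystem: no second copy of the file, but a short unavailability of the data.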

 

Cet article 12c online move datafile in same filesystem. est apparu en premier sur Blog dbi services.

Momentum16 – Day 2 – C6, Load Testing and REST API

Wed, 2016-11-02 13:27

So today was the second day of the Momentum 16 in Barcelona and, as expected, this day contained many more technical presentations and discussions than the first day. I attended several interesting presentations and hackathon/hands-on sessions as well, so I haven't really had time to rest today.

 

So first thing in the morning, I started with a two-hour hackathon session about how to improve Documentum reliability and performance. This hackathon was led by Henrik Rexed from Neotys. This company provides Neoload, a load and performance testing tool that realistically simulates user activity and monitors infrastructure behaviour, so you can eliminate bottlenecks in all your web and mobile applications. This load testing solution isn't tied to Documentum (you can actually load test everything), but they do have some experience with Documentum, which makes it even better. In this session, Henrik presented the methodology that should be used to load test and monitor each and every application, starting during the project phase with unit testing and unit load, and ending with complete SLA verifications by load testing the whole application. One important thing to remember is that load testing each component one by one is good, but that's not enough: sometimes your system will break only when all components are solicited together, and that's what you should test in the end!

 

After that, I attended a session presented by Mirko Weller (Volkswagen) and Christian Meier (fme) about how 60k users access the Documentum Platform at Volkswagen. This wasn't actually a technical presentation, but I wanted to follow it because we, at dbi services, also manage a similar Documentum Platform, with that many users and environments/instances, at one of our customers. I thought it would be good to compare a little bit with what others are doing and to exchange impressions and ideas about how to improve our Platform.

 

Finally, at the end of the morning, I attended a session about how to provide the right interface to your users at the right time. Boris Carbonneill (C6) and Hugo Simon-Chautemps (Sanofi) presented the "C6 Mobile" application, the "C6 Office" interface as well as their "any" solution. The purpose of these three products is to help your users access their documents whenever they want, with the best possible user experience. I actually found these applications/UIs quite good: there is a good balance in the sizing, colors, features displayed, aso…

  • The C6 Mobile application can be used to access documents while traveling, directly from your phone, and to quickly review/approve your tasks.
  • C6 Office is an add-in for Microsoft Office (e.g. PowerPoint and Excel) that lets you create/update documents based on templates defined in D2 and on rules/permissions/ACLs defined in D2-Config, without having to launch an internet browser. What you have to do is just start Office, connect to the repository and work directly inside Microsoft's application. For the moment it apparently does not work for Outlook. Maybe in a future version?
  • The any solution can be used in the Cloud to merge all kinds of repositories coming from Documentum, SharePoint, OpenText (and more) and store all that in one place. This solution actually makes me think of InfoArchive, which provides similar capabilities plus some other/different features… You can synchronize documents from D2 to the Cloud to share them with external people, for example, or just do that to keep a duplicate copy in case it is needed (e.g. copying IQs/OQs/DR/IT documents to the Cloud might be a good idea: if there is a disaster and your DMS isn't there anymore, you would still have enough to actually execute and document your Disaster Recovery!)

 

At the beginning of the afternoon, I followed another two-hour hackathon on the following subject: Developing Custom Applications with Documentum APIs. I'm not a Documentum developer, but I wanted to do this hackathon so that I could learn a few things about the Documentum REST API, and it was a good session. There were three different levels for this hackathon: beginner, intermediate and advanced. I chose the intermediate one and I had to build an application using Angular and the Documentum REST API. That was actually pretty fun and I was able to play with the API calls, aso… Really cool. A tiny example of such REST calls is shown below.
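To give an idea of what those calls look like, here is a minimal sketch using curl against the Documentum REST services. The host, port, credentials and repository name are assumptions for illustration; only the /dctm-rest resource layout follows the usual conventions:

# list the repositories exposed by the REST server
curl -u dmadmin:password "http://restserver:8080/dctm-rest/repositories.json"
# list the cabinets of one repository
curl -u dmadmin:password "http://restserver:8080/dctm-rest/repositories/MYREPO/cabinets.json"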

 

This concludes the summary of the second day. See you tomorrow for the third and last one!

 

Cet article Momentum16 – Day 2 – C6, Load Testing and REST API est apparu en premier sur Blog dbi services.
