Yann Neuhaus

dbi services technical blog

Unstructured vs. structured

Sat, 2017-11-18 01:13

The title of this blog post was: “Tracing DBMS_RCVMAN for reclaimable archivelogs” until I started to write the conclusion…

In a previous post I mentioned that there’s a bug with the archivelog deletion policy when you mention both ‘BACKED UP … TIMES TO …’ and ‘APPLIED’ (or ‘SHIPPED’) as conditions for archived logs to be reclaimable. I opened an SR, they didn’t even try to reproduce it (and I can guarantee you can reproduce it in 2 minutes on any currently supported version), so I traced it myself to understand the bug and suggest the fix.

I traced DBMS_RCVMAN with the Kernel Recovery Area SQL tracing event:

SQL> alter session set events 'trace[kra_sql] disk high, memory disable';
SQL> exec dbms_backup_restore.refreshAgedFiles;
SQL> alter session set events 'trace[kra_sql] off';

I know that refreshAgedFiles checks for reclaimable files in the FRA because of an old bug where we had to run it manually on databases in mount.

I compared the traces when changing the order of ‘APPLIED’ and ‘BACKED UP’ and found the following:

< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: setRedoLogDeletionPolicy with policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: setRedoLogDeletionPolicy with policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK
5340c5340
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK
5343c5343
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING setRedoLogDeletionPolicy with policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY with alldest = 1
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING setRedoLogDeletionPolicy with policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK with alldest = 1
5350,5351c5350,5351
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption devtype=DISK
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption backed up conf - devtype=DISK , backups=1
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption devtype=DISK
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption backed up conf - devtype=DISK, backups=1
5363c5363
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpAl with TRUE
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpAl with key = 128 stamp = 958068130
5367c5367
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpFiles with: no_data_found
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpFiles

You see at the top the difference in the way I mentioned the deletion policy. You see at the bottom that the first one (starting with ‘BACKED UP’) didn’t find archivelogs being backed up (no_data_found). But the second one (starting with ‘APPLIED’) mentioned the sequence# 128.

But if you look carefully, you see another difference in the middle: the “devtype=DISK” has an additional space before the comma in the first case.

So I traced a bit further, including SQL_TRACE, and I found that DBMS_RCVMAN just uses INSTR() and SUBSTR() parsing on the deletion policy text to find the policy, the number of backups, and the device type. Of course, looking for backups with DEVICE_TYPE='DISK ' instead of 'DISK' will not find anything, and this is the reason for the bug: no archived logs considered as backed up means no archived logs reclaimable.

If you look closer at DBMS_RCVMAN you will find that the device type is extracted with SUBSTR(:1, 1, INSTR(:1, ' ')) when the device type is followed by a space, which is the reason for the additional space. The correct extraction should be SUBSTR(:1, 1, INSTR(:1, ' ')-1), and this is what I suggested in the SR.
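
To see the off-by-one directly, here is a minimal sketch (a made-up policy fragment, not the actual DBMS_RCVMAN code): INSTR() returns the position of the space itself, so a SUBSTR() up to that position keeps the trailing space.

SQL> select substr('DISK APPLIED ON ALL STANDBY',1,instr('DISK APPLIED ON ALL STANDBY',' ')) as with_space,
  2         substr('DISK APPLIED ON ALL STANDBY',1,instr('DISK APPLIED ON ALL STANDBY',' ')-1) as without_space
  3  from dual;

The first expression returns 'DISK ' (with the trailing space) and the second one returns 'DISK'.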

So what?

Writing the conclusion made me change the title. Currently, a lot of people are advocating for unstructured data. Because it is easy (which rhymes with ‘lazy’). Store information as it comes and postpone the parsing into a more structured data type until you need to process it. This seems to be how the RMAN configuration is stored: as the text we entered, parsed later with simple text functions such as INSTR(), SUBSTR(), and LIKE. But you can see how a little bug, such as reading one additional character, has big consequences. If you look at the archivelog deletion policy syntax, you have a 50% chance of running into this bug on a Data Guard configuration. The Recovery Area fills up and your database is blocked. The controlfile grows. Or you notice it before and run a ‘delete archivelog’ statement without knowing the root cause, removing recovery files from local storage that could have been kept longer, which is a waste of the recovery area space. If the deletion policy were parsed immediately when entered, like SQL DDL or PL/SQL APIs are, the issue would have been detected a long time ago. Structure and strong typing are the way to build robust applications.

 


CBO, FIRST_ROWS and VIEW misestimate

Thu, 2017-11-16 23:36

There are several bugs with the optimizer in FIRST_ROWS mode. Here is one I encountered during a 10.2.0.4 to 12.2.0.1 migration when a view had an ‘order by’ in its definition.

Here is the test case that reproduces the problem.

A big table:

SQL> create table DEMO1 (n constraint DEMO1_N primary key,x,y) as select 1/rownum,'x','y' from xmltable('1 to 1000000');
Table DEMO1 created.

with a view on it, and that view has an order by:

SQL> create view DEMOV as select * from DEMO1 order by n desc;
View DEMOV created.

and another table to join to:

SQL> create table DEMO2 (x constraint DEMO2_X primary key) as select dummy from dual;
Table DEMO2 created.

My query reads the view in a subquery, adds a call to a PL/SQL function, and joins the result with the other table:


SQL> explain plan for
select /*+ first_rows(10) */ *
from
( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc;
 
Explained.

You can see that I run it with FIRST_ROWS(10) because I actually want to fetch the top-10 rows when ordered by N. As N is a number, there is an index on it, and there are no nulls (it is the primary key), I expect to read the first 10 entries from the index, call the function for each of them, and then nested-loop to the other table.
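
As a side note (a hedged aside, not part of the original test case): in 12c the top-10 requirement could also be written explicitly with the row-limiting clause, which is essentially what the FIRST_ROWS(10) hint asks the optimizer to optimize for:

SQL> select /*+ first_rows(10) */ *
  2  from ( select v.*,dbms_random.value from DEMOV v)
  3  where x in (select x from DEMO2)
  4  order by n desc
  5  fetch first 10 rows only;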

In the situation where I encountered it, this is what was done in 10g, but when migrated to 12c the query was very long because it called the PL/SQL function for millions of rows. Here is the plan in my example:


SQL> select * from dbms_xplan.display(format=>'+projection');
 
PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 2046425878
 
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | | 7 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 21 | | 7 (0)| 00:00:01 |
| 2 | VIEW | DEMOV | 902 | 17138 | | 7 (0)| 00:00:01 |
| 3 | SORT ORDER BY | | 968K| 17M| 29M| 6863 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL | DEMO1 | 968K| 17M| | 1170 (1)| 00:00:01 |
| 5 | VIEW PUSHED PREDICATE | VW_NSO_1 | 1 | 2 | | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | DEMO2_X | 1 | 2 | | 0 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("X"="V"."X")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=0) "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
2 - "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
3 - (#keys=1) INTERNAL_FUNCTION("N")[22], "X"[CHARACTER,1], "Y"[CHARACTER,1]
4 - "N"[NUMBER,22], "X"[CHARACTER,1], "Y"[CHARACTER,1]

A full table scan of the big table, with a call to the PL/SQL function for each row and a sort operation on all rows. Then the top-10 rows are filtered and the nested loop operates on them. But you can see the problem here: the cost of the full table scan and the order by has been evaluated correctly, but the cost at the VIEW operation and above is minimized.

My interpretation (but it is just a quick guess) is that the rowset is marked as ‘sorted’ and then the optimizer considers that the cost to get the first rows is minimal (as if it were coming from an index). However, this just ignores the initial cost of producing this rowset.

I can force with a hint the plan that I want – index full scan to avoid a sort and get the top-10 rows quickly:

SQL> explain plan for
select /*+ first_rows(10) INDEX_DESC(@"SEL$3" "DEMO1"@"SEL$3" ("DEMO1"."N")) */ *
from
( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc;
 
Explained.

This plan is estimated with a higher cost than the previous one and this is why it was not chosen:

SQL> select * from dbms_xplan.display(format=>'+projection');
PLAN_TABLE_OUTPUT
Plan hash value: 2921908728
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | 9 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 21 | 9 (0)| 00:00:01 |
| 2 | VIEW | DEMOV | 902 | 17138 | 9 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 968K| 17M| 8779 (1)| 00:00:01 |
| 4 | INDEX FULL SCAN DESCENDING| DEMO1_N | 968K| | 4481 (1)| 00:00:01 |
| 5 | VIEW PUSHED PREDICATE | VW_NSO_1 | 1 | 2 | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | DEMO2_X | 1 | 2 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("X"="V"."X")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=0) "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
2 - "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
3 - "N"[NUMBER,22], "X"[CHARACTER,1], "Y"[CHARACTER,1]
4 - "DEMO1".ROWID[ROWID,10], "N"[NUMBER,22]

This cost estimation is fine. The cost of getting all rows by index access is higher than with a full table scan, but the optimizer knows that the actual cost is proportional to the number of rows fetched, and it adjusts the cost accordingly. This is fine here because the VIEW has only non-blocking operations. The problem in the first plan, without the hint, was that the same arithmetic was done without realizing that the SORT ORDER BY is a blocking operation, not a permanent sorted structure, and must be completed before the first row can be returned.

In this example, as in the real case I encountered, the difference in cost is very small (7 versus 9 here), which means that the plan can be fine one day and switch to the bad one (full scan, call the function for all rows, sort them) after a small change in statistics. Note that the plan was ok in 10g, but that may simply be related to the PGA settings and a different estimation of the sort cost.

 


A response to: What makes a community?

Thu, 2017-11-16 03:39

A recent tweet of mine resulted in Martin Widlake writing a really great blog post about what makes a community. Please read it before you continue reading this one. There was another response from Stefan Koehler which is worth mentioning as well.

Both Martin and Stefan speak about Oracle communities because this is where they are involved. At the beginning of Martin’s post he writes: “Daniel was not specific about if this was a work/user group community or a wider consideration of society, …” and this was intentional. I don’t think it really matters much if we speak about a community around a product, a community that just comes together for drinking beer and discussing the latest football results, or even a community as a family. At least the German translation “Gemeinschaft” applies to a family as well. This can be very few people (mother, father, kids) or more if we include brothers, sisters, grandmas and so on. But still the same rules that Martin outlines in his blog post apply: you’ll always have people driving the community, such as organizing dinners (when we speak about families), organizing conferences (when we speak about technical communities), organizing parties (when we talk about fun communities), or organizing whatever it is that the specific community is about. Then you’ll always have the people willing to help (the people Martin describes as those who share and/or talk), and you’ll always have the people that consume/attend, which is good as well, because without them there would be no one to share with and nothing to organize for.

We at dbi services are a community as well. As we work with various products, the community is not focused on a specific product (well, it is in the area of a specific product, of course) but rather on building an environment we like to work in. The community here is tied to technology but detached from a single product. We share the same methodologies and the same passion, and we have fun attending great parties that are organized mostly by the non-technical people in our company. In this case you could say: the non-technical people are the drivers of the community of the company, even if the company is very technical by nature. And here we have the same situation again: some organize, some attend/consume and some share, but all are required (as Martin outlined in his post as well).

Of course I have to say something about the PostgreSQL community: because PostgreSQL is a real community project, the community around it is much more important than with other technical communities. I do not say that you do not need a community for vendor-controlled products, because when the vendor fails to build a community around its product, the product will fail as well. What I am saying is that the PostgreSQL community goes deeper, as the complete product is driven by the community. Of course there are companies that hire people working for the community, but they are not able to influence the direction if there is no agreement about that direction in the community. Sometimes this can make it very hard to progress and a lot of topics need to be discussed, but in the end I believe it is better to have something the majority agrees on. In the PostgreSQL community I think there are several drivers: for sure all the developers are drivers, and the people who take care of all the infrastructure (mailing lists, commitfests, …) are drivers as well. Basically everybody you can see on the mailing lists answering questions is a driver, because they keep the community active. Then we have all the people you see in other communities as well: those who share and those who consume/attend. I think you get the point: an open source community is by its nature far more active than what you usually see for non-open-source communities, for one reason: it already starts with the developers and not with a community around a final product. You can be part of such a community from the very beginning, which means writing new features and patches.

Coming back to the original question: what makes a community? Besides what Martin outlined, there are several other key points:

  • The direction of the community (no matter if technical or not) must be such that people want to be part of it
  • When we speak about a community around a product: you must identify yourself with the product. When the product goes in a direction you cannot support, for whatever reason, you’ll leave sooner or later. The more people leave, the weaker the community
  • It must be easy to participate and to get help
  • A lot of people are willing to spend (free) time to do stuff for the community
  • There must be a culture which respects you and everybody else
  • Maybe most important: a common goal, and people that are able and willing to work together, even if this sometimes requires a lot of discussions

When you have all of these, the drivers, the people who share, and those that attend will come anyway, I believe.

 


Can I do it with PostgreSQL? – 18 – Instead of triggers on views

Wed, 2017-11-15 09:18

It has been quite a while since the last post in this series but today comes the next one. Being at a customer this morning, this question popped up: can we have INSTEAD OF triggers on a view in PostgreSQL as well? I couldn’t answer immediately (although I was quite sure you can), so here is the test. I took an example for Oracle from here and re-wrote it in PostgreSQL syntax.

I took the same tables and adjusted the data types:

CREATE TABLE CUSTOMER_DETAILS ( CUSTOMER_ID INT PRIMARY KEY
                              , CUSTOMER_NAME VARCHAR(20)
                              , COUNTRY VARCHAR(20)
                              );
CREATE TABLE PROJECTS_DETAILS ( PROJECT_ID INT PRIMARY KEY
                              , PROJECT_NAME VARCHAR(30)
                              , PROJECT_START_DATE DATE
                              , CUSTOMER_ID INT REFERENCES CUSTOMER_DETAILS(CUSTOMER_ID)
                              );

The same view definition:

CREATE OR REPLACE VIEW customer_projects_view AS
   SELECT cust.customer_id, cust.customer_name, cust.country,
          projectdtls.project_id, projectdtls.project_name, 
          projectdtls.project_start_Date
   FROM customer_details cust, projects_details projectdtls
   WHERE cust.customer_id = projectdtls.customer_id;

Try to insert:

postgres=# INSERT INTO customer_projects_view VALUES (1,'XYZ Enterprise','Japan',101,'Library management',now());
ERROR:  cannot insert into view "customer_projects_view"
DETAIL:  Views that do not select from a single table or view are not automatically updatable.
HINT:  To enable inserting into the view, provide an INSTEAD OF INSERT trigger or an unconditional ON INSERT DO INSTEAD rule.
Time: 2.135 ms

… and the answer is already in the error message. So obviously we should be able to do that. In PostgreSQL you need a trigger function:

CREATE OR REPLACE FUNCTION cust_proj_view_insert_proc() RETURNS trigger AS $$
BEGIN
        
   INSERT INTO customer_details (customer_id,customer_name,country)
          VALUES (NEW.customer_id, NEW.customer_name, NEW.country);

   INSERT INTO projects_details (project_id, project_name, project_start_Date, customer_id)
   VALUES (
     NEW.project_id,
     NEW.project_name,
     NEW.project_start_Date,
     NEW.customer_id);

   RETURN NEW;
     EXCEPTION WHEN unique_violation THEN
       RAISE EXCEPTION 'Duplicate customer or project id';
END;
$$ LANGUAGE plpgsql;

Then we need a trigger calling this function:

create trigger cust_proj_view_insert_trg 
    instead of insert on customer_projects_view for each row EXECUTE procedure cust_proj_view_insert_proc();

Try the insert again:

INSERT INTO customer_projects_view VALUES (1,'XYZ Enterprise','Japan',101,'Library management',now());
INSERT INTO customer_projects_view VALUES (2,'ABC Infotech','India',202,'HR management',now());

… and here we are:

postgres=# select * FROM customer_details;
 customer_id | customer_name  | country 
-------------+----------------+---------
           1 | XYZ Enterprise | Japan
           2 | ABC Infotech   | India

Definitely, you can :)
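
For completeness, here is a hedged sketch (not something I ran at the customer) of how the same pattern extends to DELETE on the view; it only removes the project row and leaves the customer in place, which may or may not be what your application wants:

CREATE OR REPLACE FUNCTION cust_proj_view_delete_proc() RETURNS trigger AS $$
BEGIN
   -- remove only the project referenced by the view row
   DELETE FROM projects_details WHERE project_id = OLD.project_id;
   RETURN OLD;
END;
$$ LANGUAGE plpgsql;

create trigger cust_proj_view_delete_trg
    instead of delete on customer_projects_view for each row EXECUTE procedure cust_proj_view_delete_proc();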

 


Auto pre-warming in EDB Postgres Advanced Server 10

Wed, 2017-11-15 06:28

Some days ago EDB Postgres Advanced Server 10 was released, and one feature which might be handy is auto pre-warming. What it does is save all the buffers (or better, a description of the buffers) which are currently loaded into shared_buffers to disk, and then re-read those buffers automatically when the instance is restarted. Let’s see how it works.

Before getting the feature to work we need to look at two parameters which control the behavior:

  • pg_prewarm.autoprewarm: Enables or disables the feature
  • pg_prewarm.autoprewarm_interval: The interval at which the current state is written to disk, or 0 to write it only once, when the instance shuts down

Another requirement is to load the library when the instance starts:

postgres=# alter system set shared_preload_libraries ='$libdir/dbms_pipe,$libdir/edb_gen,$libdir/dbms_aq,$libdir/pg_prewarm';
ALTER SYSTEM

Once the instance is restarted we can proceed with the configuration:

postgres=# alter system set pg_prewarm.autoprewarm=true;
ALTER SYSTEM
postgres=# alter system set pg_prewarm.autoprewarm_interval='10s';
ALTER SYSTEM

By doing this we told the server to write the current state of the buffers to disk every 10 seconds. You’ll also notice a new background worker process which is responsible for doing the work:

postgres=# \! ps -ef | grep prewarm | egrep -v "ps|grep"
postgres  3682  3675  0 12:05 ?        00:00:00 postgres: bgworker: autoprewarm   
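
A quick, hedged sanity check (the custom GUCs show up in pg_settings once the library is loaded):

postgres=# select name, setting from pg_settings where name like 'pg_prewarm%';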

Let’s load something into shared_buffers:

postgres=# insert into t1 select a, md5(a::varchar) from generate_series(1,1000) a;
INSERT 0 1000
postgres=# select count(*) from t1;
 count 
-------
  1000
(1 row)
postgres=# explain (analyze,buffers) select count(*) from t1;
                                               QUERY PLAN                                                
---------------------------------------------------------------------------------------------------------
 Aggregate  (cost=21.50..21.51 rows=1 width=8) (actual time=0.492..0.492 rows=1 loops=1)
   Buffers: shared hit=9
   ->  Seq Scan on t1  (cost=0.00..19.00 rows=1000 width=0) (actual time=0.019..0.254 rows=1000 loops=1)
         Buffers: shared hit=9
 Planning time: 0.070 ms
 Execution time: 0.538 ms
(6 rows)

The “shared hit” confirms that we read the buffers from shared_buffers and not from the OS/file system cache. Then let’s restart and do the same check again:

postgres@centos7:/u02/pgdata/PG4/ [EDB10] pg_ctl -D . restart -m fast
postgres@centos7:/u02/pgdata/PG4/ [EDB10] psql -X postgres
psql.bin (10.1.5)
Type "help" for help.

postgres=# explain (analyze,buffers) select count(*) from t1;
                                               QUERY PLAN                                                
---------------------------------------------------------------------------------------------------------
 Aggregate  (cost=21.50..21.51 rows=1 width=8) (actual time=0.586..0.586 rows=1 loops=1)
   Buffers: shared hit=9
   ->  Seq Scan on t1  (cost=0.00..19.00 rows=1000 width=0) (actual time=0.024..0.295 rows=1000 loops=1)
         Buffers: shared hit=9
 Planning time: 0.451 ms
 Execution time: 0.766 ms
(6 rows)

postgres=# 

… here we go. How is this information stored? When you take a look at $PGDATA you’ll notice a file with the following format:

postgres@centos7:/u02/pgdata/PG4/ [EDB10] cat $PGDATA/autoprewarm.blocks | tail
<>
0,1664,1262,0,0
15471,1663,1259,0,0
15471,1663,1259,0,1
15471,1663,1259,0,2
15471,1663,1249,0,0
15471,1663,1249,0,1
15471,1663,1249,0,2
15471,1663,1249,0,3
15471,1663,1249,0,4

The first field is the OID of the database:

postgres=# select oid,datname from pg_database where oid=15471;
  oid  | datname  
-------+----------
 15471 | postgres
(1 row)

The second one is the tablespace:

postgres=# select oid,spcname from pg_tablespace where oid=1663;
 oid  |  spcname   
------+------------
 1663 | pg_default
(1 row)

The third one is the table:

postgres=# select oid,relname from pg_class where oid = 16402;
  oid  | relname 
-------+---------
 16402 | t1
(1 row)

postgres=# \! grep 16402 $PGDATA/autoprewarm.blocks
15471,1663,16402,0,0
15471,1663,16402,0,1
15471,1663,16402,0,2
15471,1663,16402,0,3
15471,1663,16402,0,4
15471,1663,16402,0,5
15471,1663,16402,0,6
15471,1663,16402,0,7
15471,1663,16402,0,8
15471,1663,16402,1,0
15471,1663,16402,1,2

The fourth one is the fork/file (0 is the datafile, 1 is the free space map) and the last one is the actual block to load. This is also described in “./contrib/pg_prewarm/autoprewarm.c” in the PostgreSQL source code:

/* Metadata for each block we dump. */
typedef struct BlockInfoRecord
{
        Oid                     database;
        Oid                     tablespace;
        Oid                     filenode;
        ForkNumber      forknum;
        BlockNumber blocknum;
} BlockInfoRecord;

For community PostgreSQL there is the contrib module pg_prewarm which you can use for that; check the documentation for details.
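
As a hedged example for the community module (assuming the extension is available in your installation), a manual pre-warm of the table used above would look like this; pg_prewarm() returns the number of blocks read into shared_buffers:

postgres=# create extension pg_prewarm;
postgres=# select pg_prewarm('t1');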

 


Dynamic Sampling vs. Extended Statistics

Tue, 2017-11-14 10:09

On data warehouse databases, I frequently recommend increasing the level of dynamic sampling because:

  • Queries have complex predicates with AND, OR, IN(), ranges and correlated values for which the optimizer cannot estimate the cardinality properly
  • Queries are long anyway (compared to OLTP) and can afford more parse time to get an optimized execution plan

However, there’s a drawback with this approach, because sometimes dynamic sampling may give a bad estimation and supersede the static statistics, which were better. Here is an example in 12.2.0.1.

I run with the following parameters:

SQL> show parameter adaptive;
NAME TYPE VALUE
--------------------------------- ------- -----
optimizer_adaptive_plans boolean TRUE
optimizer_adaptive_reporting_only boolean FALSE
optimizer_adaptive_statistics boolean FALSE
optimizer_dynamic_sampling integer 4

The Dynamic Sampling level comes from previous version (11g) and the Adaptive Statistics have been disabled because of all the problems seen in 12cR1 with Adaptive Dynamic Sampling bugs.

I have a query with very bad response time for some values, going to nested loops for 50000 rows. The reason is an under-estimate in the following part of the query:

SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 82 | 574 | 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')
 
Note
-----
- dynamic statistics used: dynamic sampling (level=4)

The estimation is 82 rows but there are actually 50000 rows. We can see that dynamic sampling was used. The misestimate is probably caused by a sample that is too small.

Ok, a query with an UPPER() function on the column is not a good idea. Let’s try to gather statistics for the expression:

SQL> exec dbms_stats.gather_table_stats('APP_OWNR','TBL_APPLICATION1_ID',method_opt=>'for all columns size auto for columns (upper(OPRID)) size auto');
PL/SQL procedure successfully completed.
 
SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 82 | 574 | 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')
PLAN_TABLE_OUTPUT
 
Note
-----
- dynamic statistics used: dynamic sampling (level=4)

We have the same misestimate. But the problem is not our statistics on the expression: this query is still doing dynamic sampling.

Here’s the proof that we have good static statistics:

SQL> alter session set optimizer_dynamic_sampling=2;
Session altered.
 
SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 48594 | 332K| 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')

Dynamic Sampling did not occur at level 2. Now the estimation is correct thanks to the extended statistics: I have a top-frequency histogram where the cardinality of a popular value is exact.
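
To verify that the extension and its histogram are really in place, a hedged check against the standard dictionary views (owner and table name as in this example) could look like this:

SQL> select extension_name, extension from dba_stat_extensions where owner='APP_OWNR' and table_name='TBL_APPLICATION1_ID';
SQL> select column_name, histogram, num_distinct from dba_tab_col_statistics where owner='APP_OWNR' and table_name='TBL_APPLICATION1_ID';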

The problem is that dynamic sampling is supposed to add more information to the optimizer, but in this case it replaces the static statistics. At level 4, dynamic sampling is done as soon as there is a complex predicate in the WHERE clause, and the use of the UPPER() function is considered a complex predicate. However, in this case, because I have extended statistics, it should be considered a simple column=value predicate.

Here I’ve set dynamic sampling manually, but this is also what happens when SQL Plan Directives trigger the use of Dynamic Sampling and the good histogram is ignored. This reminds me of a Ludovico Caldara blog post about SPD.

Here, maybe, the solution would be Adaptive Dynamic Sampling which may increase the level of sampling when needed:

SQL> alter session set optimizer_dynamic_sampling=11;
Session altered.
 
SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 37831 | 258K| 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')
 
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

In this case, Adaptive Dynamic Sampling is a good approximation. But it would be better to have a level of dynamic sampling that does not consider a predicate as a complex one when the extended statistics exactly match the predicate. Until there is enough artificial intelligence to cope with this, the best recommendation is to focus on design: in this case, ensuring that we have only uppercase values is the best way to keep queries and estimations simple.
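
A hedged sketch of that design-side fix (not something from the original case, just the idea): enforce uppercase values with a check constraint, so that queries no longer need the UPPER() function and the predicate stays a simple column=value one:

SQL> alter table APP_OWNR.TBL_APPLICATION1_ID add constraint TBL_APPLICATION1_ID_OPRID_UC check (OPRID = upper(OPRID));

With that in place, the application can query OPRID='QWERTZ' directly and the standard column statistics apply.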

 


Documentum xPlore: Tuning of JVM for high throughput and low CPU usage

Sun, 2017-11-12 02:05

Sometimes you have Java processes or even JBoss servers using a lot of CPU. In my example I had an xPlore dsearch server using around 98% of the CPU. Using jconsole and jvisualvm I figured out that the garbage collector was using 50 to 60% of the CPU time.
This was because the server was indexing and accessing the internal DB very often. Hence a lot of objects were created, and as the JVM was not correctly sized, all objects went to the tenured (old) space, filling up the heap. The garbage collector had to go through the whole heap and perform a lot of Full GCs. I got to a point where I had a Full GC every 5 seconds that lasted 4 seconds, so I had only 1 second of “real” processing every 5 seconds.

So if you have a process stuck in collecting garbage you can use the following parameters:
USER_MEM_ARGS="-Xms8G -Xmx8G -XX:PermSize=64m -XX:NewSize=2g -XX:MaxNewSize=2g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=95 -XX:+UseCMSInitiatingOccupancyOnly -XX:MaxTenuringThreshold=2 -XX:MaxPermSize=256m -Xss1024k -Xloggc:/pkgs/dmsp/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/logs/PrimaryJVM.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution"

-Xms8G: Initial heap size
-Xmx8G: Maximum heap size
-XX:PermSize=64m: Initial permanent space size
-XX:MaxPermSize=256m: Maximum permanent space size
-XX:NewSize=2g: Young generation size, here 1/4 of the total heap
-XX:MaxNewSize=2g: Maximum young generation size
-XX:+UseParNewGC: Parallel copying collector for the young generation; parallelizes the collection with multiple threads, better performance on multi-processor architectures
-XX:+UseConcMarkSweepGC: Use the concurrent mark-sweep (CMS) collector for the old generation
-XX:+CMSParallelRemarkEnabled: Parallelizes the remark phase; goes with the CMS option and reduces the remark pauses
-XX:+ParallelRefProcEnabled: Parallelizes the processing of weak references (cache)
-XX:+CMSClassUnloadingEnabled: Enables class unloading for CMS
-XX:CMSInitiatingOccupancyFraction=95: Occupancy percentage of the old generation after which a CMS collection is triggered; a higher value means fewer but longer collections
-XX:+UseCMSInitiatingOccupancyOnly: Prevents the JVM from using heuristic GC triggering rules; only the percentage above is used as the threshold
-XX:MaxTenuringThreshold=2: Maximum value for the tenuring threshold; the default value is 15
-Xss1024k: Thread stack size

These arguments will help you see the behaviour of the GC process:
-Xloggc:/path/to/log/JVM.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution

 

After tuning the JVM I went back to a normal behaviour:

(screenshot: jvm-all-green)

 


Documentum xPlore: Several ways to start an Index Agent

Sun, 2017-11-12 01:30

There are several ways to start index agents, depending on how you need to start them.

1. Use the Documentum job dm_FTIndexAgentBoot. If you set start_index_agents=T in the server.ini, the job will be called when the docbases are started.

2. Use the web interface:
Login to http://server:9200/IndexAgent/ with docbase credentials and select “Start in normal mode”

3. Login to DA and go to Indexing Management -> Index Agents and Index Servers
Right click on an index agent and select “Start Agent”

4. Use IAPI command to start it:

select index_name from dm_fulltext_index;
DOCBASE_ftindex_00
select object_name from dm_ftindex_agent_config;
server_9200_IndexAgent

apply,c,,FTINDEX_AGENT_ADMIN,NAME,S,DOCBASE_ftindex_00,AGENT_INSTANCE_NAME,S,server_9200_IndexAgent,ACTION,S,start
next,c,q0
dump,c,q0

5. Use Java:
java -cp $DM_HOME/lib/server-impl.jar:$DOCUMENTUM/dfc/dfc.jar com.documentum.server.impl.utils.IndexAgentCtrl -docbase_name <docbasename> -user_name <username> -action start

 


Documentum – Unable to stop an IDS configured in SSL?

Fri, 2017-11-10 15:11

When working with the IDS, you might face some interesting behaviors, as mentioned in the last blog I wrote for example. This one will focus on the SSL part of the IDS on the target side. In this blog, I will start by showing the content of our start/stop scripts and how they work in non-SSL mode, then switch to SSL and try again. For this blog, I quickly installed a test IDS 7.3 using the default non-SSL port (2788).

 

So to start and stop the IDS on the target side, we are using custom scripts/services that do not contain any port information in their names, because the port might change, or simply to be able to start several agents at the same time, and so on. An example of start/stop scripts that can be used for the IDS would be:

[ids@target_server_01 ~]$ cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH

export TZ=UTC
export IDS_HOME=/app/ids/target
export JAVA_HOME=$IDS_HOME/product/jre/linux
export PATH=$JAVA_HOME/bin:$PATH
[ids@target_server_01 ~]$
[ids@target_server_01 ~]$ cd $IDS_HOME/admin
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ cat startIDSTargetCustom.sh
#! /bin/sh
. ~/.bash_profile
echo "Starting the Interactive Delivery Services Target..."
NB_PID=`pgrep -f "com.documentum.webcache.transfer.MigAgent" | wc -l`
if [[ $NB_PID != 0 ]]; then
  echo "The Interactive Delivery Services Target is already running."
else
  if [[ -f $IDS_HOME/admin/nohup-IDSTarget.out ]]; then
    mv $IDS_HOME/admin/nohup-IDSTarget.out $IDS_HOME/admin/nohup-IDSTarget.out_`date +%F_%H%M%S`.out
  fi
  nohup $IDS_HOME/admin/dm_start_ids >> $IDS_HOME/admin/nohup-IDSTarget.out 2>&1 &
  echo "The Interactive Delivery Services Target has been started... Sleeping for 30 seconds."
  sleep 30
fi
# End of File
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ cat dm_start_ids
#! /bin/sh
. ~/.bash_profile
$JAVA_HOME/bin/java -Xms6g -Xmx6g -Dfile.encoding=UTF-8 -Djava.security.egd=file:///dev/./urandom -cp "$JAVA_HOME/lib/ext/*" com.documentum.webcache.transfer.MigAgent $IDS_HOME/admin/config/2788/agent.ini &
# End of File
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ cat dm_stop_ids
#!/bin/sh
. ~/.bash_profile
$JAVA_HOME/bin/java -Djava.security.egd=file:///dev/./urandom -cp "$JAVA_HOME/lib/ext/*" com.documentum.webcache.transfer.Shutdown $IDS_HOME/admin/config/2788/agent.ini $1 $2
# End of File
[ids@target_server_01 admin]$

 

So when the IDS is configured in non-SSL, this is the configuration of the agent.ini (the default one) and the behavior when you try to start/stop it:

[ids@target_server_01 admin]$ cat config/2788/agent.ini
[conn]
transfer_directory=/data/IDS
secure_connection=raw
http_port=2788
https_ca_cert=$IDS_HOME/admin/keys/ca-cert.der
https_server_cert=$IDS_HOME/admin/keys/server-cert.der
https_server_key=$IDS_HOME/admin/keys/server-key.der
check_pass=$IDS_HOME/product/tools/dm_check_password
log_file=$IDS_HOME/admin/log/2788.log
target_database_connection=jdbc:oracle:thin:@(description=(address=(host=database_server_01)(protocol=tcp)(port=1521))(connect_data=(sid=IDSSID)))
database_user=IDS_USER
database_user_pass=SCS_ENCR_TEXT/A1G8H1FBH12ZECB2P917GEN31ZCBGGC2N2HRC2CNZY
JDBC_DRIVER=oracle.jdbc.driver.OracleDriver
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ grep -E "^secure_connection|^http.*port" config/2788/agent.ini
secure_connection=raw
http_port=2788
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ./startIDSTargetCustom.sh
Starting the Interactive Delivery Services Target...
The Interactive Delivery Services Target has been started... Sleeping for 30 seconds.
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ./dm_stop_ids
Nov 11 08:38:42.714:T:main: INFO:       Setting socket TCP no delay to true.
Beginning shutdown...
Shutdown completed
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ cat nohup-IDSTarget.out
Nov 11 08:33:19.493:T:main: INFO:       Begin logging on: $IDS_HOME/admin/log/2788.log
Nov 11 08:33:19.493:T:main: INFO:       MigAgent.java Starting...
Nov 11 08:33:19.496:T:main: INFO:       Interactive Delivery Services - Version 7.3.0010.0003
Nov 11 08:33:19.496:T:main: INFO:       Total process heap space (bytes) : 5368709120
Nov 11 08:33:19.886:T:main: INFO:       HTTP Port:      2788
Nov 11 08:38:42.700:T:Thread-0: INFO:   Setting socket TCP no delay to true.
Nov 11 08:38:42.710:T:Thread-1: INFO:   --------------------------
Nov 11 08:38:42.717:T:Thread-1: INFO:   Checking for valid SHUTDOWN request
Nov 11 08:38:42.718:T:Thread-1: INFO:   Valid SHUTDOWN Request
Nov 11 08:38:42.718:T:Thread-1: INFO:   Shutdown command received, beginning shutdown...
Nov 11 08:38:42.718:T:Thread-1: INFO:   Shutdown complete.
[ids@target_server_01 admin]$

 

So this is working as expected for both the start and stop commands. I didn’t execute an end-to-end test or an export, but this is also working properly. Switching the configuration to SSL on the IDS Target can be done pretty easily. I will let you check the documentation on how to regenerate the SSL certificate if you want to (it is recommended), but that’s basically done using the script $IDS_HOME/product/bin/GenCerts. So let’s switch our IDS to SSL and then try to stop/start it again:

[ids@target_server_01 admin]$ grep -E "^secure_connection|^http.*port" config/2788/agent.ini
secure_connection=raw
http_port=2788
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ sed -i 's,^secure_connection=.*,secure_connection=ssl,' config/2788/agent.ini
[ids@target_server_01 admin]$ sed -i 's,^http.*port=.*,https_port=2788,' config/2788/agent.ini
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ grep -E "^secure_connection|^http.*port" config/2788/agent.ini
secure_connection=ssl
https_port=2788
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ./startIDSTargetCustom.sh
Starting the Interactive Delivery Services Target...
The Interactive Delivery Services Target has been started... Sleeping for 30 seconds.
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ./dm_stop_ids
Connecting to secure WebCache at localhost:2788...
Nov 11 08:45:23.587:T:main: INFO:       Server Certificate : $IDS_HOME/admin/keys/server-cert.der
Nov 11 08:45:23.587:T:main: INFO:       CA Certificate : $IDS_HOME/admin/keys/ca-cert.der
Nov 11 08:45:23.587:T:main: INFO:       Server Key : $IDS_HOME/admin/keys/server-key.der
Connected!
Certificates are valid.
Nov 11 08:45:23.747:T:main: INFO:       Setting socket TCP no delay to true.
com.rsa.ssl.SSLException: An IOException occured while collecting the handshake digests: / by zero
        at com.rsa.ssl.tls1.TLSV1ClientProtocol.stateMachine(TLSV1ClientProtocol.java:283)
        at com.rsa.ssl.tls1.TLSV1ClientProtocol.init(TLSV1ClientProtocol.java:163)
        at com.rsa.ssl.tls1.TLSV1ClientProtocol.<init>(TLSV1ClientProtocol.java:127)
        at com.rsa.ssl.common.TLSV1Loader.startTLS1ClientProtocol(TLSV1Loader.java:336)
        at com.rsa.ssl.common.ClientProtocol.sendHello(ClientProtocol.java:243)
        at com.rsa.ssl.common.ClientProtocol.startHandshake(ClientProtocol.java:379)
        at com.rsa.ssl.SSLSocket.getOutputStream(SSLSocket.java:229)
        at com.documentum.webcache.transfer.Client.<init>(Unknown Source)
        at com.documentum.webcache.transfer.Shutdown.<init>(Unknown Source)
        at com.documentum.webcache.transfer.Shutdown.main(Unknown Source)
<B> <FONT color="red">
Nov 11 08:45:23.960:T:main: ERROR:      Client(): creating data streamscom.rsa.ssl.SSLException: An IOException occured while collecting the handshake digests: / by zero
</FONT> </B>
Error creating shutdown object.Error creating data streams
An IOException occured while collecting the handshake digests: / by zero
Error creating data streams
An IOException occured while collecting the handshake digests: / by zero
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ cat nohup-IDSTarget.out
Nov 11 08:40:37.074:T:main: INFO:       Begin logging on: $IDS_HOME/admin/log/2788.log
Nov 11 08:40:37.075:T:main: INFO:       MigAgent.java Starting...
Nov 11 08:40:37.077:T:main: INFO:       Interactive Delivery Services - Version 7.3.0010.0003
Nov 11 08:40:37.077:T:main: INFO:       Total process heap space (bytes) : 5368709120
Nov 11 08:40:37.426:T:main: INFO:       HTTPS Port:     2788
Nov 11 08:40:37.426:T:main: INFO:       Server Certificate : $IDS_HOME/admin/keys/server-cert.der
Nov 11 08:40:37.426:T:main: INFO:       CA Certificate : $IDS_HOME/admin/keys/ca-cert.der
Nov 11 08:40:37.426:T:main: INFO:       Server Key : $IDS_HOME/admin/keys/server-key.der
Nov 11 08:45:23.744:T:Thread-0: INFO:   Setting socket TCP no delay to true.
<B> <FONT color="red">
Nov 11 08:45:23.976:T:Thread-0: ERROR:  Exception: An IOException occured while reading the finished message: read() error
</FONT> </B>
<B> <FONT color="red">
Nov 11 08:45:23.977:T:Thread-0: ERROR:  Exception: Error Spawning new requestor
</FONT> </B>
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ps -ef | grep MigAgent | grep -v grep
ids   15398     1  0 08:40 pts/2    00:00:04 $JAVA_HOME/bin/java -Xms6g -Xmx6g -Dfile.encoding=UTF-8 -Djava.security.egd=file:///dev/./urandom -cp $JAVA_HOME/lib/ext/* com.documentum.webcache.transfer.MigAgent $IDS_HOME/admin/config/2788/agent.ini
[ids@target_server_01 admin]$

 

As you can see above, the stop command isn’t working at all. It’s not doing anything, since the process is still up & running. When you try to stop the IDS it fails, apparently because of a division by zero. You can check the different configuration files, you can check that the IDS is working properly end-to-end, you can do a lot of things (like I did), but you will (likely) not find any solution. This is actually a known issue on the OpenText side and it is documented as part of SCS-3683. So how can you stop the IDS Target process then? Well, the only way is to kill it… So an updated stop script that works for both non-SSL and SSL IDS agents would be something like this:

[ids@target_server_01 admin]$ cat dm_stop_ids
#!/bin/sh
. ~/.bash_profile
AGENT_PORT="2788"
CONN_MODE=`grep "^secure_connection" $IDS_HOME/admin/config/${AGENT_PORT}/agent.ini | sed 's,^secure_connection[[:space:]]*=[[:space:]]*,,'`
if [[ "$CONN_MODE" == "ssl" ]]; then
  IDS_PID=`pgrep -f "com.documentum.webcache.transfer.MigAgent.*${AGENT_PORT}"`
  if [[ $IDS_PID != '' ]]; then
    kill $IDS_PID
    sleep 5
    IDS_PID=`pgrep -f "com.documentum.webcache.transfer.MigAgent.*${AGENT_PORT}"`
    if [[ $IDS_PID != '' ]]; then
      kill -9 $IDS_PID
    fi
    echo "The Interactive Delivery Services Target has been stopped..."
  else
    echo "The Interactive Delivery Services Target is already stopped."
  fi
else
  $JAVA_HOME/bin/java -Djava.security.egd=file:///dev/./urandom -cp "$JAVA_HOME/lib/ext/*" com.documentum.webcache.transfer.Shutdown $IDS_HOME/admin/config/${AGENT_PORT}/agent.ini $1 $2
fi
# End of File
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ./dm_stop_ids
The Interactive Delivery Services Target has been stopped...
[ids@target_server_01 admin]$
[ids@target_server_01 admin]$ ps -ef | grep MigAgent | grep -v grep
[ids@target_server_01 admin]$

 

It annoys me to kill a process to stop it but since there is, according to OTX, no other solution…

 

 


Documentum – Unable to configure IDS 7.3 for a docbase

Fri, 2017-11-10 13:30

In this blog, I will talk about an issue with the IDS 7.3 which is installed on a Content Server 7.3. The IDS is the Interactive Delivery Services. It is a product provided by OpenText that needs to be installed on a Content Server (for the “Source” part) and on a Target Server (for the “Target” part). The IDS can be used to publish content from a Documentum repository to a target machine for use in another application or something like that.

 

When installing/configuring an IDS, there are several things to do:

  1. Install the Source + patch if needed
  2. Configure a docbase on the Source (basically install DARs)
  3. Install the Target + patch if needed
  4. Configure a docbase on the Target (basically setup of an agent which will use a DB + file system for the exported documents)

 

In this blog, I will only talk about an issue which will occur if you try to execute the step 2 with an IDS 7.3 on a Content Server 7.3. Once the IDS is installed and patched (if needed), you can configure a docbase using the config.bin file:

[dmadmin@content_server_01 ~]$ cd $DM_HOME/webcache/install/
[dmadmin@content_server_01 install]$ 
[dmadmin@content_server_01 install]$ ./config.bin

 

On the IDS Source Configurator, you just have to select the docbase you want to configure and it will start the configuration of the docbase. As mentioned above, the main thing it does is install the DARs that are placed under “$DM_HOME/webcache/install/SCSDar/”. For that purpose, it uses the Headless Composer. When you do that, an error message will be printed saying the following:

DiWAWebcsConfigureDocbase failed! - Could not deploy $DM_HOME/webcache/install/SCSDar/SCSDocApp.dar.
 Please check dar installation log file $DM_HOME/webcache/install/SCSDar/DocBase1_SCSDocApp_dar.log for the installation exceptions.
Errors occured while invoking Headless Composer.; Runtime execution failed with child process "$DOCUMENTUM_SHARED/java/1.7.0_72/jre/bin/java" exit code of 13; For more detailed information, see the error log: $DM_HOME/webcache/install/setupError.log

 

So what’s the issue? To understand how twisted/evil the IDS is, let’s first talk about the CS 7.3. When you install a Content Server 7.3, it will install the binaries, the JMS, the Headless Composer and a single Java which is: $DOCUMENTUM_SHARED/java64/JAVA_LINK. Well this is actually a symbolic link to the real Java version installed by a basic CS 7.3: $DOCUMENTUM_SHARED/java64/1.8.0_77. So for a Content Server 7.3, it’s simple, everything is using Java 8u77.

[dmadmin@content_server_01 install]$ echo $JAVA_HOME
$DOCUMENTUM_SHARED/java64/JAVA_LINK
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ ls -l $JAVA_HOME
lrwxrwxrwx. 1 dmadmin dmadmin 39 Oct  5 08:07 $DOCUMENTUM_SHARED/java64/JAVA_LINK -> $DOCUMENTUM_SHARED/java64/1.8.0_77
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ $JAVA_HOME/bin/java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)
[dmadmin@content_server_01 install]$

 

Now why am I saying that the IDS is twisted? Well, as you can see in the error message above, the Java path mentioned is "$DOCUMENTUM_SHARED/java/1.7.0_72". Where is this coming from? One might think at first look that this is the Java from a CS 7.2… And indeed it is the same version, since a CS 7.2 was using Java 7u72, but it’s not even the same path… A CS 7.2 was using "$DOCUMENTUM_SHARED/java64/1.7.0_72" (notice the "java64" for 64-bit OS).

[dmadmin@content_server_01 install]$ $DOCUMENTUM_SHARED/java/1.7.0_72/bin/java -version
java version "1.7.0_72"
Java(TM) SE Runtime Environment (build 1.7.0_72-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.72-b04, mixed mode)

 

But that’s not all! If you take a look at the start script of the IDS 7.3 (below, $IDS_HOME = $DOCUMENTUM_SHARED/wildfly9.0.1), you will see yet another Java!

[dmadmin@content_server_01 install]$ grep JAVA_HOME $IDS_HOME/server/startWEBCACHE.sh
JAVA_HOME="$DM_HOME/webcache/jre/linux"
export JAVA_HOME
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ $DM_HOME/webcache/jre/linux/bin/java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
[dmadmin@content_server_01 install]$

 

So when you install an IDS 7.3 on a CS 7.3 (that is already using Java 8u77), it will run using a Java 8u91 and it will also install a Java 7u72… Why would you make it easy when you can make it complicated! Since the IDS will only run using the Java 8u91, what’s the purpose of the Java 7u72 then? Well the only purpose I could find is precisely linked to this issue: to install DARs.

 

By default the Headless Composer – since it is installed with the CS 7.3 – will use Java 8u77 ($DOCUMENTUM_SHARED/java64/JAVA_LINK), but the IDS isn’t able to install the DARs with Java 8. I don’t know where the Java version to be used by the IDS during the installation is defined, but it might very well be hardcoded, because the installer uses the file /tmp/dctm_dmadmin/install_xxxxx/istempxxxxxyyyyyyyy/bundledLinux.jvm to know which Java to use and it’s always Java 7u72 (the bundled one). You can try to update the file “$DM_HOME/webcache/install/install_info.ini” with a different “BUNDLED_JAVA_HOME” but it won’t change anything.

 

So then how can you control which Java should be used by the Headless Composer? That’s done in the java.ini file!

[dmadmin@content_server_01 install]$ grep -E "^java_library_path|^java_classpath" $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini
java_library_path = $DOCUMENTUM_SHARED/java64/JAVA_LINK/jre/lib/amd64/libjava.so
java_classpath = $DM_HOME/dctm-server.jar:$DOCUMENTUM_SHARED/dctm.jar:$DOCUMENTUM_SHARED/config:$DOCUMENTUM_SHARED/java64/JAVA_LINK/jre/lib
[dmadmin@content_server_01 install]$

 

As you can see above (and I already said it above), the Headless Composer is using the Java that comes with the CS 7.3, so by default Java 8u77 ($DOCUMENTUM_SHARED/java64/JAVA_LINK). If you try to change the Java used in the java.ini file from Java 8u77 (CS 7.3) to Java 8u91 (IDS 7.3), it will still not work. What you need to do is change the Java to the value that the IDS expects, and you can do it like this:

[dmadmin@content_server_01 install]$ grep -E "^java_library_path|^java_classpath" $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini
java_library_path = $DOCUMENTUM_SHARED/java64/JAVA_LINK/jre/lib/amd64/libjava.so
java_classpath = $DM_HOME/dctm-server.jar:$DOCUMENTUM_SHARED/dctm.jar:$DOCUMENTUM_SHARED/config:$DOCUMENTUM_SHARED/java64/JAVA_LINK/jre/lib
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ cp $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini_orig_before_IDS
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ export IDS_JAVA="$DOCUMENTUM_SHARED/java/1.7.0_72"
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ sed -i "s,$JAVA_HOME,$IDS_JAVA," $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini
[dmadmin@content_server_01 install]$
[dmadmin@content_server_01 install]$ grep -E "^java_library_path|^java_classpath" $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini
java_library_path = $DOCUMENTUM_SHARED/java/1.7.0_72/jre/lib/amd64/libjava.so
java_classpath = $DM_HOME/dctm-server.jar:$DOCUMENTUM_SHARED/dctm.jar:$DOCUMENTUM_SHARED/config:$DOCUMENTUM_SHARED/java/1.7.0_72/jre/lib
[dmadmin@content_server_01 install]$

 

After doing that, you can try to execute the IDS Source Configurator again and this time, it will be able to install the IDS DARs into the target docbase. Don’t forget to restore the java.ini to its initial value afterwards…
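
For example, assuming you kept the backup taken above, restoring the original file is just a copy back:

[dmadmin@content_server_01 install]$ cp $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini_orig_before_IDS $DM_HOME/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/dmbasic/linux/java.ini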

 

 

Cet article Documentum – Unable to configure IDS 7.3 for a docbase est apparu en premier sur Blog dbi services.

Documentum – ActiveX error 12019 in D2-Config during export

Fri, 2017-11-10 11:50

Some months ago at a customer, we started the rollout of some security baselines on a new application (not yet in production). One of the changes was to enforce the use of TLSv1.2 on all our Documentum Clients like D2/D2-Config (4.5, 4.6, 4.7), DA (7.2, 7.3), and so on… TLSv1.2 was already enabled before that, but there was also a fallback to TLSv1.1 or 1.0. For security reasons, at some point you will need to ensure that only TLSv1.2 can be used, because the previous versions contain vulnerabilities… Obviously, there was some testing and validation to ensure that the Documentum DFC Clients could handle TLSv1.2 and that everything was working properly.

 

A few days later, we started to receive a few tickets from developers who couldn't export the configuration from D2-Config, getting an ActiveX error 12019:

ActiveX_12019

 

The strange thing is that this looked like a random issue because on some workstations, it was working properly and on a few others it wasn’t. We took for example two Windows 7 workstations with the same OS and patch level, with the same ActiveX, with the same Browser (IE11 with TLSv1.0, 1.1 and 1.2 enabled) and the issue could only be seen on one of the two workstations.

After more tests, it appeared that the issue could only be reproduced when D2-Config was using TLSv1.2 and the client workstation was running Windows 7 or Windows 8 (but not on all such workstations…). On W8.1 or W10, it was always working.

 

So looking into this, we found on the Windows support site a registry key that could help. As described in this webpage, when an application specifies the "WINHTTP_OPTION_SECURE_PROTOCOLS", the OS will check for a value in the registry and use it if present. If the registry entry isn't present, the OS default value is used instead: for Windows 7 and 8, only SSLv3 and TLSv1.0 are enabled, while for Windows 8.1 and 10, SSLv3, TLSv1.0, TLSv1.1 and TLSv1.2 are enabled.

 

The registry key to edit can be found here:

  • 32-bit OS: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
  • 64-bit OS: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp

 

At this location, you will probably not have many values, so the one to add to control this behavior is "DefaultSecureProtocols" (DWORD). As mentioned on the Microsoft website, the possible values are as follows:

  • 0x00000008: Enable SSL 2.0 by default
  • 0x00000020: Enable SSL 3.0 by default
  • 0x00000080: Enable TLS 1.0 by default
  • 0x00000200: Enable TLS 1.1 by default
  • 0x00000800: Enable TLS 1.2 by default

With these values in mind, you can do some simple additions (in hexadecimal of course) to enable several protocols. So for example:

  • Enabling TLSv1.1 + TLSv1.2 => 0x00000200 + 0x00000800 = 0x00000A00
  • Enabling TLSv1.0 + TLSv1.1 + TLSv1.2 => 0x00000080 + 0x00000200 + 0x00000800 = 0x00000A80

 

In our case, IE11 supports TLSv1.0 onwards, and therefore we enabled the same for the "DefaultSecureProtocols" registry value (i.e. 0x00000A80), to avoid possible issues with other/older applications. Since we restrict our Documentum DFC Clients to TLSv1.2, the end-user workstations will not have the choice to use weaker protocols for them, so we are on the safe side for our applications. After doing that, all Windows 7 workstations (without exception) were able to export the D2-Config configuration without issue.
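
If you need to script this change, a possible way to do it (just a sketch to adapt and validate in your environment; here for the 64-bit OS path listed above, with 0x00000A80 = TLSv1.0 + TLSv1.1 + TLSv1.2) is a simple reg add from an elevated command prompt:

reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp" /v DefaultSecureProtocols /t REG_DWORD /d 0x00000A80 /f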

 

Note: All W7 workstations were part of the same domain with the same setup, the same GPO and without admin rights for the end-users, so this registry value wasn't set up at all… Yet the issue was only on some Windows 7 workstations and it is still unclear why… There must be a difference somewhere but we haven't found it yet. That's why this registry value will only be packaged as a fix for the workstations where the issue is present, and only until everybody is moved to Windows 8.1 or later.

 

Cet article Documentum – ActiveX error 12019 in D2-Config during export est apparu en premier sur Blog dbi services.

Displaying the contents of a PostgreSQL data file with pg_filedump

Wed, 2017-11-08 04:34

Did you ever wonder what exactly is in a PostgreSQL data file? Usually you don't care, I agree. But there might be situations where knowing how to look inside can be a great help. Maybe your file is corrupted and you want to recover as much data as possible? Maybe you just want to do some research. There is a utility called pg_filedump which makes this pretty easy. Let's go …

Before you try to install pg_filedump you'll need to make sure that all the header files are there in your PostgreSQL installation. Once you have that, the installation is as simple as:

postgres@pgbox:/home/postgres/ [PG10] tar -axf pg_filedump-REL_10_0-c0e4028.tar.gz 
postgres@pgbox:/home/postgres/ [PG10] cd pg_filedump-REL_10_0-c0e4028
postgres@pgbox:/home/postgres/pg_filedump-REL_10_0-c0e4028/ [PG10] make
postgres@pgbox:/home/postgres/pg_filedump-REL_10_0-c0e4028/ [PG10] make install

If everything went fine the utility should be there:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -h

Version 10.0 (for PostgreSQL 10.x)
Copyright (c) 2002-2010 Red Hat, Inc.
Copyright (c) 2011-2017, PostgreSQL Global Development Group

Usage: pg_filedump [-abcdfhikxy] [-R startblock [endblock]] [-D attrlist] [-S blocksize] [-s segsize] [-n segnumber] file

Display formatted contents of a PostgreSQL heap/index/control file
Defaults are: relative addressing, range of the entire file, block
               size as listed on block 0 in the file

The following options are valid for heap and index files:
  -a  Display absolute addresses when formatting (Block header
      information is always block relative)
  -b  Display binary block images within a range (Option will turn
      off all formatting options)
  -d  Display formatted block content dump (Option will turn off
      all other formatting options)
  -D  Decode tuples using given comma separated list of types
      Supported types:
        bigint bigserial bool char charN date float float4 float8 int
        json macaddr name oid real serial smallint smallserial text
        time timestamp timetz uuid varchar varcharN xid xml
      ~ ignores all attributes left in a tuple
  -f  Display formatted block content dump along with interpretation
  -h  Display this information
  -i  Display interpreted item details
  -k  Verify block checksums
  -R  Display specific block ranges within the file (Blocks are
      indexed from 0)
        [startblock]: block to start at
        [endblock]: block to end at
      A startblock without an endblock will format the single block
  -s  Force segment size to [segsize]
  -n  Force segment number to [segnumber]
  -S  Force block size to [blocksize]
  -x  Force interpreted formatting of block items as index items
  -y  Force interpreted formatting of block items as heap items

The following options are valid for control files:
  -c  Interpret the file listed as a control file
  -f  Display formatted content dump along with interpretation
  -S  Force block size to [blocksize]

Report bugs to 

As we want to dump a file we obviously need a table with some data, so:

postgres=# create table t1 ( a int, b varchar(50));
CREATE TABLE
postgres=# insert into t1 (a,b) select a, md5(a::varchar) from generate_series(1,10) a;
INSERT 0 10

Get the name of the file:

postgres=# select * from pg_relation_filenode('t1');
 pg_relation_filenode 
----------------------
                24702
(1 row)

Look it up in PGDATA:

postgres@pgbox:/home/postgres/ [PG10] cd $PGDATA
postgres@pgbox:/u02/pgdata/PG10/ [PG10] find . -name 24702
./base/13212/24702

… and dump it:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: None
*
* Dump created on: Wed Nov  8 10:39:33 2017
*******************************************************************
Error: Unable to read full page header from block 0.
  ===> Read 0 bytes

Hm, nothing in there. Why? The reason is easy: the data is there in PostgreSQL, but it is only WAL-logged at the moment and not yet written to the data file, as no checkpoint happened yet (in this case):

postgres=#  checkpoint;
CHECKPOINT
Time: 100.567 ms

Do it again:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: None
*
* Dump created on: Wed Nov  8 10:40:45 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL


*** End of File Encountered. Last Block Read: 0 ***

Here we go. What can we learn from that output? This is not really human readable, but at least we see that there are ten rows (Items). We can also list the actual contents of the rows:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -f ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -f 
*
* Dump created on: Wed Nov  8 10:41:21 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

  0000: 00000000 482c8b47 00000000 4000801d  ....H,.G....@...
  0010: 00200420 00000000 c09f7a00 809f7a00  . . ......z...z.
  0020: 409f7a00 009f7a00 c09e7a00 809e7a00  @.z...z...z...z.
  0030: 409e7a00 009e7a00 c09d7a00 809d7a00  @.z...z...z...z.

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
  1fc0: 96020000 00000000 00000000 00000000  ................
  1fd0: 01000200 02081800 01000000 43633463  ............Cc4c
  1fe0: 61343233 38613062 39323338 32306463  a4238a0b923820dc
  1ff0: 63353039 61366637 35383439 62        c509a6f75849b   

 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
  1f80: 96020000 00000000 00000000 00000000  ................
  1f90: 02000200 02081800 02000000 43633831  ............Cc81
  1fa0: 65373238 64396434 63326636 33366630  e728d9d4c2f636f0
  1fb0: 36376638 39636331 34383632 63        67f89cc14862c   

 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
  1f40: 96020000 00000000 00000000 00000000  ................
  1f50: 03000200 02081800 03000000 43656363  ............Cecc
  1f60: 62633837 65346235 63653266 65323833  bc87e4b5ce2fe283
  1f70: 30386664 39663261 37626166 33        08fd9f2a7baf3   

 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
  1f00: 96020000 00000000 00000000 00000000  ................
  1f10: 04000200 02081800 04000000 43613837  ............Ca87
  1f20: 66663637 39613266 33653731 64393138  ff679a2f3e71d918
  1f30: 31613637 62373534 32313232 63        1a67b7542122c   

 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
  1ec0: 96020000 00000000 00000000 00000000  ................
  1ed0: 05000200 02081800 05000000 43653464  ............Ce4d
  1ee0: 61336237 66626263 65323334 35643737  a3b7fbbce2345d77
  1ef0: 37326230 36373461 33313864 35        72b0674a318d5   

 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
  1e80: 96020000 00000000 00000000 00000000  ................
  1e90: 06000200 02081800 06000000 43313637  ............C167
  1ea0: 39303931 63356138 38306661 66366662  9091c5a880faf6fb
  1eb0: 35653630 38376562 31623264 63        5e6087eb1b2dc   

 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
  1e40: 96020000 00000000 00000000 00000000  ................
  1e50: 07000200 02081800 07000000 43386631  ............C8f1
  1e60: 34653435 66636565 61313637 61356133  4e45fceea167a5a3
  1e70: 36646564 64346265 61323534 33        6dedd4bea2543   

 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
  1e00: 96020000 00000000 00000000 00000000  ................
  1e10: 08000200 02081800 08000000 43633966  ............Cc9f
  1e20: 30663839 35666239 38616239 31353966  0f895fb98ab9159f
  1e30: 35316664 30323937 65323336 64        51fd0297e236d   

 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
  1dc0: 96020000 00000000 00000000 00000000  ................
  1dd0: 09000200 02081800 09000000 43343563  ............C45c
  1de0: 34386363 65326532 64376662 64656131  48cce2e2d7fbdea1
  1df0: 61666335 31633763 36616432 36        afc51c7c6ad26   

 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
  1d80: 96020000 00000000 00000000 00000000  ................
  1d90: 0a000200 02081800 0a000000 43643364  ............Cd3d
  1da0: 39343436 38303261 34343235 39373535  9446802a44259755
  1db0: 64333865 36643136 33653832 30        d38e6d163e820   



*** End of File Encountered. Last Block Read: 0 ***

But this does not help much either. When you want to see the contents in a human-readable format, use the "-D" switch and provide the list of data types you want to decode:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -D int,varchar ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -D int,varchar 
*
* Dump created on: Wed Nov  8 10:42:58 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 4	a87ff679a2f3e71d9181a67b7542122c
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820

And now we can see it. This is the same data as if you’d do a select on the table:

postgres=# select * from  t1;
 a  |                b                 
----+----------------------------------
  1 | c4ca4238a0b923820dcc509a6f75849b
  2 | c81e728d9d4c2f636f067f89cc14862c
  3 | eccbc87e4b5ce2fe28308fd9f2a7baf3
  4 | a87ff679a2f3e71d9181a67b7542122c
  5 | e4da3b7fbbce2345d7772b0674a318d5
  6 | 1679091c5a880faf6fb5e6087eb1b2dc
  7 | 8f14e45fceea167a5a36dedd4bea2543
  8 | c9f0f895fb98ab9159f51fd0297e236d
  9 | 45c48cce2e2d7fbdea1afc51c7c6ad26
 10 | d3d9446802a44259755d38e6d163e820
(10 rows)

What happens when we do an update?

postgres=# update t1 set b = 'a' where a = 4;
UPDATE 1
postgres=# checkpoint ;
CHECKPOINT

How does it look in the file?

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -D int,varchar ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -D int,varchar 
*
* Dump created on: Wed Nov  8 11:12:35 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      68 (0x0044)
 Block: Size 8192  Version    4            Upper    7520 (0x1d60)
 LSN:  logid      0 recoff 0x478c2998      Special  8192 (0x2000)
 Items:   11                      Free Space: 7452
 Checksum: 0x0000  Prune XID: 0x00000298  Flags: 0x0000 ()
 Length (including item array): 68

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 4	a87ff679a2f3e71d9181a67b7542122c
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:   30  Offset: 7520 (0x1d60)  Flags: NORMAL
COPY: 4	a

*** End of File Encountered. Last Block Read: 0 ***

The a=4 row is still there, but we got a new one (Item 11) which holds our updated version. Remember that it is the job of vacuum to recycle the dead/old rows:

postgres=# vacuum t1;
VACUUM
postgres=# checkpoint ;
CHECKPOINT

Again (just displaying the data here):

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:    0  Offset:   11 (0x000b)  Flags: REDIRECT
 Item   5 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:   30  Offset: 7584 (0x1da0)  Flags: NORMAL
COPY: 4	a

… and "Item 4" is gone (it is now just a redirect to the new row version). The same happens when you delete data:

postgres=# delete from t1 where a = 4;
DELETE 1
postgres=# vacuum t1;
VACUUM
postgres=# checkpoint;
CHECKPOINT

You'll notice that both Items 4 and 11 are now gone (UNUSED):

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:    0  Offset:    0 (0x0000)  Flags: UNUSED
 Item   5 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:    0  Offset:    0 (0x0000)  Flags: UNUSED

So much for the introduction of pg_filedump; more to come in more detail.
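
The switches listed in the help output above can of course be combined. For example (a sketch; -k is only meaningful if data checksums are enabled in your cluster), to verify the checksum of block 0, show the interpreted item details and decode the tuples in one go:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -k -i -D int,varchar -R 0 ./base/13212/24702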

 

Cet article Displaying the contents of a PostgreSQL data file with pg_filedump est apparu en premier sur Blog dbi services.

Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 01)

Wed, 2017-11-08 02:30

After having worked with OVM on various architectures, I can say that it is a good technology to easily build virtualized environments for production applications. Because it is based on XEN and has simple ways to deal with existing storage (FC, ISCSI, NFS, …) and networking solutions (bond, lacp, …), it is a robust and convenient way to virtualize IT infrastructures while keeping "bare-metal" performance.
Besides, it is a hard partitioning technology which is compliant with the Oracle licensing policies for partitioned environments to control CPU counting for licensing.

The aim of this post is to demonstrate how simple it is to build an HA virtualized architecture with the OVM Manager command line tool only (doc link). So we will create 1 VM on each server, including all the OS, network and storage requirements to run Oracle RAC 12.2.

Initial state:

  • 2 physical servers installed with Oracle VM Server 3.4 (namely OVS, installation procedure here ) to host VMs including:
    • 5 NICs on each (no bonding for the example, but recommended for production systems)
      • eth0: administrative network connected to the organization’s administrative network
      • eth1: application network dedicated to application
      • eth2: storage network cabled to the storage for ISCSI LUNs and NFS accessibility
      • eth3: cabled between both OVS Servers for RAC interconnect link #1/2
      • eth4: cabled between both OVS Servers for RAC interconnect link #2/2
  • 1 server with Oracle VM Manager 3.4 already installed (installation procedure here)
    • eth0: administrative network connected to the organization's administrative network
  • 1 storage system (Here we are going to use a ZFS Storage appliance)
  • 1 OVM Template from Oracle Corp. (available here)

Summary


Step 0: Connect to the OVM Manager client

Because the client connects through the SSH protocol (default port number 10000, user admin), connecting to the OVM Manager client can be done from wherever you have network connectivity to the OVM Manager server.
OVMCLI is a separate service from OVM Manager running on the OVM Manager server. Here I check the OVMCLI service status and I connect from within the OVM Manager server.

service ovmcli status

ssh -l admin localhost -p 10000

OVM>                                            # --> prompt for OVM
OVM> ?                                          # --> to show which action can be done
OVM> list ?                                     # --> to show which options are available for command "list"
OVM> set OutputMode={ Verbose | Sparse | Xml }  # --> to make output matching your automation style


Step 1: Discover OVS servers

discoverServer ipAddress=192.168.56.101 password=oracle takeOwnership=Yes
discoverServer ipAddress=192.168.56.102 password=oracle takeOwnership=Yes


Step 2: Discover file server

In this example I am going to store the ServerPool file systems on NFS from the ZFS Storage appliance. But it could be any NFS technology, or they could be stored directly on ISCSI/FC LUNs.

OVM> list FileServerPlugin
Command: list FileServerPlugin
Status: Success
Time: 2017-10-19 14:51:31,311 CEST
Data:
id:oracle.ocfs2.OCFS2.OCFS2Plugin (0.1.0-47.5)  name:Oracle OCFS2 File system
id:oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  name:Oracle Generic Network File System

OVM> create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Command: create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Status: Success
Time: 2017-10-19 14:58:46,411 CEST
JobId: 1508417926209
Data:
id:0004fb00000900004801ecf9996f1d43  name:zfsstorage

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145

OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Command: refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Status: Success
Time: 2017-10-19 17:42:28,516 CEST
JobId: 1508427714903

OVM> refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Command: refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Status: Success
Time: 2017-10-19 17:43:02,144 CEST
JobId: 1508427760257


Step 3: Discover NAS Storage

OVM> list StorageArrayPlugin
Command: list StorageArrayPlugin
Status: Success
Time: 2017-10-19 15:28:23,932 CEST
Data:
id:oracle.s7k.SCSIPlugin.SCSIPlugin (2.1.2-3)  name:zfs_storage_iscsi_fc
id:oracle.generic.SCSIPlugin.GenericPlugin (1.1.0)  name:Oracle Generic SCSI Plugin

OVM> create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=oracle pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Command: create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=***** pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Status: Success
Time: 2017-10-19 15:48:00,761 CEST
JobId: 1508420880565
Data:
id:0004fb0000090000c105d1003f051fbd  name:zfsstorage

OVM> addAdminServer StorageArray name=zfsstorage server=ovs001
Command: addAdminServer StorageArray name=zfsstorage server=ovs001
Status: Success
Time: 2017-10-19 16:11:32,448 CEST
JobId: 1508422292175

OVM> addAdminServer StorageArray name=zfsstorage server=ovs002
Command: addAdminServer StorageArray name=zfsstorage server=ovs002
Status: Success
Time: 2017-10-19 16:11:35,424 CEST
JobId: 1508422295266

OVM> validate StorageArray name=zfsstorage
Command: validate StorageArray name=zfsstorage
Status: Success
Time: 2017-10-19 16:10:04,937 CEST
JobId: 1508422128777

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145


Step 4: Creation of a server pool

OVM needs to put its physical servers into a logical space called a server pool. A server pool will use at least 2 storage spaces:

  • a storage space for the cluster configuration and the disk heartbeat (must be at least 10GB according to OVM 3.4's recommendations); it is better to separate the network access for this storage space in order to avoid unwanted cluster evictions.
  • a storage space for the server pool in which we can store VM configuration files, Templates, ISOs and so on.
OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Command: create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Status: Success
Time: 2017-10-19 17:15:11,431 CEST
Data:
id:0004fb0000020000c6b2c32fc58646e7  name:prod01


Step 5: Add servers to the server pool

OVM> list server
Command: list server
Status: Success
Time: 2017-10-19 17:15:28,111 CEST
Data:
id:65:72:21:77:7b:0d:47:47:bc:43:e5:1f:64:3d:56:d9  name:ovs002
id:bb:06:3c:3e:a4:76:4b:e2:9c:bc:65:69:4e:35:28:b4  name:ovs001

OVM> add Server name=ovs001 to ServerPool name=prod01
Command: add Server name=ovs001 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:17:55,131 CEST
JobId: 1508426260895

OVM> add Server name=ovs002 to ServerPool name=prod01
Command: add Server name=ovs002 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:18:21,439 CEST
JobId: 1508426277115


Step 6: Creation of a repository to store the VMs' configuration files and to import the Oracle Template

OVM> list filesystem
Command: list filesystem
Status: Success
Time: 2017-10-19 17:44:23,811 CEST
Data:
id:0004fb00000500009cbc79dde9b6649e  name:Server Pool File System
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Command: create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Status: Success
Time: 2017-10-19 17:45:22,346 CEST
JobId: 1508427888238
Data:
id:0004fb0000030000f1c8182390a36c8c  name:RepoOracle

OVM> add ServerPool name=prod01 to Repository name=RepoOracle
Command: add ServerPool name=prod01 to Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:08,020 CEST
JobId: 1508428361049

OVM> refresh Repository name=RepoOracle
Command: refresh Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:40,922 CEST
JobId: 1508428394212

OVM> importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Command: importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Status: Success
Time: 2017-11-02 12:05:29,341 CET
JobId: 1509619956729
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz


Step 7: Create VMs called rac001 and rac002 for my 2-node RAC

Here we create VMs by cloning the template OVM_OL7U4_X86_64_12201DBRAC_PVHVM from Oracle.

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:06,077 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz

OVM> edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Command: edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Status: Success
Time: 2017-11-02 12:07:30,392 CET
JobId: 1509620850142

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:36,282 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Status: Success
Time: 2017-11-02 12:31:31,798 CET
JobId: 1509622291342
Data:
id:0004fb0000060000d4819629ebc0687f  name:rac001

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Status: Success
Time: 2017-11-02 13:57:34,125 CET
JobId: 1509627453634
Data:
id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 15:23:54,077 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM
id:0004fb0000060000d4819629ebc0687f  name:rac001
id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> edit vm name=rac001 memory=2048 memoryLimit=2048
Command: edit vm name=rac001 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:45,542 CET
JobId: 1509639285374

OVM> edit vm name=rac002 memory=2048 memoryLimit=2048
Command: edit vm name=rac002 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:59,458 CET
JobId: 1509639299301


Step 8: Network definition for RAC interconnect and application network

create Network roles=VIRTUAL_MACHINE name=Application-Network
create Network roles=VIRTUAL_MACHINE name=Interco-Network-01
create Network roles=VIRTUAL_MACHINE name=Interco-Network-02

OVM> list network
Command: list network
Status: Success
Time: 2017-10-17 00:31:53,673 CEST
Data:
id:108572a7ca  name:Application-Network
id:10922ff6d7  name:Interco-Network-01
id:106765828d  name:Interco-Network-02

 

Next, we attach the physical OVS NICs to the corresponding networks:

OVM> list port
Command: list port
Status: Success
Time: 2017-11-02 16:03:40,026 CET
Data:
id:0004fb00002000007667fde85d2a2944  name:eth0 on ovs002
id:0004fb00002000001fa791c597d71947  name:eth1 on ovs002
id:0004fb00002000003842bd1f3acb476b  name:eth2 on ovs002
id:0004fb000020000031652acb25248275  name:eth3 on ovs002
id:0004fb00002000006fb524dac1f2319c  name:eth4 on ovs001
id:0004fb0000200000748a37db41f80fb2  name:eth4 on ovs002
id:0004fb00002000000178e5cefb3c0161  name:eth3 on ovs001
id:0004fb000020000020373da7c0cdf4cf  name:eth2 on ovs001
id:0004fb0000200000b0e747714aa822b7  name:eth1 on ovs001
id:0004fb00002000002787de2e68f61ecd  name:eth0 on ovs001

add Port id=0004fb0000200000b0e747714aa822b7 to Network name=Application-Network
add Port id=0004fb00002000000178e5cefb3c0161 to Network name=Interco-Network-01
add Port id=0004fb00002000006fb524dac1f2319c to Network name=Interco-Network-02

add Port id=0004fb00002000001fa791c597d71947 to Network name=Application-Network
add Port id=0004fb000020000031652acb25248275 to Network name=Interco-Network-01
add Port id=0004fb0000200000748a37db41f80fb2 to Network name=Interco-Network-02

 

Then we create the Virtual NICs for the Virtual Machines (the order matters, as the first one created will fill the first slot of the VM):

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:25:54,571 CET
Data:
id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
id:0004fb00000700005351eb55314ab34e  name:Template Vnic

create Vnic name=rac001_vnic_admin network=Admin-Network on Vm name=rac001
create Vnic name=rac001_vnic_application network=Application-Network on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-01 on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-02 on Vm name=rac001

create Vnic name=rac002_vnic_admin network=Admin-Network on Vm name=rac002
create Vnic name=rac002_vnic_application network=Application-Network on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-01 on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-02 on Vm name=rac002

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:27:34,642 CET
Data:
id:0004fb00000700005631bb2fbbeed53c  name:rac002_vnic_interconnect
id:0004fb00000700005e93ec7e8cf529b6  name:rac001_vnic_interconnect
id:0004fb0000070000c091c9091b464846  name:rac002_vnic_admin
id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
id:0004fb00000700009430b0a26566d6e3  name:rac002_vnic_application
id:0004fb0000070000c4113fb1d9375791  name:rac002_vnic_interconnect
id:0004fb00000700005351eb55314ab34e  name:Template Vnic
id:0004fb0000070000e1abd7e572bffc3a  name:rac001_vnic_admin
id:0004fb000007000079bb1fbf1d1942c9  name:rac001_vnic_application
id:0004fb000007000085d8a41dc8fd768c  name:rac001_vnic_interconnect


Step 9: Shared disks attachment to VMs for RAC ASM

Thanks to the storage plugin available for the ZFS appliance, we can directly create LUNs from the OVM CLI. You may find the plugin for your storage vendor on the Oracle Web Site https://www.oracle.com/virtualization/storage-connect-partner-program.html.
The storage plugin needs to be installed on each OVS server, and the OVS servers need to be rediscovered after changes.
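
For example (a sketch, assuming the plugin is already deployed on the two OVS servers used above), the rediscovery can be triggered from the OVM CLI like this:

OVM> refresh Server name=ovs001
OVM> refresh Server name=ovs002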

create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu001 name=clu001dgclu001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu002 name=clu001dgclu002 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu003 name=clu001dgclu003 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu004 name=clu001dgclu004 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu005 name=clu001dgclu005 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata001 name=clu001dgdata001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata002 name=clu001dgdata002 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra001 name=clu001dgfra001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra002 name=clu001dgfra002 on VolumeGroup  name=data/local/OracleTech

OVM> list PhysicalDisk
Command: list PhysicalDisk
Status: Success
Time: 2017-11-02 11:44:41,624 CET
Data:
id:0004fb0000180000ae02df42a4c8e582  name:clu001dgclu004
id:0004fb0000180000d91546f7d1a09cfb  name:clu001dgclu005
id:0004fb0000180000ab0030fb540a55b9  name:clu001dgclu003
id:0004fb0000180000d20bb1d7d50d6875  name:clu001dgfra001
id:0004fb00001800009e39a0b8b1edcf90  name:clu001dgfra002
id:0004fb00001800003742306aa30bfdd4  name:clu001dgdata001
id:0004fb00001800006131006a7a9fd266  name:clu001dgdata002
id:0004fb0000180000a5177543a1ef0464  name:clu001dgclu001
id:0004fb000018000035bd38c6f5245f66  name:clu001dgclu002

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac001_clu001dgclu001 on Vm name=rac001
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac001_clu001dgclu002 on Vm name=rac001
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac001_clu001dgclu003 on Vm name=rac001
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac001_clu001dgclu004 on Vm name=rac001
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac001_clu001dgclu005 on Vm name=rac001
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac001_clu001dgdata001 on Vm name=rac001
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac001_clu001dgdata002 on Vm name=rac001
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac001_clu001dgfra001 on Vm name=rac001
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac001_clu001dgfra002 on Vm name=rac001

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac002_clu001dgclu001 on Vm name=rac002
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac002_clu001dgclu002 on Vm name=rac002
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac002_clu001dgclu003 on Vm name=rac002
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac002_clu001dgclu004 on Vm name=rac002
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac002_clu001dgclu005 on Vm name=rac002
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac002_clu001dgfra001 on Vm name=rac002
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002

 

#Output of an attachment:
OVM> create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Command: create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Status: Success
Time: 2017-11-02 15:49:44,573 CET
JobId: 1509634184144
Data:
id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002

 

OVM> list vmdiskmapping
Command: list vmdiskmapping
Status: Success
Time: 2017-11-02 15:50:05,117 CET
Data:
id:0004fb0000130000a2e52668e38d24f0  name:Mapping for disk Id (0004fb00001200008e5043cea31e4a1c.img)
id:0004fb00001300000b0202b6af4254b1  name:asm_disk_cluster_rac002_clu001dgclu003
id:0004fb0000130000f573415ba8af814d  name:Mapping for disk Id (0004fb0000120000073fd0cff75c5f4d.img)
id:0004fb0000130000217c1b6586d88d98  name:asm_disk_cluster_rac002_clu001dgclu002
id:0004fb00001300007c8f1b4fd9e845c4  name:asm_disk_cluster_rac002_clu001dgclu001
id:0004fb00001300009698cf153f616454  name:asm_disk_cluster_rac001_clu001dgfra002
id:0004fb0000130000c9caf8763df6bfe0  name:asm_disk_cluster_rac001_clu001dgfra001
id:0004fb00001300009771ff7e2a1bf965  name:asm_disk_cluster_rac001_clu001dgdata002
id:0004fb00001300003aed42abb7085053  name:asm_disk_cluster_rac001_clu001dgdata001
id:0004fb0000130000ac45b70bac2cedf7  name:asm_disk_cluster_rac001_clu001dgclu005
id:0004fb000013000007069008e4b91b9d  name:asm_disk_cluster_rac001_clu001dgclu004
id:0004fb0000130000a8182ada5a07d7cd  name:asm_disk_cluster_rac001_clu001dgclu003
id:0004fb00001300009edf25758590684b  name:asm_disk_cluster_rac001_clu001dgclu002
id:0004fb0000130000a93c8a73900cbf80  name:asm_disk_cluster_rac002_clu001dgfra001
id:0004fb0000130000c8c35da3ad0148c4  name:asm_disk_cluster_rac001_clu001dgclu001
id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002
id:0004fb0000130000ff84c64175d7e6c1  name:asm_disk_cluster_rac002_clu001dgdata
id:0004fb00001300009c08b1803928536d  name:Mapping for disk Id (dd3c390b29af49809caba202f234a443.img)
id:0004fb0000130000e85ace19b45c0ad6  name:Mapping for disk Id (0004fb00001200002aa671facc8a1307.img)
id:0004fb0000130000e595c3dc5788b87a  name:Mapping for disk Id (0004fb000012000087341e27f9faaa17.img)
id:0004fb0000130000c66fe2d0d66b7276  name:asm_disk_cluster_rac002_clu001dgdata
id:0004fb00001300009c85bca66c400366  name:Mapping for disk Id (46da481163424b739feeb08b4d22c1b4.img)
id:0004fb0000130000768a2af09207e659  name:asm_disk_cluster_rac002_clu001dgclu004
id:0004fb000013000092836d3ee569e6ac  name:asm_disk_cluster_rac002_clu001dgclu005

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:59:32,116 CET
JobId: 1509638311277

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:57:31,703 CET
JobId: 1509638191228

add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu003 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu004 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu005 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra002 to AccessGroup name=cluster001

#Output of an Access addition:
OVM> add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Command: add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 17:10:13,636 CET
JobId: 1509639013463
OVM> refreshStorageLayer Server name=ovs001
Command: refreshStorageLayer Server name=ovs001
Status: Success
Time: 2017-11-02 16:42:26,230 CET
JobId: 1509637330270

OVM> refreshStorageLayer Server name=ovs002
Command: refreshStorageLayer Server name=ovs002
Status: Success
Time: 2017-11-02 16:42:51,296 CET
JobId: 1509637355423

 

Final state: 2 VMs hosted on 2 different Servers with OS, Network and Storage requirements to run RAC 12.2.

This concludes this part and demonstrates how easy it can be to automate those commands and deploy many different architectures.
The next part will describe how to deploy a RAC 12.2 on top of this infrastructure with the Oracle DeployCluster Tool in a few commands …

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

 

Cet article Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 01) est apparu en premier sur Blog dbi services.

SQL Server Tips: Deactivate the Customer Experience Improvement Program (CEIP)

Tue, 2017-11-07 09:17

Before SQL Server 2016, you had the possibility to check the box "Send Windows and SQL Server Error Reports…" during the installation if you wanted to be part of the Customer Experience Improvement Program (CEIP).
In SQL Server 2016, after the installation, all of the CEIP services are automatically turned on.

Why?

SQL Server and SQL Azure share the same code now. On Azure, this service has existed for a long time. It collects a large amount of data to automate various tasks and keep the system functional, including support for the following:

  • Incident Management (CRIs, LSIs)
  • Alert management (proactive approach)
  • Automated management via bots (based on alerts)
  • Machine learning / data science
  • Investigating potential new features that can benefit a maximum of clients

As you can see, the idea of integrating the CEIP service with SQL 2016 is to be able to extend this ability to collect “useful” data to Microsoft in order to maximize the impact on future developments.

My Thinking

In this article, I do not want to start a discussion about whether to leave this service active or not.
With the guarantees given by Microsoft on the information collected, it is also not a question of security.
The SQL Server Team has published an explicit policy that spells out what and when data is collected: https://www.microsoft.com/EN-US/privacystatement/SQLServer/Default.aspx
As a lot of servers have no internet access, this service is often useless (as data cannot be sent).
In previous versions, I did not install the CEIP on Production environments. Following the same logic, I deactivate this service.

How to Deactivate the CEIP

To disable this service, we need 2 steps. I use PowerShell commands for both.
The first step is to deactivate all CEIP services.

Deactivate all CEIP services

CEIP is present for 3 SQL server services:

  • For SQL Server Engine, you have a SQL Server CEIP service
  • For SQL Server Analysis Service (SSAS), you have a SQL Analysis Services CEIP
  • For SQL Server Integration Service (SSIS), you have a SQL Server Integration Services CEIP service 13.0

CEIP01
As you can see in this picture, we have one CEIP service per instance and per service for the Engine & SSAS, and just one for SSIS (shared component).
If you have a look at each service, the naming patterns are the same:

  • For SQL Server CEIP service, you have a SQLTELEMETRY$<InstanceName>
  • For SQL Analysis Services CEIP, you have a SSASTELEMETRY$<InstanceName>
  • For SQL Server Integration Services CEIP service 13.0 CEIP, you have just SSISTELEMETRY130

CEIP02
I run PowerShell as Administrator and run the following commands to get the status of these services:

Get-WMiObject win32_service |? name -Like "SQLTELEMETRY*" | Format-Table name,startname,startmode,state
Get-WMiObject win32_service |? name -Like "SSASTELEMETRY*" | Format-Table name,startname,startmode,state
Get-WMiObject win32_service |? name -Like "SSISTELEMETRY*" | Format-Table name,startname,startmode,state

CEIP03
We can also be more generic and use this command:

Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state

CEIP04
To disable these services, I proceed in 2 steps. The first step is to stop the services and the second step is to disable them:

  • Stop services
    Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | ? state -eq "running" | Stop-Service
  • Disable services
    Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Set-Service -StartMode Disabled

Here you find the “global” script:

##################################################
# Disable CEIP services  #
##################################################
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
# Stop all CEIP services
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | ? state -eq "running" | Stop-Service
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
# Disable all CEIP services
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Set-Service -StartMode Disabled
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
##################################################

CEIP05
All CEIP services are now stopped and disabled. Good job, Stéphane 8-) , but it’s not finished, we have a second step to do…
The second step is to set all CEIP registry keys to 0.

Set all CEIP registry keys to 0

This step is more complex because we have a lot of registry keys. Two parameters have to be set to 0:

  • CustomerFeedback
  • EnableErrorReporting

The first registry key is HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\130\
CEIP06
The second registry key is HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\130\
CEIP07
The other registry keys are per instance and per services(Engine, SSAS and SSRS):
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\
CEIP08
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\
CEIP09
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\
CEIP10
To set all these keys to 0, I "simply" use PowerShell commands:

##################################################
#  Deactivate CEIP registry keys #
##################################################
# Set all CustomerFeedback & EnableErrorReporting in the key directory HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server to 0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\***\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\***\EnableErrorReporting=0
# *** --> Version of SQL Server (100,110,120,130,140,...)
# For the Engine
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\EnableErrorReporting=0
# For SQL Server Analysis Server (SSAS)
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\EnableErrorReporting=0
# For Server Reporting Server (SSRS)
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\EnableErrorReporting=0
# ** --> Version of SQL Server (10,11,12,13,14,...)
##################################################
$Key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$FoundKeys = Get-ChildItem $Key -Recurse | Where-Object -Property Property -eq 'EnableErrorReporting'
foreach ($Sqlfoundkey in $FoundKeys)
{
$SqlFoundkey | Set-ItemProperty -Name EnableErrorReporting -Value 0
$SqlFoundkey | Set-ItemProperty -Name CustomerFeedback -Value 0
}
##################################################
# Set HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\***\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\***\EnableErrorReporting=0
# *** --> Version of SQL Server(100,110,120,130,140,...)
##################################################
$WowKey = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server"
$FoundWowKeys = Get-ChildItem $WowKey | Where-Object -Property Property -eq 'EnableErrorReporting'
foreach ($SqlFoundWowKey in $FoundWowKeys)
{
$SqlFoundWowKey | Set-ItemProperty -Name EnableErrorReporting -Value 0
$SqlFoundWowKey | Set-ItemProperty -Name CustomerFeedback -Value 0
}

As you can see, I only filter on the EnableErrorReporting property in the Where-Object clause to find the impacted keys. After running this script, all CEIP registry keys are set to 0…
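
If you want to double-check the result, a quick verification can look like this (just a sketch; it covers the main hive and can be repeated with the Wow6432Node path):

# Verify that CustomerFeedback and EnableErrorReporting are now 0 everywhere
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server' -Recurse | Where-Object -Property Property -eq 'EnableErrorReporting' | Get-ItemProperty | Select-Object PSPath, CustomerFeedback, EnableErrorReporting
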
Et voila, CEIP is totally deactivated!

To finish, I want to thank all my SQL Server colleagues for their help in getting a good vision of this tricky subject. It was also a good internal discussion in our SQL Server Expert Team to define what to do at each customer!   :-)

 

Cet article SQL Server Tips: Deactivate the Customer Experience Improvement Program (CEIP) est apparu en premier sur Blog dbi services.

Pass Summit 2017: how to bypass SQL Server security

Mon, 2017-11-06 09:49

Last Friday I saw a very interesting session at Pass Summit 2017 about how to bypass, or ensure, SQL Server security, by Matt Martin.
Matt explained to us how to bypass SQL Server security with the complicity of your SQL Server DBA.
Msdb is the most powerful database to get stuff done: mail, jobs… so let's have a look at how to take over a SQL Server instance.

Start a job under SQLAgentOperator role

SQLAgentOperator gives you the right to execute any job, even if it runs under sysadmin.
If your SQL Server DBA grants your login membership in this role and a job runs a Stored Procedure that you are able to modify, you are the master of the box, as you can grant your login sysadmin.

Start a job from MSDB with parameter

The goal here is to ask your DBA to create a Stored Procedure that takes a job name as parameter, in order to simplify the process because you need to run several different jobs.

A procedure like that:

USE msdb
GO
CREATE PROC dbo.sp_run_job
  @name varchar(500)
  WITH EXECUTE AS OWNER
AS
  exec sp_start_job @job_name = @name

With this kind of Stored Procedure, you can start any job on the server, whoever owns it, and so do pretty much whatever you want.
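For example, once the procedure exists, any login with EXECUTE permission on it can start any job, including jobs owned by sysadmin (the job name below is of course hypothetical):

-- Hypothetical job name: the procedure runs as the msdb owner, so any job can be started
EXEC msdb.dbo.sp_run_job @name = 'NightlyMaintenance';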

Linked Server Elevated privilege

If you have a linked server that logs into a remote server with a sysadmin account, ask your DBA to enable RPC Out for the linked server, for a good reason like decreasing the load on both servers.
If your DBA does the job, you can execute this kind of script:

EXEC('ALTER SERVER ROLE sysadmin ADD MEMBER [ad\sts]') AT linked_Server1

Nice, isn’t it?

I need a sandbox database that I’m DBO on

The goal for this one is to convince your DBA to create a sandbox database for you on a production server, for a good reason like working with production data (a good reason?)…
Once done, ask him to schedule a Stored Procedure you wrote on a daily basis.
If he does, just alter your Stored Procedure with the following code:

ALTER SERVER ROLE SYSADMIN ADD MEMBER [ad\sts]

Another way to become quickly sysadmin!

Xp_cmdshell to take over the box

Xp_cmdshell runs under the SQL Server service account, which is often a local administrator of the box.
Ask your DBA to grant you permission on xp_cmdshell, arguing that you need to delete some files after importing them on the server.
If the account that runs SQL Server is a local Administrator, you can add your account as an administrator of the server and do whatever you want…

EXEC xp_cmdshell 'net localgroup administrators ad\sts /add'

Oops…

How to close Loopholes

Never add a business user to the SQLAgentOperator role.
Never use SA or another sysadmin account for a linked server; use a reader login with minimum permissions instead.
Never schedule a job that calls a Stored Procedure which a business user has ALTER access to.
Never allow a parameterized job call in msdb; always hardcode the job name to avoid any modification of it.

Just give the necessary permissions!
Create a server trigger which sends an email to the DBA when somebody becomes sysadmin on the instance. That way the DBA can immediately look at this new account and see if somebody is trying to squeeze the system.
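A minimal sketch of such a trigger could look like this. It assumes Database Mail is configured; the profile name and recipient are placeholders to adapt to your environment:

-- Sketch: alert the DBA team whenever a member is added to a server role
CREATE TRIGGER trg_alert_server_role_change
ON ALL SERVER
FOR ADD_SERVER_ROLE_MEMBER
AS
BEGIN
    DECLARE @event xml = EVENTDATA();
    DECLARE @tsql nvarchar(max) = @event.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
    IF @tsql LIKE '%sysadmin%'
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'DBA_Profile',          -- placeholder profile
             @recipients   = 'dba-team@example.com', -- placeholder recipient
             @subject      = 'New sysadmin member on the instance',
             @body         = @tsql;
END;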

 

The article Pass Summit 2017: how to bypass SQL Server security appeared first on Blog dbi services.

Multitenant internals: INT$ and INT$INT$ views

Sun, 2017-11-05 13:24

This month, I’ll talk – with lots of demos – about multitenant internals at the DOAG conference.
The multitenant dictionary architecture starts with a simple idea: system metadata and data are in CDB$ROOT and user metadata and data are in PDB. And it could have been that simple. When a session connected to a PDB needs to read some system information, the session context is switched to the CDB$ROOT container and reads from CDB$ROOT SYS tablespace and objects, and then switches back to the PDB container. This is implemented by metadata and data links: the PDB lists the objects for which the session has to switch to CDB$ROOT to get metadata or data.

But, for compatibility reasons and ease of administration, the dictionary views must transparently display information from both containers, and then things become a little more complex with common views and extended data views.

At Oracle Open World, the multitenant architects, in the #PDBExpert session, answered questions about the multitenant architecture posted on Twitter. My first question (because I was investigating a bug at that time) was about some views, such as INT$INT$DBA_CONSTRAINTS, introduced to implement the complexity of showing the same information in dictionary views as the ones we had on non-CDB. Of course, the architects didn’t want to go too far on this and had a very accurate answer: INT$ is for internal, and here you have two ‘INT$’ so you shouldn’t look at that.

But I like to understand how things work and here is the explanation of these INT$INT$ views. And I’m not even sure that INT is for ‘internal’ but maybe ‘intermediate’. But for sure, the $ at the end is used by Oracle internal dictionary objects.

INT$ Extended Data views

We are used to seeing all objects, system ones and user ones, listed by the dictionary views. For example, DBA_PROCEDURES shows all procedures, system and user ones, and then has to read from both containers (current PDB and CDB$ROOT) through extended data links. ALL_PROCEDURES shows all procedures accessible by the user, and it also has to switch to CDB$ROOT if the user has been granted access to system objects. USER_PROCEDURES shows only the objects owned by the current user, and thus can read from the current container only.

To ease the definitions, in 12c all the joins on the underlying tables (such as procedureinfo$, user$, obj$) are done by an intermediate view such as INT$DBA_PROCEDURES, which is defined as an EXTENDED DATA link to read from CDB$ROOT in addition to the local tables. Then DBA_PROCEDURES, ALL_PROCEDURES and USER_PROCEDURES are defined on top of it, with the required where clause to filter on owner and privilege accessibility.

INT$INT$ Extended Data views

In this post, I’ll detail the special case of DBA_CONSTRAINTS, where things are more complex in order to get the multitenant architecture to behave the same as a non-CDB.

There are several types of constraints, identified by the CONSTRAINT_TYPE column of DBA_CONSTRAINTS, or by the TYPE# column of the underlying table CDEF$.

Here, I query the underlying table with the CONTAINERS() function to see what is stored in each container:

SQL> select decode(type#,1,'C',2,'P',3,'U',4,'R',5,'V',6,'O',7,'C',8,'H',9,'F',10,'F',11,'F',13,'F','?') constraint_type,
2 type#,con_id,count(*) from containers(cdef$)
3 group by type#,con_id order by type#,con_id;
 
CONSTRAINT_TYPE TYPE# CON_ID COUNT(*)
--------------- ----- ------ --------
C                   1      1       74
C                   1      3       74
P                   2      1      843
P                   2      3      844
U                   3      1      238
U                   3      3      238
R                   4      1      324
R                   4      3      324
V                   5      1       11
O                   6      1      172
O                   6      3       26
C                   7      1     5337
C                   7      3     5337
F                  11      1       11
F                  11      3       11
?                  12      1        3
?                  12      3        3

I have very few user objects in this database. CON_ID=1 is CDB$ROOT and CON_ID=3 is my PDB. What we can see here is that we have nearly the same number of rows in both containers for the following constraint types: C (check constraint on a table), P (primary key), U (unique key), R (referential integrity), and other types related to tables. And some types have most of their rows in CDB$ROOT only: V (check option on views) and O (read only on views).

That’s an implementation specificity of the multitenant architecture which makes things more complex for the dictionary views. For some objects (such as procedures and views) the metadata is stored in only one container: system objects have all their information in CDB$ROOT and the PDB has only a link which is a dummy row in OBJ$ which mentions the sharing (such as metadata link), owner and name (to match to the object in CDB$ROOT), and a signature (to verify that the DDL creating the object is the same). But other objects (such as tables) have their information duplicated in all containers for system objects (CDB$ROOT, PDB$SEED and all user PDBs). This is the reason why we see rows in both containers for constraint definition when they are related to a table.

Example on view constraint

I’ll take a constraint on a system view as an example: constraint SYS_C003357 on SYS.DBA_XS_SESSIONS


SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='DBA_XS_SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS DBA_XS_SESSIONS VIEW METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from containers(dba_constraints) where owner='SYS' and table_name='DBA_XS_SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS DBA_XS_SESSIONS O SYS_C003357

I’m looking at the dependencies for the DBA_CONSTRAINTS view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS DBA_CONSTRAINTS SYS GETLONG
SYS DBA_CONSTRAINTS SYS INT$DBA_CONSTRAINTS

So DBA_CONSTRAINTS is a view on INT$DBA_CONSTRAINTS, as we have seen above. However, this view is not directly reading the base tables but another view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$DBA_CONSTRAINTS SYS GETLONG
SYS INT$DBA_CONSTRAINTS SYS INT$INT$DBA_CONSTRAINTS

Here is our additional INT$INT$ view which is reading the tables:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$INT$DBA_CONSTRAINTS SYS USER$
SYS INT$INT$DBA_CONSTRAINTS SYS CDEF$
SYS INT$INT$DBA_CONSTRAINTS SYS OBJ$
SYS INT$INT$DBA_CONSTRAINTS SYS CON$
SYS INT$INT$DBA_CONSTRAINTS SYS _CURRENT_EDITION_OBJ
SYS INT$INT$DBA_CONSTRAINTS SYS _BASE_USER
SYS INT$INT$DBA_CONSTRAINTS SYS GETLONG

In summary, the EXTENDED DATA view which reads the tables in each container (CDB$ROOT and PDB) is here INT$INT$DBA_CONSTRAINTS, and INT$DBA_CONSTRAINTS is another intermediate view below the final DBA_CONSTRAINTS view.


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('DBA_CONSTRAINTS','INT$DBA_CONSTRAINTS','INT$INT$DBA_CONSTRAINTS') order by object_id desc;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
PUBLIC DBA_CONSTRAINTS SYNONYM METADATA LINK
SYS DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$INT$DBA_CONSTRAINTS VIEW EXTENDED DATA LINK

In this example, we don’t see the reason for the additional intermediate view yet, because they all return the same number of rows in each container:


SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357

The only difference is a few additional columns from the object definition (OWNERID, OBJECT_ID, OBJECT_TYPE#, SHARING) in the INT$ and INT$INT$ views, which are not selected in the final view:

SQL> select * from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED ORIGIN_CON_ID CON_ID
----- --------------- --------------- ---------- ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------------- ------
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17

If we look at the INT$DBA_CONSTRAINTS definition, we see some filters on those object definition columns:

SQL> ddl INT$DBA_CONSTRAINTS
 
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."INT$DBA_CONSTRAINTS" ("OWNER", "OWNERID", "CONSTRAINT_NAME", "CONSTRAINT_TYPE", "TABLE_NAME", "OBJECT_ID", "OBJECT_TYPE#", "SEARCH_CONDITION", "SEARCH_CONDITION_VC", "R_OWNER", "R_CONSTRAINT_NAME", "DELETE_RULE", "STATUS", "DEFERRABLE", "DEFERRED", "VALIDATED", "GENERATED", "BAD", "RELY", "LAST_CHANGE", "INDEX_OWNER", "INDEX_NAME", "INVALID", "VIEW_RELATED", "SHARING", "ORIGIN_CON_ID") AS
select OWNER, OWNERID, CONSTRAINT_NAME, CONSTRAINT_TYPE,
TABLE_NAME, OBJECT_ID, OBJECT_TYPE#, SEARCH_CONDITION,
SEARCH_CONDITION_VC, R_OWNER, R_CONSTRAINT_NAME, DELETE_RULE, STATUS,
DEFERRABLE, DEFERRED, VALIDATED, GENERATED,
BAD, RELY, LAST_CHANGE, INDEX_OWNER, INDEX_NAME,
INVALID, VIEW_RELATED, SHARING, ORIGIN_CON_ID
from INT$INT$DBA_CONSTRAINTS INT$INT$DBA_CONSTRAINTS
where INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 4 /* views */
OR (INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 2 /* tables */
AND (INT$INT$DBA_CONSTRAINTS.ORIGIN_CON_ID
= TO_NUMBER(SYS_CONTEXT('USERENV', 'CON_ID'))));

For views (OBJECT_TYPE#=4) there is no filter, which explains why we see the same number of rows in the previous example. But for tables (OBJECT_TYPE#=2) there’s an additional filter to keep the row from the current container only.

Example on table constraint

Then, I’ll take another example with a constraint definition for a table:

SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='RXS$SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS RXS$SESSIONS TABLE METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from dba_constraints where owner='SYS' and table_name='RXS$SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS RXS$SESSIONS C SYS_C003339

From the INT$INT$ view, we get a duplicate when we query from a PDB, because for tables the PDB does not only hold a dummy row in OBJ$: the full information about the table is duplicated in other tables such as TAB$ and CDEF$:

SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339
3 C SYS_C003339

This is the reason for the additional intermediate view: filtering out those duplicates by removing the rows coming from CDB$ROOT when queried from a PDB.

SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339

Thanks to that, the duplicates are not visible in the end-user views DBA_CONSTRAINTS and CDB_CONSTRAINTS.

You may wonder why only DBA_CONSTRAINTS needs this additional view and not DBA_TABLES, DBA_INDEXES or DBA_TAB_COLUMNS. That’s because all information about system tables and indexes is replicated in all PDBs, so there is no need for EXTENDED DATA links and context switches there. DBA_CONSTRAINTS has the particularity of showing information about both tables and views, which implement metadata links in different ways.
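If you are curious about which dictionary views needed this second level of intermediate view in your release, a quick look at DBA_OBJECTS gives the list (the exact result depends on the version):

select object_name, sharing
from dba_objects
where owner = 'SYS' and object_type = 'VIEW' and object_name like 'INT$INT$%'
order by object_name;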

 

The article Multitenant internals: INT$ and INT$INT$ views appeared first on Blog dbi services.

Multitenant dictionary: what is consolidated and what is not

Sun, 2017-11-05 11:38

The documentation says that, for reduction of duplication and ease of database upgrade, the Oracle-supplied objects such as data dictionary table definitions and PL/SQL packages are represented only in the root.

Unfortunately, this is only partly true. System PL/SQL packages are only in the root, but system table definitions are replicated into all PDBs.

This post is an extension of a previous blog post which was on 12cR1. This one is on 12cR2.

As I did at Open World and will do at DOAG, I show multitenant internals by creating a metadata link procedure. When I do a simple ‘describe’ when connected to a PDB, the sql_trace shows that the session switches to the CDB$ROOT to get the procedure information:

*** 2017-11-05T16:17:36.339126+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140420856738440 len=143 dep=1 uid=0 oct=3 lid=0 tim=101728244788 hv=2206365737 ad='7f60a7f0' sqlid='9fjf75a1s4y19'
select procedure#,procedurename,properties,itypeobj#, properties2 from procedureinfo$ where obj#=:1 order by procedurename desc, overload# desc
END OF STMT

All information about the system PL/SQL procedures is stored in the root only. The PDB has only a dummy row in OBJ$ to mention that it is a metadata link. And this is why you pay for the multitenant option: consolidation of all system dictionary objects into the root only. You save space (on disk and related memory) and you have only one place to upgrade.
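You can also see this directly from the PDB: the SHARING column of DBA_OBJECTS shows that system PL/SQL objects are metadata links, with only a stub row stored locally. A quick check on a well-known package (output not shown here):

-- From the PDB: system packages are metadata links, only a stub row exists locally
select owner, object_name, object_type, sharing
from dba_objects
where owner = 'SYS' and object_name = 'DBMS_OUTPUT';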

But this is implemented only for some objects, like PL/SQL procedures, and not for others, like tables and indexes. If you ‘describe’ a metadata link table when connected to a PDB, you will not see any switch to CDB$ROOT in the sql_trace:

*** 2017-11-05T13:01:53.541231+01:00 (PDB1(3))
PARSING IN CURSOR #139835947128936 len=86 dep=1 uid=0 oct=3 lid=0 tim=98244321664 hv=2195287067 ad='75f823b8' sqlid='32bhha21dkv0v'
select col#,intcol#,charsetid,charsetform from col$ where obj#=:1 order by intcol# asc
END OF STMT
PARSE #139835947128936:c=0,e=158,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=3765558045,tim=98244321664
BINDS #139835947128936:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=08 fl2=1000001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f2e124fef10 bln=22 avl=03 flg=05
value=747
EXEC #139835947128936:c=1000,e=603,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=3765558045,tim=98244322311
FETCH #139835947128936:c=0,e=15,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,plh=3765558045,tim=98244322342
FETCH #139835947128936:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=3765558045,tim=98244322356
FETCH #139835947128936:c=0,e=4,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3765558045,tim=98244322369
STAT #139835947128936 id=1 cnt=2 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=3 pr=0 pw=0 str=1 time=16 us cost=3 size=234 card=13)'
STAT #139835947128936 id=2 cnt=2 pid=1 pos=1 obj=21 op='TABLE ACCESS CLUSTER COL$ (cr=3 pr=0 pw=0 str=1 time=11 us cost=2 size=234 card=13)'
STAT #139835947128936 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=6 us cost=1 size=0 card=1)'
CLOSE #139835947128936:c=0,e=1,dep=1,type=3,tim=98244322439

Here all information about the columns is read from COL$ in the PDB. And if you look at TAB$ (tables), COL$ (table columns), IND$ (indexes), CON$ and CDEF$ (constraints), you will see that they contain rows in a PDB where no user objects have been created. This is the case for all information related to tables: it is stored in CDB$ROOT and replicated into all other containers: PDB$SEED and all user-created PDBs. Only the information related to non-data objects is stored in one single container.

I’ve run a query to count the rows in CDB$ROOT and PDB$SEED and here is the result:
[Screenshot: number of rows per dictionary table in CDB$ROOT and PDB$SEED]
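The query behind this screenshot is not reproduced here, but a minimal sketch to get similar numbers for a few of those dictionary tables, connected as SYS in CDB$ROOT, looks like this. Note that, depending on the version and parameters, CONTAINERS() may exclude PDB$SEED, in which case you can compare CDB$ROOT with a user PDB instead:

-- Count rows per container for some dictionary base tables (sketch, run as SYS in CDB$ROOT)
select 'OBJ$' dictionary_table, con_id, count(*) from containers(sys.obj$) group by con_id
union all
select 'TAB$', con_id, count(*) from containers(sys.tab$) group by con_id
union all
select 'COL$', con_id, count(*) from containers(sys.col$) group by con_id
order by 1, 2;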

All rows in OBJ$ are replicated, which is expected because this is where the metadata link information is stored. But you also see that all information related to tables is replicated, such as the 100000+ columns in COL$. And this is the reason why you do not see a big consolidation benefit when you look at the size of the SYSTEM tablespace in pluggable databases which do not contain any user data:

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
   1      820 SYSTEM               YES     /u01/oradata/CDB1A/system01.dbf
   3      630 SYSAUX               NO      /u01/oradata/CDB1A/sysaux01.dbf
   4       80 UNDOTBS1             YES     /u01/oradata/CDB1A/undotbs01.dbf
   5      250 PDB$SEED:SYSTEM      NO      /u01/oradata/CDB1A/pdbseed/system01.dbf
   6      390 PDB$SEED:SYSAUX      NO      /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
   7        5 USERS                NO      /u01/oradata/CDB1A/users01.dbf
   8      100 PDB$SEED:UNDOTBS1    NO      /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
   9      270 PDB1:SYSTEM          YES     /u01/oradata/CDB1A/PDB1/system01.dbf
  10      440 PDB1:SYSAUX          NO      /u01/oradata/CDB1A/PDB1/sysaux01.dbf
  11      100 PDB1:UNDOTBS1        YES     /u01/oradata/CDB1A/PDB1/undotbs01.dbf
  12        5 PDB1:USERS           NO      /u01/oradata/CDB1A/PDB1/users01.dbf

Here I have 250MB in PDB$SEED which is supposed to contain only links to the 820MB SYSTEM tablespace, but there is a lot more than that.

So, basically, not all of the dictionary is consolidated in multitenant, but only the non-data part such as those PL/SQL packages and the dictionary view definitions. You can think of the multitenant option consolidation as an extension of sharing the Oracle Home among several databases. It concerns the software part only. But the part of the dictionary which contains data about system objects is replicated into all containers, and is read locally without a context switch. This also means that a patch or upgrade on this part has to be run in all containers.

The fact that some information is replicated and some is not brings the complexity of managing that in the dictionary views, and this will be the subject of the next blog post about INT$INT$ views.

 

The article Multitenant dictionary: what is consolidated and what is not appeared first on Blog dbi services.

Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication?

Fri, 2017-11-03 04:03

When you start using PostgreSQL 10 logical replication, you might think it is a good idea to set up bi-directional replication so you end up with two or more masters that are all writable. I will not go into the details of multi-master replication here (conflict resolution, …) but will show what happens when you try to do that. Let’s go…

My two instances run on the same host, one on port 6000, the other one on port 6001. To start, I’ll create the same table in both instances:

postgres=# create table t1 ( a int primary key, b varchar(50) );
CREATE TABLE
postgres=# alter table t1 replica identity using INDEX t1_pkey;
ALTER TABLE
postgres=# \d+ t1
                                            Table "public.t1"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer               |           | not null |         | plain    |              | 
 b      | character varying(50) |           |          |         | extended |              | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a) REPLICA IDENTITY

Create the same publication on both sides:

postgres=# create publication my_pub for table t1;
CREATE PUBLICATION
postgres=# select * from pg_publication;
 pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete 
---------+----------+--------------+-----------+-----------+-----------
 my_pub  |       10 | f            | t         | t         | t
(1 row)
postgres=# select * from pg_publication_tables;
 pubname | schemaname | tablename 
---------+------------+-----------
 my_pub  | public     | t1
(1 row)

Create the same subscription on both sides (except for the port, of course):

postgres=# show port;
 port 
------
 6000
(1 row)
postgres=# create subscription my_sub connection 'host=localhost port=6001 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
---------+---------+----------+------------+--------------------------------------------------------+-------------+-
   13212 | my_sub  |       10 | t          | host=localhost port=6001 dbname=postgres user=postgres | my_sub      | 
(1 row)


### second instance

postgres=# show port;
 port 
------
 6001
(1 row)

postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
---------+---------+----------+------------+--------------------------------------------------------+-------------+-
   13212 | my_sub  |       10 | t          | host=localhost port=6000 dbname=postgres user=postgres | my_sub      | 
(1 row)

So far, so good, everything worked until now. Now let’s insert a row into the first instance:

postgres=# insert into t1 (a,b) values (1,'a');
INSERT 0 1
postgres=# select * from t1;
 a | b 
---+---
 1 | a
(1 row)

That seemed to work as well, as the row is there on the second instance too:

postgres=# show port;
 port 
------
 6001
(1 row)

postgres=# select * from t1;
 a | b 
---+---
 1 | a
(1 row)

But: When you take a look at the log file of the first instance you’ll see something like this (which is repeated over and over again):

2017-11-03 09:56:29.176 CET - 2 - 10687 -  - @ ERROR:  duplicate key value violates unique constraint "t1_pkey"
2017-11-03 09:56:29.176 CET - 3 - 10687 -  - @ DETAIL:  Key (a)=(1) already exists.
2017-11-03 09:56:29.178 CET - 29 - 10027 -  - @ LOG:  worker process: logical replication worker for subscription 16437 (PID 10687) exited with exit code 1
2017-11-03 09:56:34.198 CET - 1 - 10693 -  - @ LOG:  logical replication apply worker for subscription "my_sub" has started

Now the second instance is constantly trying to insert the same row back into the first instance, and that obviously cannot work as the row is already there. So the answer to the original question is: do not try to do that, it will not work anyway.
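To stop the error loop in this test setup, the simplest way is to drop the subscription again on the instance that keeps failing (this also removes the corresponding replication slot on the other side):

postgres=# drop subscription my_sub;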

 

The article Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication? appeared first on Blog dbi services.

Quick history on database growth

Thu, 2017-11-02 12:13

AWR collects segment statistics, and this can be used to quickly understand an abnormal database growth. Here is a script I use to get, from the AWR history, the segments that have grown by more than 1% of the database size, in one hour.

First, I must mention that this uses only the part of AWR which is not subject to any additional option. It even works in Standard Edition:
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access string NONE

So here is the query, easy to modify with a different threshold:
set echo on pagesize 1000
set sqlformat ansiconsole
select * from (
select
round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024) GBYTE_ALLOCATED
,trunc(max(end_interval_time),'hh24') snap_time
,round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024*24*(cast(max(end_interval_time) as date)-cast(min(begin_interval_time) as date))) "GB/hour"
,owner,object_name,subobject_name,object_type
from DBA_HIST_SEG_STAT join DBA_HIST_SEG_STAT_OBJ using (dbid,ts#,obj#,dataobj#) join dba_hist_snapshot using(dbid,snap_id)
group by trunc(end_interval_time,'hh24'),owner,object_name,subobject_name,object_type
) where "GB/hour" > (select sum(bytes)/1024/1024/1024/1e2 "one percent of database size" from dba_data_files)
order by snap_time
;

and the sample output, showing only the snapshots and segments where more than 1% of the database size has been allocated within one hour:

GBYTE_ALLOCATED SNAP_TIME            GB/hour OWNER   OBJECT_NAME               SUBOBJECT_NAME OBJECT_TYPE
--------------- -------------------- ------- ------- ------------------------- -------------- -----------
              4 25-OCT-2017 19:00:00       4 BIGDATA SYS_LOB0000047762C00006$$                LOB
              9 25-OCT-2017 20:00:00       9 BIGDATA SYS_LOB0000047762C00006$$                LOB
              9 25-OCT-2017 21:00:00       9 BIGDATA SYS_LOB0000047762C00006$$                LOB
              3 25-OCT-2017 22:00:00       3 BIGDATA SYS_LOB0000047762C00006$$                LOB
              5 26-OCT-2017 00:00:00       5 BIGDATA SYS_LOB0000047762C00006$$                LOB
              6 26-OCT-2017 01:00:00       6 BIGDATA SYS_LOB0000047762C00006$$                LOB
              7 26-OCT-2017 02:00:00       7 BIGDATA SYS_LOB0000047762C00006$$                LOB
              7 26-OCT-2017 03:00:00       7 BIGDATA SYS_LOB0000047762C00006$$                LOB
              7 26-OCT-2017 04:00:00       7 BIGDATA SYS_LOB0000047762C00006$$                LOB
              5 26-OCT-2017 05:00:00       5 BIGDATA SYS_LOB0000047762C00006$$                LOB
              2 26-OCT-2017 06:00:00       2 BIGDATA SYS_LOB0000047719C00008$$                LOB
              2 26-OCT-2017 06:00:00       2 BIGDATA SYS_LOB0000047710C00006$$                LOB

With this, it is easier to ask the application owners whether this growth is normal or not.
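And to know exactly which table the growing LOB segment belongs to before going to the application owner, DBA_LOBS maps the segment name (taken from the output above) back to its table and column:

-- Map the growing LOB segment back to its table and column
select owner, table_name, column_name
from dba_lobs
where segment_name = 'SYS_LOB0000047762C00006$$';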

 

The article Quick history on database growth appeared first on Blog dbi services.

Are large objects supported in PostgreSQL 10 logical replication

Thu, 2017-11-02 01:32

Another interesting topic that popped up last week during pgconfeu: Are large objects supported with logical replication in PostgreSQL 10? The only truth is a test, isn’t it? Let’s go…

Obviously, we need a table containing some large objects to start with:

postgres=# create table t1 ( a int, b oid);
CREATE TABLE

Before inserting some data, let’s create a publication for that table right now:

postgres=# create publication my_pub for table t1;
CREATE PUBLICATION

Ok, that works. Now we need a subscription for that, so on a second instance:

postgres=# create table t1 ( a int, b oid);
CREATE TABLE
postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION

So far, so good. Let’s insert some data on the publishing instance and see what happens:

postgres=# \! which cp
/usr/bin/cp
postgres=# insert into t1 (a,b) values (1, lo_import('/usr/bin/cp'));
INSERT 0 1

That worked. What do we see on the subscription side?

postgres=# select * from t1;
 a |   b   
---+-------
 1 | 16418
(1 row)

postgres=# select * from pg_size_pretty ( pg_relation_size ( 't1' ) );
 pg_size_pretty 
----------------
 8192 bytes
(1 row)

So, at least “something” is there. Let’s prove it:

postgres=# select lo_export(b,'/tmp/cp') from t1;
ERROR:  large object 16418 does not exist
postgres=# 

Hm, this is not what was expected, right? Doing the same on the publishing side works:

postgres=# select lo_export(b,'/tmp/cp') from t1;
 lo_export 
-----------
         1
(1 row)

postgres=# \! chmod +x /tmp/cp
postgres=# \! /tmp/cp --help | head -1
Usage: /tmp/cp [OPTION]... [-T] SOURCE DEST

This means the OID is replicated but not the large object itself. So the answer is: no, large objects cannot be used with PostgreSQL 10 logical replication.
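You can confirm this on the subscriber by looking up the OID from the table (16418 above) in the large object catalog: it has no entry in pg_largeobject_metadata there, while the same query on the publisher returns a row:

postgres=# select oid from pg_largeobject_metadata where oid = 16418;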

 

The article Are large objects supported in PostgreSQL 10 logical replication appeared first on Blog dbi services.
