Feed aggregator

Oracle Code : Paris – The Journey Begins

Tim Hall - Tue, 2018-07-03 01:40

It was a normal start to the day. I woke up with my regular work alarm, packed and got a taxi to the airport.

The drive was quick and the taxi driver was interesting, which helps. I couldn’t do online check-in because my ticket was with Air France, but the flight was Flybe. Neither website would let me check in online. I was dreading an epic queue, but fortunately the airport was quiet. Even so, I witnessed someone wearing ear-buds being asked the same question multiple times. Can’t we pass a law to make it legal to smack people that do this?

The flight to Paris was due to take off at 11:35, but it was about 11:50 when we finally departed. I got lucky with a free seat next to me, so I was able to get the laptop out and do some work. I was not so lucky with the folks on the other side of the aisle, who were far too loud.

I took a train from the airport to the city centre, then got a taxi from there to my hotel. It was about 5 minutes walk from the conference venue and 10 minutes from the Eiffel Tower, so I walked across to check them both out, then it was back to the hotel to run through my session and demo for tomorrow, then crash…

Cheers

Tim…


Email Domain Extraction using SQL query

Tom Kyte - Tue, 2018-07-03 01:06
If the part of the email before the domain matches, e.g. xyz@gmail.com and xyzef@gmail.com: if xyz and xyzef do not match, then do not consider these records; and if the scenario is like, if the non-domain parts are equal like xyz@gmail.com xyz@g...
Categories: DBA Blogs

Make Index Invisible for a session

Tom Kyte - Tue, 2018-07-03 01:06
Hi, with the advent of In-Memory capabilities in 12c, is there a way to make the optimizer ignore indexes for a particular table for a session? The reason being, if we want to use Oracle 12c as HTAP (Hybrid Transactional Analytical Processing) we wou...
Categories: DBA Blogs

ORA-31011: XML parsing failed issue

Tom Kyte - Tue, 2018-07-03 01:06
While extracting data from XML, a parsing issue occurs. The issue occurs while extracting data for field_name 401K_LOAN_1 and 401K_LOAN_2. Please advise. <code>PROCEDURE SP1( SXML IN CLOB, p_status OUT VARCHAR2, p_message OUT VARCHAR...
Categories: DBA Blogs

XMLForest for more than one table

Tom Kyte - Tue, 2018-07-03 01:06
Hi all, I'm trying to make a query from 2 tables with an XML function, but I have some problems. This is the SQL I'm using for this sample select from the 2 different tables: <code>select deptno,dname from dept; select empno,ename,de...
Categories: DBA Blogs

Python cx_Oracle 6.4 Brings a World Cup of Improvements

Christopher Jones - Mon, 2018-07-02 19:58

cx_Oracle 6.4, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that conforms to the Python Database API specification, with many additions to support advanced Oracle Database features.

At a nicely busy time of year, cx_Oracle 6.4 has landed. To keep it brief, I'll point you to the release notes, since there have been quite a number of improvements. Some of them will significantly help your apps.

A few things to note:

  • Improvements to Continuous Query Notification and Advanced Queuing notifications

  • Improvements to session pooling

  • A new encodingErrors setting to choose how to handle decoding corrupt character data queried from the database

  • You can now use a cursor as a context manager:

    with conn.cursor() as c:
        c.execute("SELECT * FROM DUAL")
        result = c.fetchall()
        print(result)
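
Building on that, here is a minimal, hedged sketch combining the context manager with the new encodingErrors setting, which is passed through to cursor.var(); the connect string is a placeholder:

import cx_Oracle

def output_type_handler(cursor, name, default_type, size, precision, scale):
    # "replace" swaps undecodable bytes for the Unicode replacement character
    # instead of raising an error while fetching.
    if default_type == cx_Oracle.STRING:
        return cursor.var(default_type, size, cursor.arraysize,
                          encodingErrors="replace")

conn = cx_Oracle.connect("user/password@dsn")  # placeholder credentials
conn.outputtypehandler = output_type_handler
with conn.cursor() as c:
    c.execute("SELECT * FROM DUAL")
    print(c.fetchall())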
cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Event Sourcing: CQN is not a replacement for CDC

Yann Neuhaus - Mon, 2018-07-02 16:02

We are in an era where software architects want to stream the transactions out of the database and distribute them, as events, to multiple microservices. Don't ask why, but that's the trend: store inconsistent (read: eventually consistent) copies of data in different physical components, rather than simply using logical views in the same database, where the data is ACIDly stored, processed and protected. Because it was decided that this segregation, in CQRS (Command Query Responsibility Segregation), would be physical, on different systems, the need for logical replication and change data capture is rising, under a new name: Event Sourcing.

When we want to replicate the changes without adding overhead to the database, the solution is Change Data Capture from the redo stream. The redo contains all the physical changes and, with dictionary information and a little supplemental logging, we can mine it to extract the logical changes. There are currently commercial products (Oracle GoldenGate, Attunity, Dbvisit replicate) and some open source ones based on LogMiner (StreamSets, Debezium). LogMiner is available on all Oracle Database editions without any option. In Enterprise Edition, a more efficient solution was possible with Streams, but now you have to pay for GoldenGate to use Streams. Unfortunately, you sometimes pay for software updates only to see features removed and sold back in additional products.

Oracle has another feature that can help to replicate changes: Database Change Notification, now known as Continuous Query Notification (CQN) or Object Change Notification (OCN). This feature was implemented to refresh caches: you have a query that loads the cache, and you want to be notified when changes occur so that you know when to refresh the cache. In theory, then, this could be used to stream out the changes. However, CQN was not built for frequent changes, but rather for nearly static or slowly changing data. Sometimes we have to test for ourselves, though, so here is my test using CQN with a lot of changes on the underlying table, just to show how it increases the load on the database and slows down the changes.
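
As an aside, the same kind of CQN registration can also be created from client code. Here is a minimal, hedged sketch using cx_Oracle (covered earlier in this feed), assuming a placeholder user/password@dsn connect string and the DEMO table created below:

import time
import cx_Oracle

def on_change(message):
    # message.tables lists the tables touched by the notified transaction
    for table in message.tables:
        print("change on", table.name, "operation flags:", table.operation)

# events=True is required for notification subscriptions
conn = cx_Oracle.connect("user/password@dsn", events=True)
subscr = conn.subscribe(namespace=cx_Oracle.SUBSCR_NAMESPACE_DBCHANGE,
                        callback=on_change,
                        qos=cx_Oracle.SUBSCR_QOS_ROWIDS)
subscr.registerquery("select * from DEMO")
time.sleep(300)  # notifications arrive asynchronously while the connection stays open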

I create a DEMO table with one million rows:

17:21:56 SQL> whenever sqlerror exit failure;
17:21:56 SQL> create table DEMO (ID constraint DEMO_ID primary key) as select rownum from xmltable('1 to 1000000');
 
Table DEMO created.
 

And a table to hold notifications. As always when I need a starting example, I get it from oracle-base:

17:21:58 SQL> -- from Tim Hall https://oracle-base.com/articles/10g/dbms_change_notification_10gR2
17:21:58 SQL> CREATE TABLE notifications (
2 id NUMBER,
3 message VARCHAR2(4000),
4 notification_date DATE
5 );
 
Table NOTIFICATIONS created.
 
17:21:58 SQL> CREATE SEQUENCE notifications_seq;
 
Sequence NOTIFICATIONS_SEQ created.

The callback function:

17:21:58 SQL> CREATE OR REPLACE PROCEDURE callback (ntfnds IN SYS.chnf$_desc) IS
2 l_regid NUMBER;
3 l_table_name VARCHAR2(60);
4 l_event_type NUMBER;
5 l_numtables NUMBER;
6 l_operation_type NUMBER;
7 l_numrows NUMBER;
8 l_row_id VARCHAR2(20);
9 l_operation VARCHAR2(20);
10 l_message VARCHAR2(4000) := NULL;
11 BEGIN
12 l_regid := ntfnds.registration_id;
13 l_numtables := ntfnds.numtables;
14 l_event_type := ntfnds.event_type;
15 IF l_event_type = DBMS_CHANGE_NOTIFICATION.EVENT_OBJCHANGE THEN
16 FOR i IN 1 .. l_numtables LOOP
17 l_table_name := ntfnds.table_desc_array(i).table_name;
18 l_operation_type := ntfnds.table_desc_array(i).Opflags;
19 IF (BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.ALL_ROWS) = 0) THEN
20 l_numrows := ntfnds.table_desc_array(i).numrows;
21 ELSE
22 l_numrows :=0; /* ROWID INFO NOT AVAILABLE */
23 END IF;
24 CASE
25 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.INSERTOP) != 0 THEN
26 l_operation := 'Records Inserted';
27 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.UPDATEOP) != 0 THEN
28 l_operation := 'Records Updated';
29 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.DELETEOP) != 0 THEN
30 l_operation := 'Records Deleted';
31 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.ALTEROP) != 0 THEN
32 l_operation := 'Table Altered';
33 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.DROPOP) != 0 THEN
34 l_operation := 'Table Dropped';
35 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.UNKNOWNOP) != 0 THEN
36 l_operation := 'Unknown Operation';
37 ELSE
38 l_operation := '?';
39 END CASE;
40 l_message := 'Table (' || l_table_name || ') - ' || l_operation || '. Rows=' || l_numrows;
41 INSERT INTO notifications (id, message, notification_date)
42 VALUES (notifications_seq.NEXTVAL, l_message, SYSDATE);
43 COMMIT;
44 END LOOP;
45 END IF;
46 END;
47 /
 
Procedure CALLBACK compiled
 
17:21:58 SQL> -- thanks Tim

and the CQN registration:

17:21:58 SQL> -- register on DEMO;
17:21:58 SQL>
17:21:58 SQL> DECLARE
2 reginfo CQ_NOTIFICATION$_REG_INFO;
3 v_cursor SYS_REFCURSOR;
4 regid NUMBER;
5 BEGIN
6 reginfo := cq_notification$_reg_info ( 'callback', DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS, 0, 0, 0);
7 regid := sys.DBMS_CHANGE_NOTIFICATION.new_reg_start(reginfo);
8 OPEN v_cursor FOR
9 SELECT dbms_cq_notification.CQ_NOTIFICATION_QUERYID, demo.* from DEMO;
10 CLOSE v_cursor;
11 sys.DBMS_CHANGE_NOTIFICATION.reg_end;
12 END;
13 /
 
PL/SQL procedure successfully completed.

Now I delete 1 million rows and commit:


17:21:58 SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.
 
17:22:02 SQL>
17:22:02 SQL> -- 1000000 deletes
17:22:02 SQL>
17:22:02 SQL> exec for i in 1..1000000 loop delete from DEMO WHERE id=i; commit; end loop;
 
PL/SQL procedure successfully completed.
 
17:39:23 SQL>
17:39:23 SQL> exec dbms_workload_repository.create_snapshot;

Here are the notifications captured:

17:39:41 SQL> select count(*) from notifications;
COUNT(*)
--------
942741
 
17:39:54 SQL> select * from notifications fetch first 10 rows only;
 
ID MESSAGE NOTIFICATION_DATE
--- ------------------------------------------- -----------------
135 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
138 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
140 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
142 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
145 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
147 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
149 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
152 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
154 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
156 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18

The DML took a long time, and SQL Monitoring shows that 64% of the time was spent waiting on 'Wait for EMON to process ntfns', which is the notification process:
[screenshot: CaptureCQN]

The execution of the delete itself (cdq5w65zk18r1 DELETE FROM DEMO WHERE ID=:B1) is only a small part of the database time. And we have additional load on the database:
[screenshot: CaptureCQN01]

The following is the activity related to Continuous Query Notification message queuing, the part that slows down the modifications, during the delete (from 17:22 to 17:38):

59p1yadp2g6mb call DBMS_AQADM_SYS.REGISTER_DRIVER ( )
gzf71xphapf1b select /*+ INDEX(TAB AQ$_AQ_SRVNTFN_TABLE_1_I) */ tab.rowid, tab.msgid, tab.corrid, tab.priority, tab.delay, tab.expiration ,tab.retry_count, tab.exception_qschema, tab.exception_queue, tab.chain_no, tab.local_order_no, tab.enq_time, tab.time_manager_info, tab.state, tab.enq_tid, tab.step_no, tab.sender_name, tab.sender_address, tab.sender_protocol, tab.dequeue_msgid, tab.user_prop, tab.user_data from "SYS"."AQ_SRVNTFN_TABLE_1" tab where q_name = :1 and (state = :2 ) order by q_name, state, enq_time, step_no, chain_no, local_order_no for update skip locked
61cgh171qq5m6 delete /*+ CACHE_CB("AQ_SRVNTFN_TABLE_1") */ from "SYS"."AQ_SRVNTFN_TABLE_1" where rowid = :1
ccrv58ajb7pxg begin callback(ntfnds => :1); end;
cdq5w65zk18r1 DELETE FROM DEMO WHERE ID=:B1

And at the end (17:38), when the modifications are committed, my callback function runs to process the messages:
[screenshot: CaptureCQN02]
The main query is the insert from the callback function:
8z4m5tw9uh02d INSERT INTO NOTIFICATIONS (ID, MESSAGE, NOTIFICATION_DATE) VALUES (NOTIFICATIONS_SEQ.NEXTVAL, :B1 , SYSDATE)
The callback function could send the changes to another system rather than inserting them here, but then you would have to question availability, and in any case this still has a high overhead in context switches and network roundtrips.

In summary, for 1 million rows deleted, here are the queries that have been executed 1 million times:

Elapsed
Executions Rows Processed Rows per Exec Time (s) %CPU %IO SQL Id
------------ --------------- -------------- ---------- ----- ----- -------------
1,000,000 1,000,000 1.0 123.4 55.2 3.2 cdq5w65zk18r1 Module: java@VM188 (TNS V1-V3) DELETE FROM DEMO WHERE ID=:B1
999,753 999,753 1.0 261.5 88.6 .7 dw9yv631knnqd insert into "SYS"."AQ_SRVNTFN_TABLE_1" (q_name, msgid, corrid, priority, state, delay, expiration, time_manager_info, local_order_no, chain_no, enq_time, step_no, enq_uid, enq_tid, retry_count, exception_qschema, exception_queue, recipient_key, dequeue_msgid, user_data, sender_name, sender_address, sender_protoc
978,351 978,351 1.0 212.5 64.3 0 61cgh171qq5m6 Module: DBMS_SCHEDULER delete /*+ CACHE_CB("AQ_SRVNTFN_TABLE_1") */ from "SYS"."AQ_SRVNTFN_TABLE_1" where rowid = :1
978,248 942,657 1.0 971.6 20 .7 8z4m5tw9uh02d Module: DBMS_SCHEDULER INSERT INTO NOTIFICATIONS (ID, MESSAGE, NOTIFICATION_DATE) VALUES (NOTIFICATIONS_SEQ.NEXTVAL, :B1 , SYSDATE)
978,167 942,559 1.0 1,178.7 33.1 .5 ccrv58ajb7pxg Module: DBMS_SCHEDULER begin callback(ntfnds => :1); end;
977,984 977,809 1.0 73.9 96.5 0 brq600g3299zp Module: DBMS_SCHEDULER SELECT INSTANCE_NUMBER FROM SYS.V$INSTANCE
933,845 978,350 1.0 446.9 51.4 .7 gzf71xphapf1b Module: DBMS_SCHEDULER select /*+ INDEX(TAB AQ$_AQ_SRVNTFN_TABLE_1_I) */ tab.rowid, tab.msgid, tab.corrid, tab.priority, tab.delay, tab.expiration ,tab.retry_count, tab.exception_qschema, tab.exception_queue, tab.chain_no, tab.local_order_no, tab.enq_time, tab.time_manager_info, tab.state, tab.enq_tid, tab.step_no, tab.sender_name

This is a huge overhead. And all this generated 8 million redo entries.

In summary, just forget about CQN to stream changes. This feature is aimed at cache refresh for rarely changing data. What we call 'event sourcing' today has existed for a long time in the database, in the redo logs. When a user executes some DML, Oracle generates the redo records first, stores them, and applies them to update the current version of the table rows. And the redo logs keep the atomicity of the transaction (the 'A' in ACID). So it is better to use them when the changes need to be propagated to other systems.

 

This article, Event Sourcing: CQN is not a replacement for CDC, first appeared on Blog dbi services.

Oracle Database Software Downloads: 18c released

Dietrich Schroff - Mon, 2018-07-02 12:19
Ok - not the database binaries, but the Oracle Database 18c Release 1 Client has been released:

Only for Windows and Linux - not really surprising, but it shows which platforms are well supported ;-)

The documentation can be found here.


The documentation covers Solaris, too - but without binaries this seems a little bit strange:

New OA Framework 12.2.5 Update 20 Now Available

Steven Chan - Mon, 2018-07-02 11:10

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

OAF bundle patches are cumulative: they include all fixes released in previous bundle patches.

This latest patch also provides fixes for the following issues:

  • In attachment image style, adding an attachment fails when a primary key of an entity map has a null value.
  • There is no warning message when a user navigates to the preferences page after making changes to a page that has the property 'Warn About Changes' set to true.
  • FND_STATE_LOSS_ERROR is displayed for Oracle Configurator flow when diagnostic logging is enabled.


Categories: APPS Blogs

Clustering_Factor

Jonathan Lewis - Mon, 2018-07-02 07:24

Here’s another little note on the clustering_factor for an index and the table preference table_cached_blocks that can be set with a call to dbms_stats.set_table_prefs(). I might be repeating a point that someone made in a comment on an older posting but if that’s the case I can’t find the comment at present, and it’s worth its own posting anyway.

The call to dbms_stats.set_table_prefs(null,'{tablename}','table_cached_blocks',N), where N can be any integer between 1 and 255, modifies Oracle's algorithm for calculating the clustering_factor of an index. The default is 1, which often means the clustering_factor is much higher than it ought to be from a human perspective, and leads to Oracle not using an index that could be very effective.

The big point is this: the preference has no effect when you execute a “create index” statement, or an “alter index rebuild” statement. Here’s a simple script to demonstrate the point.


rem
rem     Script:         table_cached_blocks_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem

drop table t1 purge;
create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum-1,100)               n1,
        mod(rownum-1,100)               n2,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator
;

column blocks new_value m_blocks

select  blocks 
from    user_tables
where   table_name = 'T1'
;

column preference_value format a40

select  preference_name, preference_value
from    user_tab_stat_prefs
where
        table_name = 'T1'
;

I’ve created a very simple table of 10,000 rows with two identical columns and captured the number of blocks (which I know will be less than 256) in a substitution variable which I’m going to use in a call to set_table_prefs(). I’ve also run a quick check to show there are no table preferences set for the table. I’ll be running the same check again after setting the table_cached_blocks preference. Step 1 – create two indexes, but set the preference after building the first one; I’ve shown the result of the query against user_indexes immediately after the query:


create index t1_i1 on t1(n1);

execute dbms_stats.set_table_prefs(null,'t1','table_cached_blocks',&m_blocks)

create index t1_i2 on t1(n2);

select
        index_name, clustering_factor, to_char(last_analyzed, 'dd-mon-yyyy hh24:mi:ss') analyzed
from
        user_indexes
where
        table_name = 'T1'
order by
        index_name
;

INDEX_NAME	     CLUSTERING_FACTOR ANALYZED
-------------------- ----------------- -----------------------------
T1_I1				 10000 26-jun-2018 14:13:51
T1_I2				 10000 26-jun-2018 14:13:51


Now we check the effect of rebuilding the t1_i2 index – the one second sleep is so that we can use the last_analyzed time to see that new stats have been created for the index:


execute dbms_lock.sleep(1)
alter index t1_i2 rebuild /* online */ ;

select
        index_name, clustering_factor, to_char(last_analyzed, 'dd-mon-yyyy hh24:mi:ss') analyzed
from
        user_indexes
where
        table_name = 'T1'
order by
        index_name
;

INDEX_NAME	     CLUSTERING_FACTOR ANALYZED
-------------------- ----------------- -----------------------------
T1_I1				 10000 26-jun-2018 14:13:51
T1_I2				 10000 26-jun-2018 14:13:52


Finally we do an explicit gather_index_stats():


execute dbms_lock.sleep(1)
execute dbms_stats.gather_index_stats(null,'t1_i2')

select
        index_name, clustering_factor, to_char(last_analyzed, 'dd-mon-yyyy hh24:mi:ss') analyzed
from
        user_indexes
where
        table_name = 'T1'
order by
        index_name
;

INDEX_NAME	     CLUSTERING_FACTOR ANALYZED
-------------------- ----------------- -----------------------------
T1_I1				 10000 26-jun-2018 14:13:51
T1_I2				   179 26-jun-2018 14:13:53

At last – on the explicit call to gather stats – the table_cached_blocks preference is used.

Dire Threat

Think about what this means: you’ve carefully worked out that a couple of indexes really need a special setting of table_cached_blocks and you gathered stats on those indexes so you have a suitable value for the clustering_factor. Then, one night, someone decides that they’re going to rebuild some of those indexes. The following morning the clustering_factor is much higher and a number of critical execution plans change as a consequence, and don’t revert until the index statistics (which are perfectly up to date!) are re-gathered.

Footnote

The same phenomenon appears even when you’ve set the global preference for stats collection with dbms_stats.set_global_prefs().
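
For completeness, a hedged sketch of scripting that global preference, here from Python via cx_Oracle with placeholder credentials and an arbitrary value of 16 (a plain SQL*Plus execute of the same dbms_stats.set_global_prefs call works equally well):

import cx_Oracle

conn = cx_Oracle.connect("user/password@dsn")  # placeholder credentials
with conn.cursor() as c:
    # The phenomenon above still applies: the preference only takes effect
    # on an explicit stats gathering, not on index creation or rebuild.
    c.callproc("dbms_stats.set_global_prefs", ["table_cached_blocks", "16"])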

Transpose Rows Into Column

Tom Kyte - Mon, 2018-07-02 06:46
Hi, My question is regarding transpose of rows into columns: <code>BANNER_CODE/DIV_CODE/LEG_MATNR/SAP_MATNR/MAKTX/LEG_MATKL/SAP_MATKL/LEG_WHERL/SAP_WHERL/ CS/1/10137/58351/BAKE KING CHOCOLATE RICE 160G/384/10203004/34/SG/1 GH/1/36762/212615/M...
Categories: DBA Blogs

How to Set Up OCI Tenancy for PeopleSoft Cloud Manager – Part II

PeopleSoft Technology Blog - Mon, 2018-07-02 03:43

In the previous blog on setting up tenancy, you learned that a Virtual Cloud Network (VCN) and subnets must be created as a prerequisite for PeopleSoft Cloud Manager Image 6. In Oracle Cloud Infrastructure (OCI), you get the flexibility to configure networking to mimic your on-premises environment.  Networking can be configured in many ways for PeopleSoft environments.  You can have separate subnets for demo, development, testing, pre-production and production environments.  You can also group environments into two subnets, one for production and another for non-production environments.  Alternatively, you can create one subnet each for the database, middle tier components (Application Server, Process Scheduler, and PIA), PeopleSoft Client, Elasticsearch instances and the load balancer.  Another option is to create only three subnets: one for the load balancer, the second for the database and the third for the rest of the instance types.  Subnets can be either public or private.  You can choose to deploy all instances on a private subnet to secure them from the Internet, or put them on a public network where they are accessible from the Internet, or put a few on private and a few on public subnets.  Complete flexibility!

In this blog, let’s take an example of a simple networking architecture. This example network has one VCN with one public subnet and one private subnet.  The public subnet (red dotted line in the illustration below) hosts the Cloud Manager, File Server, middle tier, PeopleSoft (Windows) Clients and Elasticsearch instances.  Database instances are hosted in a private subnet. 

 

First step: create a VCN using the OCI web console by selecting Networking, Virtual Cloud Networks. Let's call it MyVCN.

Note. To locate the commands mentioned here, click the Navigation Menu at the top left of the OCI console home page.

You can choose to create the default subnets by selecting the option Create Virtual Cloud Network plus related resources, or select Create Virtual Cloud Network only to create your own customized subnets.  Let's select the latter option for the purpose of this blog. In the example below, a VCN is created with a CIDR of 10.0.0.0/16.  You can use a CIDR that mimics your on-premises networking too. Read more on VCNs and subnets here to familiarize yourself with the concepts and management of networks.
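
If it helps to visualize the carving, here is a small, hedged Python sketch (standard library only; subnet_of needs Python 3.7+) showing how the example VCN CIDR splits into the two /24 subnets used in this walkthrough:

import ipaddress

vcn = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vcn.subnets(new_prefix=24))  # 256 possible /24 subnets
print(subnets[0])                 # 10.0.0.0/24 - e.g. the database subnet
print(subnets[1])                 # 10.0.1.0/24 - the middle tier subnet below
print(subnets[1].subnet_of(vcn))  # True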

The next important part is creating security lists.  By default, every subnet has a security list.  You can customize the default security list or create your own.  In the sample network illustrated above, there are two security lists – one for the database subnet and another for the middle tier subnet. The security list for the database subnet can have rules as shown below.  The first rule indicates that the database instance can be accessed over the Internet on port 22 for SSH, and the second rule indicates that all network connections arising from the middle tier subnet (10.0.1.0) must be allowed to connect to all ports on instances running in the database subnet.

Now, let’s take a look at the middle tier subnet security list. Here is the summary of the rules:

  • Rule 1 – allow all connections to port 22 (SSH access)
  • Rule 2 – allow all connections to port range 8000 – 8200 (HTTP PIA access)
  • Rule 3 – allow all connections to port range 8443 – 8643 (HTTPS PIA access)
  • Rule 4 – allow all connections between instances within middle tier subnet
  • Rule 5 – allow all connections to port 3389 (RDP to Windows Client)

Also add Egress rules for each subnet as per your requirement.  The example shown below allows connections to all destinations.

Define an internet gateway as shown below.  Also add any route tables if required. More details are available in OCI documentation here.

It is important to ensure that the Cloud Manager and the file server instance have access to all subnets on which Cloud Manager will provision new environments.  The NFS share in the file server, on which DPKs are stored, is mounted on the instances being provisioned, and hence all NFS ports must be opened in the security lists. More details about the VCN, subnets and security lists specific to Cloud Manager are available here.

The security rules in this blog are not very restrictive and serve only as examples to start with.  It is always recommended to restrict access to subnets by adding rules that allow only the required ports.  To learn more about securing access, refer to the OCI documentation here.

Next, create subnets in each Availability Domain.  In the example below, two subnets are created for Availability Domain evQs US-ASHBURN-AD-1.  One subnet is used to host database instances and the other to host the remainder of the instances, such as the middle tier (running Application Server, PIA server and Process Scheduler), Windows client and Elasticsearch.  Create similar subnets in each of the availability domains.  This way, you'll have a uniform network layout in each availability domain.  Ensure that you select the right security list for each subnet.

With this, the tenancy is ready for you to set up PeopleSoft Cloud Manager.  You have set up a compartment, user, policies, VCN, subnets and security lists.  Learn more about installing Cloud Manager here and how to use it here.

How to Set up OCI Tenancy for PeopleSoft Cloud Manager – Part I

PeopleSoft Technology Blog - Mon, 2018-07-02 02:33

In our previous blog, we announced the release of the latest PeopleSoft Cloud Manager Image 6, which includes support for Oracle Cloud Infrastructure (OCI).  Cloud Manager for OCI is designed to work in a simple way.  Let's go through the design aspects that must be considered in your planning.

  • Deploy all instances of a PeopleSoft environment in a single OCI Region
    Cloud Manager can deploy PeopleSoft environments in only one region, the one in which it is running.  When you sign in to Cloud Manager in a browser and access the Cloud Manager Infrastructure Settings page, you see this region listed as “Deployment Region.” When you create or configure an environment template in Cloud Manager, you will see all regions listed, but you must always specify the region chosen as Deployment Region on the Cloud Manager Infrastructure Settings page.
  • Deploy all instances of a PeopleSoft environment in a single Availability Domain (AD) in the chosen Deployment Region
    In environment templates, you can set the AD in which you want to deploy an environment.  All instances are deployed in the chosen AD.  Cloud Manager doesn’t provide the ability to choose different ADs for each instance of the same environment. For example, you will not be able to deploy a Database instance in one AD and middle tier components (Application Server, Process Scheduler, and PIA) or a PeopleSoft Client instance in another AD.
  • Deploy all instances of a PeopleSoft environment in a single compartment
    While defining environment templates, you get an option to choose the compartment in which an environment will be deployed.  All instances of an environment will always be deployed in the selected compartment.  You will not be allowed to deploy instances of the same environment across two or more compartments.  For example, you cannot deploy a Database instance in one compartment and middle tier instances or a PeopleSoft Client instance in another compartment. 
  • Deploy the Cloud Manager instance, and all instances of a PeopleSoft environment, on a single VCN that is dedicated for PeopleSoft environments. 
    The Cloud Manager instance must be deployed in the same VCN as all instances of a PeopleSoft environment.  Under the dedicated VCN, you can create multiple subnets that can be used to isolate individual instances or production and non-production environments.

Let's go through the requirements that satisfy the above-mentioned design considerations.  To provision PeopleSoft environments in OCI using Cloud Manager, you'll need to prepare your tenancy with a few prerequisite configurations. 

Create a compartment

When you get access to your OCI tenancy, you'll have an administrator user only.  You can find an overview of how to set up your tenancy here.

Note. To locate the commands mentioned here, click the Navigation Menu at the top left of the OCI console home page.

Let's start by creating a compartment in which all PeopleSoft environments will be provisioned.  If you want to deploy your development, test or production environments in separate compartments, then create enough compartments for the various environments.  For this blog, let's assume we have created a compartment and named it nkpsft.

Create a cloud user for PeopleSoft

Sign in as the administrator user for OCI, and create a user for Cloud Manager with sufficient privileges to create and manage instances, the Virtual Cloud Network and subnets.  This involves multiple steps as outlined below.   

First, create a Group for Cloud Manager users; for example, CM-Admin.

Next, create a policy, for example CM-Admin-Policy, to grant the Cloud Manager group access to compartment nkpsft.
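
For illustration only, such a policy statement could look like the following hedged example; "manage all-resources" is broad, so narrow the verb and resource types to match your security requirements:

Allow group CM-Admin to manage all-resources in compartment nkpsft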

After that, create a user cloudmgr that will be used in Cloud Manager Settings.

Finally, add the cloudmgr user to the CM-Admin group.  With this, we have set up a compartment and a user with admin privileges to create and manage instances on OCI.

In our next blog, we'll talk about the next important prerequisite: networking.  Cloud Manager Image 6 on OCI does not enforce a flat networking architecture, as it does in OCI-Classic.  Your network admin can design and lay out the required networking architecture.  You can create your own networking layout, and Cloud Manager automation will use it to deploy PeopleSoft environments.  Remember, you'll be creating networking components in the same compartment that you created earlier, so that Cloud Manager has access to them. 

 

Documentum – How to really configure the D2 JMS logs starting with JBoss/WildFly

Yann Neuhaus - Sun, 2018-07-01 15:00

If you have been working with Documentum for quite some time, you are probably familiar with the logback.xml file that can be used to configure the D2 logs. In this blog, I will only talk about the Content Server side of this configuration. As you probably know, Documentum upgraded the JMS to use JBoss 7.1.1 “recently” (several years ago already) and even WildFly 9.0.1 with CS 7.3+. In this blog, I will only use “JBoss” but it refers to both JBoss and WildFly versions.  With these recent versions, the logback.xml file stopped working on Linux environments (I’m not sure about Windows, I only work on Linux). Therefore you will face an issue: the D2 JMS logs cannot really be configured properly by default. Of course you will still be able to configure the JBoss and JMS logs properly, because that is done through the logging.properties file (for the boot.log), through the standalone.xml file (for the server.log) and through the log4j.properties files of each JMS application (ServerApps, ACS, BPM, aso…). But if you are using D2, then all the D2 JMS logs (previously stored in D2-JMS.log) will also be added to the server.log as well as to the console output.

Unfortunately for us, the D2 JMS logs use DEBUG by default for everything, so they might grow into some big files by the end of the day as soon as you have more than XXX concurrent users working. Worse than that, the D2 JMS logs, which are in DEBUG, are considered INFO from the JBoss point of view, and therefore, if you are using JBoss with the INFO log level, it will print all the DEBUG information from the D2 JMS logs. Of course you could set the JBoss level to WARN so it would remove all the DEBUG, but in this case you would also be missing the INFO from both the JBoss and the D2 JMS sides, which might include some pretty important information, like for example the assurance that the D2.Lockbox can be read properly (no problems with the passwords and/or fingerprint).

So what to do about it? Well, there is a JVM parameter that can be used to force the JBoss Server to read and use a specific logback.xml file. For that, simply update the startMethodServer.sh script as shown below. I will use the logback.xml file that is present by default right under ServerApps.ear, and customize it to get the best out of it.

First of all, I’m updating the content to add some things. Here is a template for this file:

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/wildfly9.0.1/server/
[dmadmin@content_server_01 server]$ logback_file="$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/logback.xml"
[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ # Here I'm updating the content of the default file to add custom patterns, log level, console output, aso...
[dmadmin@content_server_01 server]$ vi ${logback_file}
[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ cat ${logback_file}
<?xml version="1.0" encoding="UTF-8"?>

<configuration scan="true" scanPeriod="60 seconds">

  <appender class="ch.qos.logback.core.rolling.RollingFileAppender" name="RootFileAppender">
    <file>/tmp/D2-JMS.log</file>
    <append>true</append>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>debug</level>
    </filter>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/tmp/D2-JMS-%d{yyyy-MM-dd}.log.zip</fileNamePattern>
      <MaxHistory>5</MaxHistory>
    </rollingPolicy>
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%d{"yyyy-MM-dd HH:mm:ss,SSS z"} [%-5p] \(%t\) - %-45(%C{44}) : %m%n</pattern>
    </layout>
  </appender>

  <appender class="ch.qos.logback.core.ConsoleAppender" name="RootConsoleAppender">
    <layout>
      <pattern>[%-5p] - %-45(%C{44}) : %m%n</pattern>
    </layout>
  </appender>

  <root>
    <level value="${logLevel:-info}"/>
    <appender-ref ref="RootFileAppender"/>
    <appender-ref ref="RootConsoleAppender"/>
  </root>

</configuration>
[dmadmin@content_server_01 server]$

 

Then once you have your template logback.xml file, you need to force JBoss to load and use it; otherwise it will just be ignored. As mentioned above, here is the JVM parameter to be added:

[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ grep "JAVA_OPTS=" startMethodServer.sh
JAVA_OPTS="$USER_MEM_ARGS -Djboss.server.base.dir=$JBOSS_BASE_DIR -Dorg.apache.coyote.http11.Http11Protocol.SERVER=MethodServer"
[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ sed -i 's,^JAVA_OPTS="[^"]*,& -Dlogback.configurationFile=$JBOSS_BASE_DIR/deployments/ServerApps.ear/logback.xml,' startMethodServer.sh
[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ grep "JAVA_OPTS=" startMethodServer.sh
JAVA_OPTS="$USER_MEM_ARGS -Djboss.server.base.dir=$JBOSS_BASE_DIR -Dorg.apache.coyote.http11.Http11Protocol.SERVER=MethodServer -Dlogback.configurationFile=$JBOSS_BASE_DIR/deployments/ServerApps.ear/logback.xml"
[dmadmin@content_server_01 server]$

 

Once done, you can customize some values like the path and name of the log file, the number of files to keep, the log level you want to use, aso. Here are some commands to do just that:

[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ d2_log="$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/logs/D2-JMS.log"
[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ # Commands to update some values on this pattern file using the ${d2_log} variable
[dmadmin@content_server_01 server]$ sed -i "s,<file>.*</file>,<file>${d2_log}</file>," ${logback_file}
[dmadmin@content_server_01 server]$ sed -i "s,<fileNamePattern>.*</fileNamePattern>,<fileNamePattern>${d2_log}-%d{yyyy-MM-dd}.zip</fileNamePattern>," ${logback_file}
[dmadmin@content_server_01 server]$ sed -i "s,<MaxHistory>.*</MaxHistory>,<MaxHistory>180</MaxHistory>," ${logback_file}
[dmadmin@content_server_01 server]$ sed -i "s,<level>.*</level>,<level>info</level>," ${logback_file}
[dmadmin@content_server_01 server]$

 

With the above done, you can just restart the JMS, and afterwards you will have a new D2-JMS.log file created at the specified location, with the specified log level.

[dmadmin@content_server_01 server]$ $JMS_HOME/server/stopMethodServer.sh
{"outcome" => "success"}
[dmadmin@content_server_01 server]$
[dmadmin@content_server_01 server]$
[dmadmin@content_server_01 server]$
[dmadmin@content_server_01 server]$ $JMS_HOME/server/startJMS.sh
Starting the JMS...
The JMS process has been started.
[dmadmin@content_server_01 server]$ sleep 30
[dmadmin@content_server_01 server]$ 
[dmadmin@content_server_01 server]$ cat ${d2_log}
2018-06-16 17:16:48,652 UTC [INFO ] (default task-6) - com.emc.d2.api.methods.D2Method               : D2Method Main method com.emc.d2.api.methods.D2SubscriptionMethod arguments: {-user_name=dmadmin, -method_trace_level=0, -docbase_name=Repo1.Repo1, -class_name=com.emc.d2.api.methods.D2SubscriptionMethod, -job_id=080f123450001612}
2018-06-16 17:16:49,668 UTC [INFO ] (default task-6) - com.emc.d2.api.methods.D2Method               : ==== START ======================================================================
2018-06-16 17:16:49,670 UTC [INFO ] (default task-6) - com.emc.d2.api.methods.D2Method               : D2-API v4.7.0070 build 186
2018-06-16 17:16:49,674 UTC [INFO ] (default task-6) - com.emc.d2.api.methods.D2Method               : DFC version : 7.3.0040.0025
2018-06-16 17:16:49,675 UTC [INFO ] (default task-6) - com.emc.d2.api.methods.D2Method               : file.encoding : ANSI_X3.4-1968
2018-06-16 17:16:49,676 UTC [INFO ] (default task-6) - com.emc.d2.api.methods.D2Method               : Arguments : {-user_name=dmadmin, -method_trace_level=0, -docbase_name=Repo1.Repo1, -class_name=com.emc.d2.api.methods.D2SubscriptionMethod, -job_id=080f123450001612}
[dmadmin@content_server_01 server]$

 

Here you have a working D2-JMS.log file with INFO-level information only and no DEBUG.

 

Some tips regarding the logback.xml configuration (I put an example of each in the template configuration file above):

  • If you want to display the full date, time (with milliseconds) and timezone in the logs, you will need to add quotes like this: <pattern>%d{"yyyy-MM-dd HH:mm:ss,SSS z"} …</pattern>. This is because the comma (,) is normally used to separate the time format from the timezone you want to display the logs in (E.g.: %d{HH:mm:ss.SSS, UTC}), but that won't display the timezone in the logs. So if you want the seconds to be separated from the milliseconds using a comma, you need to quote the whole string. If you want the current timezone to be displayed in the logs, you can usually do it using the "z" (with reasonably recent Java versions)
  • By default, you cannot use parentheses in the pattern to enclose parameters (like “%-5p”, “%t”, aso…). This is because parentheses are used to group parameters together to apply formatting to them as part of a group. If you really want to use parentheses on the output, then you have to escape them
  • You can define the minimum length of a specific pattern parameter using the “%-X” where X is the number of characters. Using that, you can align the logs as you want (E.g.: “%-5p” for the log level in 5 chars => “DEBUG”, “INFO “, “WARN “, “ERROR”)
  • You can also shorten a specific pattern parameter using {X} where X is again the number of characters you would want the output string to be reduced to. It is not an exact value but the logger will do its best to reduce the length to what you want.
  • You can use different appenders to redirect the logs to different outputs. Usually you will want a file appender to store everything in a file, but you can also add a console appender so the logs also go to your default console output (be it your shell, a nohup file or the server.log). If you do not want the console appender, so the logs are stored only in the D2-JMS.log file, you can just comment out the line '<appender-ref ref="RootConsoleAppender"/>'

 

You might be wondering why this JVM parameter is not added by default by the Documentum installer since it is a valid solution for this issue, right? Well, I would simply reply that it’s Documentum. ;)

 

 

This article, Documentum – How to really configure the D2 JMS logs starting with JBoss/WildFly, first appeared on Blog dbi services.

Docker: Network configuration: How to customize the network bridge and use my own subnet / netmask / CIDR

Dietrich Schroff - Sun, 2018-07-01 14:33
In my last posting I described how to configure the network settings of a container via the docker command line:
--net none
--net bridge

Now I want to try to change the subnet from the standard 172.17.0.0/16 to another IP range.

There are some tutorials out there which say:

docker run -it  --net bridge  --fixed-cidr "10.100.0.0/24"  alpine /bin/ash
unknown flag: --fixed-cidr
but this does not work any more.

First you have to create a new network:
docker network create --driver=bridge --subnet=10.100.0.0/24  --gateway=10.100.0.1 mybrigde
6249c9a5f6c6f7e36e7e61009b9bde7ac338173d8e222e214a65b9793d36ad6c
Just do a verification:
docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
a00386e6a5bc        bridge              bridge              local
9365e4a966d0        docker_gwbridge     bridge              local
9d9fa338a975        host                host                local
6249c9a5f6c6        mybrigde            bridge              local
9ff819cf7ddb        none                null                local
and here we go:

alpine:~# docker run -it  --network  mybrigde  alpine /bin/ash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:64:00:02 
          inet addr:10.100.0.2  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1156 (1.1 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Removing the network bridge is easy:
docker network rm mybrigde


and narrowing the IP range can be done like this:
alpine:~# docker network create --driver=bridge --subnet=10.100.0.0/24  --ip-range=10.100.0.128/25 --gateway=10.100.0.1 mybrigde
b0ba1d963a6ca3097d083d4f5fd979e0fb0f91f81f1279132ae773c06f821396
Just do a check:
alpine:~# docker run -it  --network  mybrigde  alpine /bin/ash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:64:00:80 
          inet addr:10.100.0.128  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1016 (1016.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The IP address of the container is set to 10.100.0.128, as configured with --ip-range 10.100.0.128/25.

If you are not familiar with CIDR notation, just use this nice online tool (http://www.subnet-calculator.com/cidr.php):
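
Alternatively, here is a small, hedged Python sketch that does the same calculations offline with the standard library ipaddress module:

import ipaddress

net = ipaddress.ip_network("10.100.0.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256 addresses in the block
print(net[1], net[-2])    # usable host range: 10.100.0.1 to 10.100.0.254

ip_range = ipaddress.ip_network("10.100.0.128/25")  # the --ip-range used above
print(ip_range.network_address)  # 10.100.0.128 - matches the container's address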







Documentum – Unable to restart a JMS, failed deployment of bundle.jar

Yann Neuhaus - Sun, 2018-07-01 14:21

While working on a very big Documentum project with several other teams, some people were complaining that the JMS wouldn't start anymore on one of the DEV environments. It is kind of rare to face an issue with the JMS itself (JBoss usually works pretty well…), so I was interested in checking this. This was a Documentum 7.2 environment, but I'm sure it would apply to older as well as newer versions (7.x, 16.4, …). This was the content of the JMS console output:

[dmadmin@content_server_01 ~]$ cat $JMS_HOME/server/nohup-JMS.out
...
2018-06-12 07:15:54,765 UTC INFO  [org.jboss.modules] JBoss Modules version 1.1.1.GA
2018-06-12 07:15:54,968 UTC INFO  [org.jboss.msc] JBoss MSC version 1.0.2.GA
2018-06-12 07:15:55,019 UTC INFO  [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting
2018-06-12 07:15:56,415 UTC INFO  [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http)
2018-06-12 07:15:56,417 UTC INFO  [org.xnio] XNIO Version 3.0.3.GA
2018-06-12 07:15:56,428 UTC INFO  [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA
2018-06-12 07:15:56,437 UTC INFO  [org.jboss.remoting] JBoss Remoting version 3.2.3.GA
2018-06-12 07:15:56,448 UTC INFO  [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers
2018-06-12 07:15:56,478 UTC INFO  [org.jboss.as.configadmin] (ServerService Thread Pool -- 27) JBAS016200: Activating ConfigAdmin Subsystem
2018-06-12 07:15:56,486 UTC INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem.
2018-06-12 07:15:56,510 UTC INFO  [org.jboss.as.osgi] (ServerService Thread Pool -- 40) JBAS011940: Activating OSGi Subsystem
2018-06-12 07:15:56,518 UTC INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 39) JBAS011800: Activating Naming Subsystem
2018-06-12 07:15:56,527 UTC INFO  [org.jboss.as.security] (ServerService Thread Pool -- 45) JBAS013101: Activating Security Subsystem
2018-06-12 07:15:56,533 UTC INFO  [org.jboss.as.security] (MSC service thread 1-9) JBAS013100: Current PicketBox version=4.0.7.Final
2018-06-12 07:15:56,535 UTC INFO  [org.jboss.as.connector] (MSC service thread 1-4) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final)
2018-06-12 07:15:56,548 UTC INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 49) JBAS015537: Activating WebServices Extension
2018-06-12 07:15:56,585 UTC INFO  [org.jboss.as.naming] (MSC service thread 1-1) JBAS011802: Starting Naming Service
2018-06-12 07:15:56,594 UTC INFO  [org.jboss.as.mail.extension] (MSC service thread 1-12) JBAS015400: Bound mail session 
2018-06-12 07:15:56,688 UTC INFO  [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-8) JBoss Web Services - Stack CXF Server 4.0.2.GA
2018-06-12 07:15:56,727 UTC INFO  [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-10) Starting Coyote HTTP/1.1 on http--0.0.0.0-9080
2018-06-12 07:15:56,821 UTC INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-13) live server is starting with configuration HornetQ Configuration (clustered=false,backup=false,sharedStore=true,journalDirectory=$JMS_HOME/server/DctmServer_MethodServer/data/messagingjournal,bindingsDirectory=$JMS_HOME/server/DctmServer_MethodServer/data/messagingbindings,largeMessagesDirectory=$JMS_HOME/server/DctmServer_MethodServer/data/messaginglargemessages,pagingDirectory=$JMS_HOME/server/DctmServer_MethodServer/data/messagingpaging)
2018-06-12 07:15:56,827 UTC INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-13) Waiting to obtain live lock
2018-06-12 07:15:56,852 UTC INFO  [org.hornetq.core.persistence.impl.journal.JournalStorageManager] (MSC service thread 1-13) Using AIO Journal
2018-06-12 07:15:56,923 UTC INFO  [org.hornetq.core.server.impl.AIOFileLockNodeManager] (MSC service thread 1-13) Waiting to obtain live lock
2018-06-12 07:15:56,924 UTC INFO  [org.hornetq.core.server.impl.AIOFileLockNodeManager] (MSC service thread 1-13) Live Server Obtained live lock
2018-06-12 07:15:56,981 UTC INFO  [org.jboss.as.remoting] (MSC service thread 1-5) JBAS017100: Listening on 0.0.0.0/0.0.0.0:9092
2018-06-12 07:15:56,982 UTC INFO  [org.jboss.as.remoting] (MSC service thread 1-14) JBAS017100: Listening on /127.0.0.1:9084
2018-06-12 07:15:56,996 UTC INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-10) JBAS015012: Started FileSystemDeploymentService for directory $JMS_HOME/server/DctmServer_MethodServer/deployments
2018-06-12 07:15:57,019 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment bundle.jar
2018-06-12 07:15:57,020 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment bundle.jar
2018-06-12 07:15:57,021 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment bundle.jar
2018-06-12 07:15:57,023 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment bundle.jar
2018-06-12 07:15:57,024 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment bundle.jar
2018-06-12 07:15:57,025 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment bundle.jar
2018-06-12 07:15:57,026 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
2018-06-12 07:15:57,027 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found XhiveConnector.ear in deployment directory. To trigger deployment create a file called XhiveConnector.ear.dodeploy
2018-06-12 07:15:57,028 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found bpm.ear in deployment directory. To trigger deployment create a file called bpm.ear.dodeploy
2018-06-12 07:15:57,029 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
2018-06-12 07:15:57,090 UTC INFO  [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-2) Starting Coyote HTTP/1.1 on http-0.0.0.0-0.0.0.0-9082
2018-06-12 07:15:57,235 UTC INFO  [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-13) Started Netty Acceptor version 3.2.5.Final-a96d88c 0.0.0.0:9090 for CORE protocol
2018-06-12 07:15:57,237 UTC INFO  [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-13) Started Netty Acceptor version 3.2.5.Final-a96d88c 0.0.0.0:9091 for CORE protocol
2018-06-12 07:15:57,239 UTC INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-13) Server is now live
2018-06-12 07:15:57,239 UTC INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-13) HornetQ Server version 2.2.13.Final (HQ_2_2_13_FINAL_AS7, 122) [b774a781-a4da-11e6-a1e8-005056082847]) started
2018-06-12 07:15:57,256 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-14) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory
2018-06-12 07:15:57,257 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-14) JBAS011601: Bound messaging object to jndi name java:/RemoteConnectionFactory
2018-06-12 07:15:57,258 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-4) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory
2018-06-12 07:15:57,261 UTC INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-9) trying to deploy queue jms.queue.touchRpcQueue
2018-06-12 07:15:57,267 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-9) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/queue/touchRpcQueue
2018-06-12 07:15:57,268 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-9) JBAS011601: Bound messaging object to jndi name java:/queue/touchRpcQueue
2018-06-12 07:15:57,269 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-3) JBAS011601: Bound messaging object to jndi name java:/TouchRpcQueueConnectionFactory
2018-06-12 07:15:57,270 UTC INFO  [org.jboss.as.messaging] (MSC service thread 1-3) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/TouchRpcQueueConnectionFactory
2018-06-12 07:15:57,301 UTC INFO  [org.jboss.as.deployment.connector] (MSC service thread 1-8) JBAS010406: Registered connection factory java:/JmsXA
2018-06-12 07:15:57,311 UTC INFO  [org.hornetq.ra.HornetQResourceAdapter] (MSC service thread 1-8) HornetQ resource adaptor started
2018-06-12 07:15:57,312 UTC INFO  [org.jboss.as.connector.services.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-8) IJ020002: Deployed: file://RaActivatorhornetq-ra
2018-06-12 07:15:57,316 UTC INFO  [org.jboss.as.deployment.connector] (MSC service thread 1-4) JBAS010401: Bound JCA ConnectionFactory 
2018-06-12 07:15:57,343 UTC ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 2) Operation ("add") failed - address: ([("deployment" => "bundle.jar")]) - failure description: "JBAS014803: Duplicate resource [(\"deployment\" => \"bundle.jar\")]"
2018-06-12 07:15:57,348 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back
2018-06-12 07:15:57,349 UTC INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9085
2018-06-12 07:15:57,350 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back
2018-06-12 07:15:57,350 UTC INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 2885ms - Started 148 of 226 services (78 services are passive or on-demand)
2018-06-12 07:15:57,351 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back
2018-06-12 07:15:57,353 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back
2018-06-12 07:15:57,354 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back
2018-06-12 07:15:57,355 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) {"JBAS014653: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-1" => "JBAS014803: Duplicate resource [(\"deployment\" => \"bundle.jar\")]"}}
2018-06-12 07:15:57,357 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) undefined
2018-06-12 07:15:57,358 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) undefined
2018-06-12 07:15:57,359 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) undefined
2018-06-12 07:15:57,359 UTC ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) undefined

 

This was linked to the deployments, so as a first step I tried to force a redeploy of all Documentum applications. This can be done as shown below. In such cases, it is better to stop the JMS first, otherwise it will try to redeploy the applications on the fly, which might work, but there is no guarantee they will start again on the next startup… So here are the steps:

[dmadmin@content_server_01 ~]$ cd $JMS_HOME/server/DctmServer_MethodServer/deployments
[dmadmin@content_server_01 deployments]$ 
[dmadmin@content_server_01 deployments]$ find . -maxdepth 1 -name "*.failed"
./acs.ear.failed
./ServerApps.ear.failed
./error.war.failed
./bpm.ear.failed
[dmadmin@content_server_01 deployments]$ 
[dmadmin@content_server_01 deployments]$ 
[dmadmin@content_server_01 deployments]$ for i in `find . -maxdepth 1 -name "*.failed"`; do name=`echo ${i} | sed 's,.failed,.dodeploy,'`; mv ${i} ${name}; done
[dmadmin@content_server_01 deployments]$

 

Then you can just start again the JMS:

[dmadmin@content_server_01 deployments]$ $JMS_HOME/server/startJMS.sh
Starting the JMS...
The JMS process has been started.
[dmadmin@content_server_01 deployments]$

 

Unfortunately in this case, the JMS logs were similar and the Documentum applications were still in a failed state. Looking closer at these logs, the problem seemed to be related to a bundle.jar file:

[dmadmin@content_server_01 deployments]$ find . -name bundle.jar
./felix-cache/bundle3/version0.0/bundle.jar
./felix-cache/bundle5/version0.0/bundle.jar
./felix-cache/bundle1/version0.0/bundle.jar
./felix-cache/bundle4/version0.0/bundle.jar
./felix-cache/bundle6/version0.0/bundle.jar
./felix-cache/bundle2/version0.0/bundle.jar
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ ls felix-cache/*
felix-cache/bundle0:
bundle.id

felix-cache/bundle1:
bundle.id  bundle.lastmodified  bundle.location  bundle.startlevel  bundle.state  version0.0

felix-cache/bundle2:
bundle.id  bundle.lastmodified  bundle.location  bundle.startlevel  bundle.state  version0.0

felix-cache/bundle3:
bundle.id  bundle.lastmodified  bundle.location  bundle.startlevel  bundle.state  version0.0

felix-cache/bundle4:
bundle.id  bundle.lastmodified  bundle.location  bundle.startlevel  bundle.state  version0.0

felix-cache/bundle5:
bundle.id  bundle.lastmodified  bundle.location  bundle.startlevel  bundle.state  version0.0

felix-cache/bundle6:
bundle.id  bundle.lastmodified  bundle.location  bundle.startlevel  bundle.state  version0.0
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ for i in `ls felix-cache/`; do echo "felix-cache/${i}:"; cat felix-cache/${i}/bundle.location; echo; echo; done
felix-cache/bundle0:
cat: felix-cache/bundle0/bundle.location: No such file or directory


felix-cache/bundle1:
file:$JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/HttpService.jar

felix-cache/bundle2:
file:$JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/HttpServiceImpl.jar

felix-cache/bundle3:
file:$JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/Common.jar

felix-cache/bundle4:
file:$JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/Web.jar

felix-cache/bundle5:
file:$JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/Jmx.jar

felix-cache/bundle6:
file:$JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/org.apache.felix.bundlerepository-1.2.1.jar

[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ diff felix-cache/bundle1/version0.0/bundle.jar $JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/HttpService.jar
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ diff felix-cache/bundle3/version0.0/bundle.jar $JMS_HOME/server/DctmServer_MethodServer/deployments/acs.ear/lib/Common.jar
[dmadmin@content_server_01 deployments]$

 

This felix-cache is normally created by the ACS when you start it, and it is placed in the folder from which you start the JMS. In this case, it was under the deployments folder, probably because someone started the JMS using the startMethodServer.sh script directly instead of our custom start script (startJMS.sh, as you can see above and below), which takes care of that and switches to the correct folder first (with the proper nohup, and so on…). Because the felix-cache was created under the deployments folder, the JMS was trying to deploy it like any other application, and so it failed. As you can see above, the file “bundle.jar” (in the different folders) is actually a copy of some of the ACS library files – the ones mentioned in bundle.location – that the ACS put into the felix-cache.
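
The actual content of our custom start script isn’t shown in this blog, but the important part is easy to reproduce. A minimal sketch of such a wrapper could look like the following (the exact $JMS_HOME layout and the nohup-JMS.out file name are assumptions based on this environment):

#!/bin/bash
# Minimal sketch of a JMS start script - not the actual startJMS.sh.
# The key point: cd to a folder OUTSIDE the deployments folder first, so
# the ACS creates its felix-cache where the deployment scanner won't see it.
cd ${JMS_HOME}/server
echo "Starting the JMS..."
nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
echo "The JMS process has been started."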

As you know, with a newly installed JMS for Documentum, there is no felix-cache folder under deployments. What you can find there are the ServerApps, ACS or BPM applications for example, but that’s pretty much it. Therefore, to solve this issue, simply remove the felix-cache folder, force a new deployment of all Documentum applications and restart the JMS properly; it should then be good:

[dmadmin@content_server_01 deployments]$ $JMS_HOME/server/stopMethodServer.sh; sleep 5; ps -ef | grep MethodServer
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ rm -rf felix-cache/
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ find . -maxdepth 1 -name "*.failed"
./acs.ear.failed
./ServerApps.ear.failed
./error.war.failed
./bpm.ear.failed
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ for i in `find . -maxdepth 1 -name "*.failed"`; do name=`echo ${i} | sed 's,.failed,.dodeploy,'`; mv ${i} ${name}; done
[dmadmin@content_server_01 deployments]$
[dmadmin@content_server_01 deployments]$ $JMS_HOME/server/startJMS.sh
Starting the JMS...
The JMS process has been started.
[dmadmin@content_server_01 deployments]$

 

Then checking if the issue is gone and if all the applications are now properly deployed:

[dmadmin@content_server_01 deployments]$ grep -E " deployment | Deployed " $JMS_HOME/server/nohup-JMS.out
2018-06-12 07:30:15,142 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
2018-06-12 07:30:15,144 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found XhiveConnector.ear in deployment directory. To trigger deployment create a file called XhiveConnector.ear.dodeploy
2018-06-12 07:30:15,145 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found bpm.ear in deployment directory. To trigger deployment create a file called bpm.ear.dodeploy
2018-06-12 07:30:15,146 UTC INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
2018-06-12 07:30:15,503 UTC INFO  [org.jboss.as.connector.services.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-1) IJ020002: Deployed: file://RaActivatorhornetq-ra
2018-06-12 07:30:15,535 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015876: Starting deployment of "bpm.ear"
2018-06-12 07:30:15,535 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-13) JBAS015876: Starting deployment of "ServerApps.ear"
2018-06-12 07:30:15,535 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "error.war"
2018-06-12 07:30:15,535 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-11) JBAS015876: Starting deployment of "acs.ear"
2018-06-12 07:30:17,824 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-2) JBAS015876: Starting deployment of "documentum-bocs-ws.war"
2018-06-12 07:30:17,825 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-15) JBAS015876: Starting deployment of "bocs.war"
2018-06-12 07:30:18,093 UTC INFO  [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/ACS]] (MSC service thread 1-16) Initializing CORS filter as per the deployment descriptor
2018-06-12 07:30:18,608 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-11) JBAS015876: Starting deployment of "DmMethods.war"
2018-06-12 07:30:18,608 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-9) JBAS015876: Starting deployment of "DmMail.war"
2018-06-12 07:30:18,904 UTC INFO  [org.jboss.as.server.deployment] (MSC service thread 1-11) JBAS015876: Starting deployment of "bpm.war"
2018-06-12 07:30:32,017 UTC INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "bpm.ear"
2018-06-12 07:30:32,019 UTC INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "error.war"
2018-06-12 07:30:32,020 UTC INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "acs.ear"
2018-06-12 07:30:32,021 UTC INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "ServerApps.ear"
[dmadmin@content_server_01 deployments]$

 

So take care when you start the JMS: if you do not have a custom script (it’s pretty much mandatory for the JMS!) or if you do not change the working directory before executing the startMethodServer.sh script, you might be in for some surprises.
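
If you want to catch this situation early, a quick check before any start doesn’t hurt; a simple sketch:

[dmadmin@content_server_01 ~]$ # Warn if a stray felix-cache ended up under the deployments folder
[dmadmin@content_server_01 ~]$ if [ -d $JMS_HOME/server/DctmServer_MethodServer/deployments/felix-cache ]; then echo "WARNING: stray felix-cache under deployments, remove it before starting the JMS"; fi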

 

Edit: After writing this blog, I searched the OTX website for something related to this and found that there is a KB (KB7795388) about it, but I think this blog still makes sense because it provides more information and some explanation. That’s a bad habit of mine: I usually try fixing things myself before checking the OTX website, because I don’t like it and the less I use it, the better.

 

 

The article Documentum – Unable to restart a JMS, failed deployment of bundle.jar appeared first on Blog dbi services.

Windows Server – Service not starting with ‘Error 1067: The process terminated unexpectedly’ – again

Yann Neuhaus - Sun, 2018-07-01 14:10

Some months ago, I wrote a blog about a Windows Service not starting up with the error 1067. The first time I faced this issue, it was on an fmeAG Migration Center installation and, as said in that previous blog, I’m not an expert in this software, so I started working on the Windows side of the issue and found a workaround, which I explained there. A few weeks ago, I faced the exact same issue on another environment. Since I had a few hours available at that time and since I was much more familiar with the fmeAG Migration Center by then, I tried to really find the root cause and not just apply a – kind of stupid – workaround. You guessed it, I found it, otherwise I wouldn’t have written this blog in the first place…

So first of all, the Windows Service that couldn’t start is the “Migration Center Job Server”, which uses the wrapper.exe found in the installation folder (e.g.: D:\fmeAG\migration-center Server Components 3.3\). In this folder, there are a wrapper.log as well as a wrapper.conf, so that looked like a good starting point. This is what I could find in the logs (I cut most of the lines that aren’t needed):

STATUS | wrapper  | 2018/06/14 12:46:47 | --> Wrapper Started as Service
STATUS | wrapper  | 2018/06/14 12:46:48 | Launching a JVM...
INFO   | jvm 1    | 2018/06/14 12:46:48 | Usage: java [-options] class [args...]
INFO   | jvm 1    | 2018/06/14 12:46:48 |            (to execute a class)
INFO   | jvm 1    | 2018/06/14 12:46:48 |    or  java [-options] -jar jarfile [args...]
INFO   | jvm 1    | 2018/06/14 12:46:48 |            (to execute a jar file)
INFO   | jvm 1    | 2018/06/14 12:46:48 | where options include:
INFO   | jvm 1    | 2018/06/14 12:46:48 |     -d32	  use a 32-bit data model if available
INFO   | jvm 1    | 2018/06/14 12:46:48 |     -d64	  use a 64-bit data model if available
...
INFO   | jvm 1    | 2018/06/14 12:46:48 | See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.
ERROR  | wrapper  | 2018/06/14 12:46:48 | JVM exited while loading the application.
STATUS | wrapper  | 2018/06/14 12:46:52 | Launching a JVM...
...
ERROR  | wrapper  | 2018/06/14 12:46:53 | JVM exited while loading the application.
STATUS | wrapper  | 2018/06/14 12:46:57 | Launching a JVM...
...
ERROR  | wrapper  | 2018/06/14 12:46:58 | JVM exited while loading the application.
STATUS | wrapper  | 2018/06/14 12:47:02 | Launching a JVM...
...
ERROR  | wrapper  | 2018/06/14 12:47:02 | JVM exited while loading the application.
STATUS | wrapper  | 2018/06/14 12:47:07 | Launching a JVM...
...
ERROR  | wrapper  | 2018/06/14 12:47:07 | JVM exited while loading the application.
FATAL  | wrapper  | 2018/06/14 12:47:07 | There were 5 failed launches in a row, each lasting less than 300 seconds.  Giving up.
FATAL  | wrapper  | 2018/06/14 12:47:07 |   There may be a configuration problem: please check the logs.
STATUS | wrapper  | 2018/06/14 12:47:07 | <-- Wrapper Stopped

 

This actually looked quite interesting… The fact that the java “help” is displayed when the JVM starts would tend to show that the start command isn’t correct or that there is something wrong with it. I was able to start a JVM using the Windows command line tools, so it wasn’t an issue with Java itself. As a result, I checked the wrapper.conf shipped with the software. In our installation, we only slightly updated this configuration file to add custom JVM parameters, but this wasn’t the issue (I still checked with the default file to be sure). There was no issue with the content of this file or with its formatting, but there was still something useful in it: the possibility to change the log level. These are the relevant settings:

# Log Level for console output.  (See docs for log levels)
wrapper.console.loglevel=ERROR

# Log file to use for wrapper output logging.
wrapper.logfile=./wrapper.log

# Log Level for log file output.  (See docs for log levels)
wrapper.logfile.loglevel=INFO
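
If a Unix-like shell is available on the server (Git Bash, Cygwin, …), the switch can even be scripted; a minimal sketch, assuming you are in the installation folder mentioned above:

# Backup the configuration, then raise the log file output from INFO to DEBUG
cp wrapper.conf wrapper.conf.bak
sed -i 's/^wrapper.logfile.loglevel=INFO$/wrapper.logfile.loglevel=DEBUG/' wrapper.conf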

 

So to see more information in the log file, you simply have to switch “wrapper.logfile.loglevel” from INFO to DEBUG. After doing that, the logs were clearer:

STATUS | wrapper  | 2018/06/14 12:53:10 | --> Wrapper Started as Service
DEBUG  | wrapper  | 2018/06/14 12:53:10 | Using tick timer.
DEBUG  | wrapperp | 2018/06/14 12:53:10 | server listening on port 32000.
STATUS | wrapper  | 2018/06/14 12:53:10 | Launching a JVM...
DEBUG  | wrapper  | 2018/06/14 12:53:10 | command: "D:\Java\jdk1.8.0_171\bin\java.exe" -Xss512k -DdocumentDirectory.home="%DOCUMENTDIRECTORY_HOME%" -Dclb.library.path=.\lib\mc-d2-importer\LockBox\lib\native\win_vc100_ia32 -Duser.timezone=UTC -Xms512m -Xmx1536m -Djava.library.path=".;./lib/mc-outlook-adaptor;./lib/mc-domino-scanner/lib;D:\DFC\DFC_7.3\Shared;D:\Oracle\instantclient_12_2;D:\Java\jdk1.8.0_171\bin;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\1E\NomadBranch"" -classpath "wrappertest.jar;wrapper.jar;./lib/mc-common/mc-common-3.3.jar;./lib/mc-api/mc-api-3.3.jar;./lib/mc-api/ojdbc7.jar;./lib/mc-api/orai18n.jar;./lib/mc-api/runtime12.jar;./lib/mc-api/translator.jar;./lib/mc-server;./lib/mc-server/log4j-1.2.17.jar;./lib/mc-server/mc-server-3.3.jar;./lib/mc-dctm-adaptor;./lib/mc-dctm-adaptor/mc-dctm-adaptor-3.3.jar;./lib/mc-d2-importer;./lib/mc-d2-importer/C2-API.jar;./lib/mc-d2-importer/C6-Common-4.7.0.jar;./lib/mc-d2-importer/commons-collections-3.2.jar;./lib/mc-d2-importer/commons-compress-1.5.jar;./lib/mc-d2-importer/commons-io-1.4.jar;./lib/mc-d2-importer/commons-lang-2.4.jar;./lib/mc-d2-importer/D2-API-4.7.0.jar;./lib/mc-d2-importer/D2-BOCS-65.jar;./lib/mc-d2-importer/D2-Specifications-API.jar;./lib/mc-d2-importer/D2-Specifications.jar;./lib/mc-d2-importer/D2BofServices-4.7.0.jar;./lib/mc-d2-importer/D2FS-Generated-4.7.0.jar;./lib/mc-d2-importer/D2FS4DCTM-API-4.7.0.jar;./lib/mc-d2-importer/dfc.jar;./lib/mc-d2-importer/diff-0.4.2.jar;./lib/mc-d2-importer/ehcache-core-1.7.2.jar;./lib/mc-d2-importer/emc-dfs-rt.jar;./lib/mc-d2-importer/emc-dfs-services.jar;./lib/mc-d2-importer/gwt-servlet-2.5.1.jar;./lib/mc-d2-importer/logback-classic-0.9.18.jar;./lib/mc-d2-importer/logback-core-0.9.18.jar;./lib/mc-d2-importer/mail.jar;./lib/mc-d2-importer/mc-d2-importer-3.3.jar;./lib/mc-d2-importer/poi-3.6-20091214.jar;./lib/mc-d2-importer/poi-contrib-3.6-20091214.jar;./lib/mc-d2-importer/slf4j-api-1.5.10.jar;./lib/mc-dcm-importer;./lib/mc-dcm-importer/dcm.jar;./lib/mc-dcm-importer/dcmibof.jar;./lib/mc-dcm-importer/dcmproperties.jar;./lib/mc-dcm-importer/dcmresource.jar;./lib/mc-dcm-importer/DmcRecords.jar;./lib/mc-dcm-importer/mc-dcm-importer-3.3.jar;./lib/mc-dcm-importer/pss.jar;./lib/mc-otcs-common/activation.jar;./lib/mc-otcs-common/aspectjrt.jar;./lib/mc-otcs-common/commons-lang3-3.3.2.jar;./lib/mc-otcs-common/jaxb-api.jar;./lib/mc-otcs-common/jaxb-impl.jar;./lib/mc-otcs-common/jaxws-api.jar;./lib/mc-otcs-common/jaxws-rt.jar;./lib/mc-otcs-common/jsr173_api.jar;./lib/mc-otcs-common/jsr181-api.jar;./lib/mc-otcs-common/jsr250-api.jar;./lib/mc-otcs-common/mimepull.jar;./lib/mc-otcs-common/resolver.jar;./lib/mc-otcs-common/saaj-api.jar;./lib/mc-otcs-common/saaj-impl.jar;./lib/mc-otcs-common/stax-ex.jar;./lib/mc-otcs-common/streambuffer.jar;./lib/mc-d2-importer/LockBox;./lib/mc-firstdoc-importer;D:\Documentum\config;D:\DFC\DFC_7.3\dctm.jar" -Dwrapper.key="eFO3zr2BRv874Qb4" -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.debug="TRUE" -Dwrapper.pid=4832 -Dwrapper.version="3.2.3" -Dwrapper.native_library="wrapper" -Dwrapper.cpu.timeout="10" -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp de.fme.mc.server.Main
DEBUG  | wrapper  | 2018/06/14 12:53:10 | JVM started (PID=4756)
INFO   | jvm 1    | 2018/06/14 12:53:10 | Usage: java [-options] class [args...]
INFO   | jvm 1    | 2018/06/14 12:53:10 |            (to execute a class)
INFO   | jvm 1    | 2018/06/14 12:53:10 |    or  java [-options] -jar jarfile [args...]
INFO   | jvm 1    | 2018/06/14 12:53:10 |            (to execute a jar file)
INFO   | jvm 1    | 2018/06/14 12:53:10 | where options include:
INFO   | jvm 1    | 2018/06/14 12:53:10 |     -d32	  use a 32-bit data model if available
INFO   | jvm 1    | 2018/06/14 12:53:10 |     -d64	  use a 64-bit data model if available
...
INFO   | jvm 1    | 2018/06/14 12:53:10 | See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.
ERROR  | wrapper  | 2018/06/14 12:53:10 | JVM exited while loading the application.
...
FATAL  | wrapper  | 2018/06/14 12:53:30 | There were 5 failed launches in a row, each lasting less than 300 seconds.  Giving up.
FATAL  | wrapper  | 2018/06/14 12:53:30 |   There may be a configuration problem: please check the logs.
STATUS | wrapper  | 2018/06/14 12:53:30 | <-- Wrapper Stopped

 

From the beginning, this looked like an issue with the java command being executed, so I took a close look at it and found that there was indeed something wrong. If you have good eyes (and took the time to scroll a little bit), you can see that at some point there are two consecutive double quotes ( “” ) closing the “-Djava.library.path” JVM parameter. As a result, the following parameter, -classpath, isn’t taken into account properly and the whole command ends up wrongly formatted…
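
To make it easier to spot, here is the relevant fragment of the command above, heavily shortened, first as logged and then as it should have been:

# As logged: the stray quote inherited from the environment closes -Djava.library.path a second time
-Djava.library.path=".;...;C:\Program Files\1E\NomadBranch"" -classpath "wrappertest.jar;..."

# Expected: a single closing double quote before -classpath
-Djava.library.path=".;...;C:\Program Files\1E\NomadBranch" -classpath "wrappertest.jar;..."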

The value used for the “-Djava.library.path” JVM parameter also comes from the wrapper.conf file: in this file, you can find the “wrapper.java.library.path.X=” lines, where X is a number starting at 1, and each of these values is concatenated (separated with semicolons) to form the final value. By default, the last of these lines has “%PATH%” as its value, which is replaced at runtime with the actual value of this environment variable. Since the issue was identified as coming from the double quote at the end of “-Djava.library.path”, it was therefore safe to assume that the issue was inside the %PATH% definition…
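
As an illustration of this mechanism, here are some hypothetical entries (the real file contains more of them, see the full value in the log above):

# Hypothetical wrapper.conf entries; the values are concatenated with
# semicolons, in order, to build the final -Djava.library.path value
wrapper.java.library.path.1=.
wrapper.java.library.path.2=./lib/mc-outlook-adaptor
wrapper.java.library.path.3=D:\DFC\DFC_7.3\Shared
wrapper.java.library.path.4=%PATH%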

Checking its value using the command prompt didn’t show anything strange, but the Control Panel confirmed my suspicions: the declaration of the %PATH% environment variable for the current user ended with a double quote ( ” ) while there was no double quote at the start of the declaration. After removing it, the Service was able to start successfully. Investigating the root cause, it appeared that this double quote was actually coming from an issue with the Windows Server delivery tool, which wrongly set this environment variable during the server delivery. This also explained why the workaround I described in the previous blog worked: it cleaned the environment variables for this user (there was no specific declaration in this user’s %PATH%, it was only the default Windows stuff).

 

 

The article Windows Server – Service not starting with ‘Error 1067: The process terminated unexpectedly’ – again appeared first on Blog dbi services.

ACFS not supported in Oracle Linux 7 though Oracle documentation says it's supported

Tom Kyte - Sun, 2018-07-01 12:26
Hi tom, In a rac environment with grid infrastructure i am trying to configure acfs. But i am getting following error. ACFS-9459: ADVM/ACFS is not supported on this OS version: 'unknown' ACFS-9201: Not Supported Blogs says i need to appl...
Categories: DBA Blogs

Creating linguistic indexes for CANADIAN FRENCH

Tom Kyte - Sun, 2018-07-01 12:26
When creating a linguistic index, I am not able to specify CANADIAN FRENCH. Oracle reports that the NLS parameter string is invalid. I suspect that it's because there is a space in it, but the answer eludes me. Here is a short example of a script ...
Categories: DBA Blogs

ora_rowscn - is it always incremental?

Tom Kyte - Sun, 2018-07-01 12:26
Hello, I want to sqoop data out of my Oracle 11.2 database on a daily basis. However, I want to do only incremental extracts. Apparently, scn_to_timestamp doesn't always work due to ORA-08181: specified number is not a valid system change number...
Categories: DBA Blogs
