Feed aggregator

Oracle Big Data Lite VM 4.6.0 available!

Big Data Lite 4.6.0 is now available on OTN. The VM is packed with all of the latest capabilities from the big data platform: Oracle Enterprise Linux 6.8, Oracle Database 12c...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Is Oracle E-Business Suite FIPS 140 Certified?

Steven Chan - Wed, 2016-11-02 02:05

Customers frequently ask whether Oracle E-Business Suite is certified with the FIPS standards. 

The Federal Information Processing Standards (FIPS) Publication 140-2 defines US Federal standards for cryptographic-based security systems. FIPS Publications are issued by the National Institute of Standards and Technology (NIST).

From our Oracle E-Business Suite Security FAQ (Note 2063486.1):

The cryptographic modules in Oracle E-Business Suite currently cannot be considered FIPS certified. Some elements of the Oracle E-Business Suite Release 12.1.3 technology stack have been FIPS certified; however, some of the cryptographic libraries that are used by Oracle E-Business Suite have not been FIPS certified. Additionally, some of the cryptographic libraries that have been FIPS certified have had patches issued since certification, which technically takes them out of compliance.

Specifically, the cryptographic libraries that Oracle HTTP Server (OHS) uses for SSL/TLS traffic were FIPS certified. However, there have been a variety of security and non-security related patches in that area that technically take it out of compliance, and it has not been recertified since those patches have been issued. Oracle E-Business Suite also makes use of other cryptographic libraries for a variety of usages that have not been FIPS certified. Oracle does not currently plan to certify all of the cryptographic libraries currently used by Oracle E-Business Suite.


Categories: APPS Blogs

Documentum story – Documentum JMS Log Configuration

Yann Neuhaus - Wed, 2016-11-02 01:00

The aim of this blog is to provide a way to configure the JMS logs so that the logging of all applications is aligned regarding date information, log rotation and retention. Some changes have to be done on the JBoss container as well as on the log4j configuration of each deployed JMS application (acs.ear, ServerApps.ear and bpm.ear).

General configuration

First, go to the JMS configuration at $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
The JBoss version may vary depending on your Content Server version.
Back up the standalone.xml file as follows:

cp standalone.xml standalone.xml_$(date +%Y%m%d).log

Then edit the file standalone.xml by replacing each pattern-formatter with the following configuration:

<pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %s%E%n"/>

Note that you can adjust this pattern, which changes how the log entries will look, but try to stay consistent with other environments and components.
Now go to application deployments: $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
Once again, depending on your Content Server version, you may have to go into deploy instead of deployments.
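
If you want to double-check the change before restarting anything, a quick grep is enough. This is a minimal sketch, assuming the paths used above; the sample log line in the comment is purely hypothetical:

cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration
# Every pattern-formatter definition should now show the new pattern
grep -n "pattern-formatter" standalone.xml
# With that pattern, an entry in the JBoss log would look roughly like:
# 2016-11-02 01:00:00,123 UTC WARN  [com.example.SomeClass] (default task-1) some message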

For ServerApps.ear

Back up the current log4j.properties file:

cp ./ServerApps.ear/APP-INF/classes/log4j.properties ./ServerApps.ear/APP-INF/classes/log4j.properties_$(date +%Y%m%d).log

Then edit ./ServerApps.ear/APP-INF/classes/log4j.properties and set it like this:

log4j.rootCategory=WARN, A1, F1
 log4j.category.MUTE=OFF
 log4j.additivity.tracing=false
 log4j.category.tracing=DEBUG, FILE_TRACE
#------------------- CONSOLE --------------------------
 log4j.appender.A1=org.apache.log4j.ConsoleAppender
 log4j.appender.A1.threshold=ERROR
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout
 log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE --------------------------
 log4j.appender.F1=org.apache.log4j.RollingFileAppender
 log4j.appender.F1.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/ServerApps.log
 log4j.appender.F1.MaxFileSize=10MB
 log4j.appender.F1.layout=org.apache.log4j.PatternLayout
 log4j.appender.F1.MaxBackupIndex=10
 log4j.appender.F1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- ACS --------------------------
 log4j.category.acs=WARN, ACS_LOG
 log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
 log4j.appender.ACS_LOG.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/AcsServer.log
 log4j.appender.ACS_LOG.MaxFileSize=100KB
 log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
 log4j.appender.ACS_LOG.MaxBackupIndex=10
 log4j.appender.ACS_LOG.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE_TRACE --------------------------
 log4j.appender.FILE_TRACE=org.apache.log4j.RollingFileAppender
 log4j.appender.FILE_TRACE.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/ServerApps_trace.log
 log4j.appender.FILE_TRACE.MaxFileSize=100MB
 log4j.appender.FILE_TRACE.layout=org.apache.log4j.PatternLayout
 log4j.appender.FILE_TRACE.MaxBackupIndex=10
 log4j.appender.FILE_TRACE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n

We changed the conversion pattern in order to add more information to each log entry. For example, here we added “z” in order to show the time zone. With such a configuration it is easier to compare logs generated by servers located in different time zones.

We also added MaxFileSize and MaxBackupIndex in order to manage the retention. With the configuration above, a log is kept in at most 10 files of 100MB, so it will never exceed 1GB on the file system. The drawback is that if a lot of logs are generated, the files rotate quickly and the oldest ones are overwritten by the new ones.
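
To keep an eye on how much disk space the rotated files actually use, a check along these lines can help. This is a minimal sketch, reusing the file names from the configuration above:

# Total size of the trace log and its rotated backups (ServerApps_trace.log.1, .2, ...)
du -ch $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/ServerApps_trace.log* | tail -1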

For acs.ear

You can do the same as before; first, back up the configuration file:

cp ./acs.ear/lib/configs.jar/log4j.properties ./acs.ear/lib/configs.jar/log4j.properties_$(date +%Y%m%d).log

Then edit ./acs.ear/lib/configs.jar/log4j.properties and set it like this:

log4j.rootCategory=WARN, A1, F1
 log4j.category.MUTE=OFF
 log4j.additivity.tracing=false
 log4j.category.tracing=DEBUG, FILE_TRACE
#------------------- CONSOLE --------------------------
 log4j.appender.A1=org.apache.log4j.ConsoleAppender
 log4j.appender.A1.threshold=ERROR
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout
 log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE --------------------------
 log4j.appender.F1=org.apache.log4j.RollingFileAppender
 log4j.appender.F1.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/acs.log
 log4j.appender.F1.MaxFileSize=10MB
 log4j.appender.F1.layout=org.apache.log4j.PatternLayout
 log4j.appender.F1.MaxBackupIndex=10
 log4j.appender.F1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- ACS --------------------------
 log4j.category.acs=WARN, ACS_LOG
 log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
 log4j.appender.ACS_LOG.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/AcsServer.log
 log4j.appender.ACS_LOG.MaxFileSize=100KB
 log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
 log4j.appender.ACS_LOG.MaxBackupIndex=10
 log4j.appender.ACS_LOG.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE_TRACE --------------------------
 log4j.appender.FILE_TRACE=org.apache.log4j.RollingFileAppender
 log4j.appender.FILE_TRACE.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/acs_trace.log
 log4j.appender.FILE_TRACE.MaxFileSize=100MB
 log4j.appender.FILE_TRACE.layout=org.apache.log4j.PatternLayout
 log4j.appender.FILE_TRACE.MaxBackupIndex=10
 log4j.appender.FILE_TRACE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#-------------------- ATMOS LOGGING ---------------------
 log4j.logger.com.documentum.content.store.plugin.atmos=DEBUG,ACS_LOG
 log4j.logger.com.emc.esu=WARN,ACS_LOG

 

For bpm.ear

You can do the same as before; first, back up the configuration file:

cp ./bpm.ear/APP-INF/classes/log4j.properties ./bpm.ear/APP-INF/classes/log4j.properties_$(date +%Y%m%d).log

Then edit ./bpm.ear/APP-INF/classes/log4j.properties and set it like this:

log4j.rootCategory=WARN, A1, F1
 log4j.category.MUTE=OFF
 log4j.additivity.tracing=false
 log4j.category.tracing=DEBUG, FILE_TRACE
#------------------- CONSOLE --------------------------
 log4j.appender.A1=org.apache.log4j.ConsoleAppender
 log4j.appender.A1.threshold=ERROR
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout
 log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE --------------------------
 log4j.appender.F1=org.apache.log4j.RollingFileAppender
 log4j.appender.F1.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/bpm.log
 log4j.appender.F1.MaxFileSize=10MB
 log4j.appender.F1.layout=org.apache.log4j.PatternLayout
 log4j.appender.F1.MaxBackupIndex=10
 log4j.appender.F1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- BPM --------------------------
 log4j.logger.com.documentum.bpm=WARN, bpmappender
 log4j.logger.com.documentum.bps=WARN, bpmappender
 log4j.additivity.com.documentum.bpm=false
 log4j.additivity.com.documentum.bps=false
 log4j.appender.bpmappender=org.apache.log4j.RollingFileAppender
 log4j.appender.bpmappender.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/bpm-runtime.log
 log4j.appender.bpmappender.MaxFileSize=1MB
 log4j.appender.bpmappender.layout=org.apache.log4j.PatternLayout
 log4j.appender.bpmappender.MaxBackupIndex=10
 log4j.appender.bpmappender.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE_TRACE --------------------------
 log4j.appender.FILE_TRACE=org.apache.log4j.RollingFileAppender
 log4j.appender.FILE_TRACE.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/bpm_trace.log
 log4j.appender.FILE_TRACE.MaxFileSize=100MB
 log4j.appender.FILE_TRACE.layout=org.apache.log4j.PatternLayout
 log4j.appender.FILE_TRACE.MaxBackupIndex=10
 log4j.appender.FILE_TRACE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n

 

When everything is set up, you can restart the JMS and verify that all logs are properly written in $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs
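
As a quick check after the restart, something like the following can be used. This is a minimal sketch; the way you stop and start the Java Method Server itself depends on your installation, so it is not shown here:

cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs
# The files configured above should be the most recently modified ones
ls -ltr ServerApps.log acs.log bpm.log
# And each new entry should start with the new date/time-zone pattern
tail -5 ServerApps.log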

Now your JMS logging is set up consistently.

 

This article Documentum story – Documentum JMS Log Configuration appeared first on Blog dbi services.

I Am Speaking At SANGAM 16

Oracle in Action - Tue, 2016-11-01 23:11


SANGAM is the largest independent Oracle Users Group conference in India, organised annually in November. This year’s Sangam (Sangam16, the 8th Annual Oracle Users Group Conference) will be held on Friday 11th and Saturday 12th November 2016 at Crowne Plaza, Bengaluru Electronics City, India.

I will be speaking at this year’s SANGAM about “Policy Based Cluster Management In Oracle 12c“.

Oracle Clusterware 11g R2 introduced server pools as a means for specifying resource placement and administering server allocation and access. However, all servers were considered to be equal in relation to processors, memory and other characteristics. This can lead to sub-optimal performance of some applications if the servers assigned to the server pools hosting those applications do not meet the applications’ requirements.

Oracle Grid Infrastructure 12c enhances the use of server pools by introducing server attributes (e.g. memory, CPU count) that can be associated with each server. Server pools can be configured so that their members belong to a server category, i.e. a group of servers sharing a particular set of attributes. Moreover, administrators can maintain a library of policies and switch between them as required, rather than manually reallocating servers to server pools based on workload. My session will discuss the new features of policy-based cluster management in 12c in detail.
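
As a rough illustration of the kind of commands involved (a sketch only: the names bigmem_cat, ora.pool_hr and weekend_policy, as well as the attribute values, are made up, and the exact syntax should be verified against the Oracle Clusterware 12c documentation):

# Show the attributes (memory, CPU count, ...) Clusterware has discovered for each server
crsctl status server -f

# Define a server category based on those attributes and tie a server pool to it
crsctl add category bigmem_cat -attr "EXPRESSION='(MEMORY_SIZE > 65536)'"
crsctl modify serverpool ora.pool_hr -attr "SERVER_CATEGORY=bigmem_cat"

# Activate another predefined policy instead of reallocating servers by hand
crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=weekend_policy"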

My session will be held on Saturday, November 12, 2016, 10:05am – 10:55am in Room 4.

Hope to meet you there!!



Copyright © ORACLE IN ACTION [I Am Speaking At SANGAM 16], All Rights Reserved. 2016.

The post I Am Speaking At SANGAM 16 appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Identify SQL_ID from Failed Per-SQL Time Limit SQL_TUNING_TASK

Michael Dinh - Tue, 2016-11-01 21:32

First, many thanks to Ivica Arsov from Pythian for all his help in solving the issue.

The objective is to find the 2 SQLs that failed due to the time limit.

Apologies in advance for the lazy post, as I am only going to provide references and results from the triage.

$ oerr ora 16957
16957, 00000, "SQL Analyze time limit interrupt"
// *Cause: This is an internal error code used indicate that SQL analyze has
//         reached its time limit.
// *Action:

$ oerr ora 13639
13639, 00000, "The current operation was interrupted because it timed out."
// *Cause:  The task or object operation timed out.
// *Action: None

+++++++++++

SQL> 
SELECT DBMS_AUTO_SQLTUNE.report_auto_tuning_task FROM dual;

REPORT_AUTO_TUNING_TASK
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name                        : SYS_AUTO_SQL_TUNING_TASK
Tuning Task Owner                       : SYS
Workload Type                           : Automatic High-Load SQL Workload
Execution Count                         : 39
Current Execution                       : EXEC_17554
Execution Type                          : TUNE SQL
Scope                                   : COMPREHENSIVE
Global Time Limit(seconds)              : 14400
Per-SQL Time Limit(seconds)             : 2700
Completion Status                       : INTERRUPTED
Started at                              : 10/20/2016 22:00:01
Completed at                            : 10/21/2016 02:00:03
Number of Candidate SQLs                : 248
Cumulative Elapsed Time of SQL (s)      : 4837418

-------------------------------------------------------------------------------
Error: ORA-13639: The current operation was interrupted because it timed out.
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------
SUMMARY SECTION
-------------------------------------------------------------------------------
                      Global SQL Tuning Result Statistics
-------------------------------------------------------------------------------
Number of SQLs Analyzed                      : 130
Number of SQLs in the Report                 : 23
Number of SQLs with Findings                 : 22
Number of SQLs with Statistic Findings       : 2
Number of SQLs with Alternative Plan Findings: 5
Number of SQLs with SQL profiles recommended : 14
Number of SQLs with Index Findings           : 11
Number of SQLs with SQL Restructure Findings : 3
Number of SQLs with Timeouts                 : 2
Number of SQLs with Errors                   : 1


Here are the references and what has been attempted.
It looks like attr7 varies by version.

+++++++++

Automatic SQL Tune Job Fails With ORA-13639 (Doc ID 1363111.1)
Increase the job "Time Limit" parameter to an appropriately higher value to allow the tuning task to complete (a sketch of the kind of command involved is shown below).

Global Time Limit(seconds)              : 14400
Per-SQL Time Limit(seconds)             : 2700
Number of SQLs with Timeouts            : 2
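
For reference, these limits are usually adjusted through DBMS_AUTO_SQLTUNE. A minimal sketch, assuming SYSDBA access, with made-up values that should be sized to your maintenance window (check the MOS note for the exact command it recommends):

sqlplus -s / as sysdba <<'EOF'
-- Global limit for the automatic tuning task, in seconds
exec DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER('TIME_LIMIT', 21600);
-- Per-SQL limit, in seconds
exec DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER('LOCAL_TIME_LIMIT', 3600);
EOF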

+++++++

https://anargodjaev.wordpress.com/2014/07/22/ora-16957-sql-analyze-time-limit-interrupt-2/

SQL> r
  1  SELECT sql_id, sql_text FROM dba_hist_sqltext
  2  WHERE sql_id IN (SELECT attr1 FROM dba_advisor_objects
  3  WHERE execution_name = 'EXEC_17554'
  4  AND task_name = 'SYS_AUTO_SQL_TUNING_TASK'
  5  AND type = 'SQL' AND bitand(attr7,64) <> 0 )
  6*

no rows selected

SQL>

++++++++++

Bug 9874145 : PROCESS 0X0X00000002E2392938 APPEARS TO BE HUNG IN AUTO SQL TUNING TASK   
Product Version    11.1.0.7

SQL> select attr1 SQL_ID from dba_advisor_objects
  where task_name ='SYS_AUTO_SQL_TUNING_TASK'
  and type = 'SQL'
  and attr7 = 32
  order by 1;
 
SQL> r
  1* select attr7, count(*) from DBA_ADVISOR_OBJECTS where execution_name='EXEC_17554' and task_name ='SYS_AUTO_SQL_TUNING_TASK' and type='SQL' group by attr7

     ATTR7   COUNT(*)
---------- ----------
    32       99
     0       31

SQL> r
  1  SELECT count(sql_id)  FROM dba_hist_sqltext
  2  WHERE sql_id IN (SELECT attr1 FROM dba_advisor_objects
  3  WHERE execution_name = 'EXEC_17554'
  4  AND task_name = 'SYS_AUTO_SQL_TUNING_TASK'
  5  AND type = 'SQL' AND bitand(attr7,32) <> 0 )
  6*

COUNT(SQL_ID)
-------------
       99

SQL> r
  1  select  distinct sql_id
  2  from    DBA_ADVISOR_SQLPLANS
  3  where   execution_name = 'EXEC_17554'
  4         and sql_id not in (select sql_id from dba_advisor_sqlstats where execution_name = 'EXEC_17554')
  5*

SQL_ID
-------------
2dhfrqwrv0m16
58h2g858zqckc
apzyk43bfp6np
g84pgjy5ycw2g
20ypq7mzad6ah
cr1s7zpp7285p
ag01z6qn8450d
4z3gq9xh9a5jr
byav70kx3bj8w
8m3qru4z4687w
d28dgmyr5910n
5mgq2hd4xz6vv

12 rows selected.

SQL>

++++++++++

SQL> select sql_id from DBA_ADVISOR_SQLPLANS where EXECUTION_NAME='EXEC_17554'
minus
select sql_id from DBA_ADVISOR_SQLSTATS where EXECUTION_NAME='EXEC_17554'
;  2    3    4

SQL_ID
-------------
20ypq7mzad6ah
2dhfrqwrv0m16
4z3gq9xh9a5jr
58h2g858zqckc
5mgq2hd4xz6vv
8m3qru4z4687w
ag01z6qn8450d
apzyk43bfp6np
byav70kx3bj8w
cr1s7zpp7285p
d28dgmyr5910n
g84pgjy5ycw2g

12 rows selected.

SQL> select sql_id from DBA_ADVISOR_SQLSTATS where EXECUTION_NAME='EXEC_17554'
minus
select sql_id from DBA_ADVISOR_SQLPLANS where EXECUTION_NAME='EXEC_17554'
;  2    3    4

no rows selected

SQL>

++++++++++

case when finding_type = 3 AND bitand(finding_flags, 2) <> 0 then 1 else 0 end

1 - has timeout
0 - no timeout

finding_type and finding_flags are from sys.wri$_adv_findings table.

SQL> r
  1* select distinct TYPE, FLAGS from sys.wri$_adv_findings

      TYPE    FLAGS
---------- ----------
     4        1
     1        0
     4
     4        4
     2        0
     5        0
     2
     3        0
     4        0
     3        2
     4        9
     1
     3       16

13 rows selected.

SQL> r
  1  SELECT oe.*,
  2          f.id finding_id,
  3          f.type finding_type,
  4          f.flags finding_flags
  5  FROM
  6    (SELECT
  7       o.exec_name ,
  8        o.id object_id,
  9        o.attr1 sql_id,
 10        o.attr3 parsing_schema,
 11        to_number(NVL(o.attr5, '0')) phv,
 12        NVL(o.attr8,0) obj_attr8
 13      FROM  sys.wri$_adv_objects o
 14      WHERE o.exec_name = 'EXEC_17554'
 15      AND o.type      = 7
 16    ) oe,
 17    wri$_adv_findings f
 18  WHERE  f.exec_name (+)  = oe.exec_name
 19  AND f.obj_id (+)     = oe.object_id
 20  AND type             = 3
 21  AND bitand(flags,2) <> 0
 22*

EXEC_NAME        OBJECT_ID SQL_ID          PARSING_SCHEMA         PHV  OBJ_ATTR8 FINDING_ID FINDING_TYPE FINDING_FLAGS
--------------- ---------- --------------- --------------- ---------- ---------- ---------- ------------ -------------
EXEC_17554           29975 bxurwhv4muqj5   DEMO1234        2724156332 2438782765      31681            3             2
EXEC_17554           30015 1gzgzfuxn18v1   DEMO345670      4228937525 3182936792      31724            3             2

SQL> 

Proper way to compare difference between 2 dates to a constant

Tom Kyte - Tue, 2016-11-01 21:26
Hi Tom, I am writing some code to check if the number of hours between two dates (in HH24MI) is equal to 0.5 hour, but the results vary depending on where the "24" factor is placed. Here is my testing script: >>>>> SELECT -- case 1: 1...
Categories: DBA Blogs

Performance tuning document

Tom Kyte - Tue, 2016-11-01 21:26
Dear Tom, Because of you I learn Oracle. Could you please provide me with performance tuning documentation. Regards RAJ..
Categories: DBA Blogs

Schema

Tom Kyte - Tue, 2016-11-01 21:26
Hi Tom, Can we create a schema without creating a user for that schema to store objects?
Categories: DBA Blogs

Schema with ORM JPA

Tom Kyte - Tue, 2016-11-01 21:26
Hi there, please point me in the right direction. I have recently been assigned a small project - it is to do with cataloging historical insurance complaints filed by customers. My background is more in Java and I have a basic understanding of...
Categories: DBA Blogs

Data Dictionary

Tom Kyte - Tue, 2016-11-01 21:26
Hi, I executed the query below: SELECT * FROM dba_objects WHERE object_name LIKE 'DBA_OBJE%' Then I noticed one thing: the dba_objects view and synonym have the same name, but Oracle does not allow us to create any database object with the same name...
Categories: DBA Blogs

Whenever sqlerror - transferring text variable into shell

Tom Kyte - Tue, 2016-11-01 21:26
Hello, Tom. The description of the WHENEVER SQLERROR instruction says that one can define EXIT {variable} in it. Is it possible to return a text variable? I need the SQL script to return to the shell script (by which it is run) not only SQLCODE (or a pre...
Categories: DBA Blogs

Rowid of a table

Tom Kyte - Tue, 2016-11-01 21:26
Hi Team, Is there a way whereby we can get all rowids of a table without querying the table, but from some DBA views or other sources? Actually we have a procedure which calculates cost of goods based on the FIFO method. We have a table where all inventory re...
Categories: DBA Blogs

"Cost Based Optimizer: Grundlagen – mit Update für Oracle 12c" Artikel (German)

Randolf Geist - Tue, 2016-11-01 18:31
Since yesterday, my article "Cost Based Optimizer: Grundlagen – mit Update für Oracle 12c" has been available on the "Informatik Aktuell" site.

It also serves as a preview of the content of my talk at the IT-Tage 2016 on December 13.

If the topic interests you, I warmly invite you to my talk, where I will also explore the subject in more depth with live demonstrations.

Debian dist-upgrade: from 7(wheezy) to 8(jessie): udev...

Dietrich Schroff - Tue, 2016-11-01 15:01
I was running my own kernel on my laptop. But while doing the upgrade from Debian 7 to 8, I ran into the following problem:
Since release 198, udev requires support for the following features in
the running kernel:

- inotify(2)            (CONFIG_INOTIFY_USER)
- signalfd(2)           (CONFIG_SIGNALFD)
- accept4(2)
- open_by_handle_at(2)  (CONFIG_FHANDLE)
- timerfd_create(2)     (CONFIG_TIMERFD)
- epoll_create(2)       (CONFIG_EPOLL)
Since release 176, udev requires support for the following features in
the running kernel:

- devtmpfs         (CONFIG_DEVTMPFS)

Please upgrade your kernel before or while upgrading udev.

AT YOUR OWN RISK, you can force the installation of this version of udev
WHICH DOES NOT WORK WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM
AT THE NEXT REBOOT by creating the /etc/udev/kernel-upgrade file.
There is always a safer way to upgrade, do not try this unless you
understand what you are doing!


dpkg: Fehler beim Bearbeiten des Archivs /var/cache/apt/archives/udev_215-17+deb8u5_i386.deb (--unpack):
 Unterprozess neues pre-installation-Skript gab den Fehlerwert 1 zurück

So I changed to the default kernel of Debian, but with this one the system was not able to mount the partitions, because the old kernel mounted the disk as /dev/hda and the new one as /dev/sda.
So I had to rewrite /etc/fstab, and after that the dist-upgrade could continue...
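
For what it is worth, here is a hedged sketch of the two checks involved; kernel config file names and device names will of course differ from system to system, and using UUIDs from blkid is safer than a blind device rename:

# Verify the running kernel provides the features udev requires (Debian ships the config in /boot)
for opt in CONFIG_INOTIFY_USER CONFIG_SIGNALFD CONFIG_FHANDLE CONFIG_TIMERFD CONFIG_EPOLL CONFIG_DEVTMPFS; do
    grep "^$opt=" /boot/config-$(uname -r) || echo "$opt is missing"
done

# Point /etc/fstab at the new device names (a backup is kept as /etc/fstab.bak)
blkid
sed -i.bak 's|/dev/hda|/dev/sda|g' /etc/fstab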


Debian dist-upgrade: Laptop suspends 30s after boot / startup

Dietrich Schroff - Tue, 2016-11-01 14:58
After I moved to jessie (Debian 8), my laptop went into suspend-to-RAM about half a minute after startup.
There was no message in the logfiles and, even worse, the resume did not work at all...
The configuration for the suspend was in here:
/etc/systemd/logind.conf:
 [Login]
#NAutoVTs=6
#ReserveVT=6
#KillUserProcesses=no
#KillOnlyUsers=
#KillExcludeUsers=root
#InhibitDelayMaxSec=5
#HandlePowerKey=poweroff
HandleSuspendKey=ignore
#HandleHibernateKey=hibernate
HandleLidSwitch=ignore
#PowerKeyIgnoreInhibited=no
#SuspendKeyIgnoreInhibited=no
#HibernateKeyIgnoreInhibited=no
#LidSwitchIgnoreInhibited=yes
#IdleAction=ignore
#IdleActionSec=30min
#RuntimeDirectorySize=10%
#RemoveIPC=yes

After changing the lines "HandleLidSwitch" and "HandleSuspendKey" to ignore, and one more restart, the laptop stays alive and no unwanted suspend happens anymore.
That was a strange behaviour, because my first guess was overheating. But after waiting 10 minutes the laptop was really cool, so overheating was not the point...
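
For reference, the settings can be double-checked and applied without a full reboot by restarting systemd-logind. A minimal sketch (the reboot described above works just as well):

# Confirm both overrides are active in the config
grep -E "^(HandleSuspendKey|HandleLidSwitch)" /etc/systemd/logind.conf

# Reload logind so the new settings take effect
systemctl restart systemd-logind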

Approaches to Consider for Your Organization’s Windchill Consolidation Project

This post comes from Fishbowl Solutions’ Senior Solutions Architect, Seth Richter.

More and more organizations need to merge multiple Windchill instances into one, either after acquiring another company or because they have had separate Windchill implementations based on old divisional borders. Whatever the situation, these organizations want to merge into a single Windchill instance to gain efficiencies and other benefits.

The first task for a company in this situation is to assemble the right team and develop the right plan. The team will need to understand the budget and begin to document the key requirements and their implications. Will they hire an experienced partner like Fishbowl Solutions? If so, we recommend involving the partner early in the process so they can help navigate the key decisions, avoid pitfalls and develop the best approach for success.

Once you start evaluating the technical process and tools to merge the Windchill instances, the most likely options are:

1. Manual Method

Moving data from one Windchill system to another manually is always an option. This method might be viable if there are small pockets of data to move in an ad-hoc manner. However, it is extremely time consuming, so proceed with caution: if you get halfway through and then switch to one of the following methods, you might have hurt the process rather than helped it.

2. Third Party Tools (Fishbowl Solutions LinkExtract & LinkLoader tools)

This approach can be a cost-effective alternative, but it is not as robust as the Windchill Bulk Migrator, so your requirements will dictate whether it is viable or not.

3. PTC Windchill Bulk Migrator (WBM) tool

This is a powerful, complex tool that works great if you have an experienced team running it. Fishbowl prefers the PTC Windchill Bulk Migrator in many situations because it can complete large merge projects over a weekend and historical versions are also included in the process.

A recent Fishbowl project involved a billion-dollar manufacturing company who had acquired another business and needed to consolidate CAD data from one Windchill system into their own. The project had an aggressive timeline because it needed to be completed before the company’s seasonal rush (and also be prepared for an ERP integration). During the three-month project window, we kicked off the project, executed all of the test migrations and validations, scheduled a ‘go live’ date, and then completed the final production migration over a weekend. Users at the acquired company checked their data into their “old” Windchill system on a Friday and were able check their data out of the main corporate instance on Monday with zero engineer downtime.

Fishbowl Solutions’ PTC/PLM team has completed many Windchill merge projects such as this one. The unique advantage of working with Fishbowl is that we are PTC Software Partners and Windchill programming experts. Oftentimes, when other reseller/consulting partners get stuck waiting on PTC technical support, Fishbowl has been able to problem solve and keep projects on time and on budget.

If your organization is seeking an effective and efficient way to bulk load data from one Windchill system to another, our experts at Fishbowl Solutions can accomplish this on time and on budget. Urgency is a priority in these circumstances, and we want to ensure you’re able to make this transition process as hassle-free as possible with no downtime. Not sure which tool is the best fit for your Windchill migration project? Check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department for more information or to request a demo.

Contact Us

Rick Passolt
Senior Account Executive
952.456.3418
mcadsales@fishbowlsolutions.com

Seth Richter is a Senior Solutions Architect at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do.

The post Approaches to Consider for Your Organization’s Windchill Consolidation Project appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Momentum16 – Day 1 – InfoArchive first approach

Yann Neuhaus - Tue, 2016-11-01 12:04

As Gérard explained in his first blog, today was the first day not specific to the partners. I had the opportunity to attend some business-centric and (not really) interesting sessions in the morning. Then the morning ended and the afternoon began with two keynotes: “Dell EMC Opening Keynote” and “Digital Transformation Keynote”. Finally, I was able to attend a hands-on session on InfoArchive, and that is what I will talk about in this blog, since it is the only piece of technical information I was able to get today.

 

Like at every other event, there are exhibitions and exhibitors showing in their booths what they are doing around EMC. Of course there is also a booth dedicated to the InfoArchive solution if you want to talk to some EMC experts, and I think that is a pretty good way to see and understand what this solution does.

 

EMC InfoArchive is a unified enterprise archiving platform that stores related structured data and unstructured content in a single consolidated repository. This product enables corporations to preserve the value of enterprise information in a single, compliant, and easily accessible unified archive. Basically, it is a place where you can store content to be archived on low-cost storage, because this kind of information is usually kept only for legal reasons (read only) and does not need to be accessed very often.

 

InfoArchive is composed of three components: an included web server, a server (the core of the application) and finally a database (it uses an Xhive (XML) database, just like xPlore). Therefore you can very easily provide an XML file that will be used as an import file and that contains the content to be archived by InfoArchive. Basically, everything that can be transformed into XML (metadata/content) can be put inside InfoArchive. This solution provides some default connectors such as:

  • Documentum
  • SharePoint (can archive documents and/or complete sites)
  • SAP

 

These default connectors are great, but if that is not enough you can just define your own, with the information you want to store and how you want to index it, transform it, and so on. And of course this is defined in XML files. At the moment this configuration can be a little bit scary, since it is all done manually, but I heard that a GUI for the configuration might be coming soon, if it is not in version 4.2 already. InfoArchive is apparently fully web-based, and therefore, based on a discussion I had with an EMC colleague, it should technically be possible to archive all the content of a SharePoint site, for example, and then access this content from Documentum or any other location, as long as it uses web-based requests to query InfoArchive.

 

During the hands-on session (my first time working with InfoArchive), I had to create a new application/holding that can be used to archive tweets. At the end of the one and a half hours, I had successfully created my application and was able to search for tweets based on their creationDate, userName, hashTags, retweetCount, and so on. That was actually done pretty easily by following the help guide provided by EMC (specific to this use case), but if you do not have this help guide, you had better be an InfoArchive expert, because you need to know each and every one of the XML tags that need to be added, and where to add them, to get something working properly.

 

See you tomorrow for the next blog with hopefully more technical stuff to share.

 

This article Momentum16 – Day 1 – InfoArchive first approach appeared first on Blog dbi services.

Momentum16 – Day1 – Feelings

Yann Neuhaus - Tue, 2016-11-01 12:00

This first day at Momentum 2016

Strictly speaking, I should call it the second one, as we already started yesterday with a partner session where we got some information. One piece of news was that EMC had more than 400 partners a few years ago and today this has been reduced to fewer than 80, and dbi services is still one of them. For us this is good news, and I hope it is also good news for our current and future customers.

 

Today the different sessions, apart from the keynote held by Rohit Ghai, were more related to customer experience, solutions that ECD partners can propose, business presentations, and descriptions of particular challenges that companies had to face and how they dealt with them, without presenting technical details.
As I am more on the technical side, this was more for my general culture, I would say.

 

In the keynote we learned that Documentum 7.3 will increase cost savings. For instance, PostgreSQL can be used with Documentum 7.3, the upgrade will be faster, and so on. Since time is money…
PostgreSQL can be an interesting subject, as dbi services is also active around this database; I will have to work with our DB experts to see what we have to test and how, and to find out the pros and cons of using PostgreSQL from a technical point of view, as the license cost will certainly decrease. I plan, no, I have, to go to the technical session tomorrow about “What’s new in Documentum 7.3″.

 

I also took the opportunity to talk with some Dell EMC partners to learn more about the solutions they propose. For instance, I was able to talk with the Neotys people to understand what their product can bring us compared to JMeter or LoadRunner, which we or our customers use for load tests. Having a better view of the possible solutions in this area can help me choose the best tool in case a customer has specific requirements.
I also had a chat with Aerow, and they showed me how ARender4Documentum works and how fast “big” documents can be displayed in their HTML5 viewer. So even if the first day cannot be viewed as a technical day, I actually learned a lot.
What I also find cool at this kind of event is that you can meet people, for instance at lunch time around a table, and start talking about your/their experiences, concerns, solutions, and so on. So today we had a talk about cloud (private, public) and what this means in case you have a validated system.

 

So let’s see what will happen tomorrow, the day where more technical information will be shared.

Note: read Morgan’s blog, where you can find technical stuff. You know, I felt Morgan was frustrated today as he could not “eat” technical food :-)

 

This article Momentum16 – Day1 – Feelings appeared first on Blog dbi services.

Oracle Positioned as a Leader in the Gartner Magic Quadrant for Horizontal Portals, 2016

WebCenter Team - Tue, 2016-11-01 11:44

Summary

Consumerization, convergence, continuously evolving technology and a shift toward business influence are changing the horizontal portal market profoundly. Leaders of portal and other digital experience initiatives face more complex and more consequential decisions.

Market Definition/Description

Gartner defines "portal" as a personalized point of access to relevant information, business processes and other people. Portals address various audiences, including employees, customers, partners and citizens, and support a wide range of vertical markets and business activities. As a product, a horizontal portal is a software application or service used to create and manage portals for a wide range of purposes.

The requirements of digital business are driving waves of innovation and drawing new vendors into the portal market. The evolved landscape is increasingly populated by vendors eschewing traditional portal standards and practices in favor of more flexible, leaner and lighter-weight technology. Vendors with roots in areas adjacent to the portal market, especially web content management (WCM), increasingly offer capability suitable for portal use cases.

Vendor revenue in the portal and digital engagement technologies market declined more than 5% between 2014 and 2015, when estimated revenue was at about $1.64 billion. But Gartner expects a revenue resurgence as organizations see the need to expand and improve portal capabilities as an essential part of broader digital experience initiatives. As a result, Gartner expects the market for portal and digital engagement technologies to grow at a 2.83% compound annual growth rate (CAGR) between 2015 and 2016, then to rebound to a healthier growth rate of about 5% over the next five years (see "Forecast: Enterprise Software Markets Worldwide, 2013-2020, 3Q16 Update" ).

Figure 1. Magic Quadrant for Horizontal Portals

Source: Gartner (2016)

Oracle was positioned as a Leader in the Gartner Magic Quadrant for Horizontal Portals for its Oracle WebCenter Portal offering. 

Oracle WebCenter Portal is a portal and composite applications solution that delivers intuitive user experiences for the enterprise that are seamlessly integrated with enterprise applications. Oracle WebCenter Portal optimizes the connections between people, information and applications, provides business activity streams so users can navigate, discover and access content in context, and offers dynamic personalization of applications, portals and sites so users have a customized experience.

With social, mobile and analytics driving the next wave of digital innovation, businesses require that portals provide intuitive yet personalized user experiences with integrated social, collaboration and content management capabilities. Oracle WebCenter Portal is the complete, open and integrated enterprise portal and composite applications solution that enables the development and deployment of internal and external portals and websites, composite applications, self-service portals and mash-ups with integrated social and collaboration services and enterprise content management capabilities.

With Oracle WebCenter Portal, organizations can:

  • Improve business productivity by providing employees, customers and partners with a modern user experience to access contextual information in a rich, personalized and collaborative environment.
  • Speed development by providing developers with a comprehensive and flexible user experience platform that includes an extensive library of reusable components.
  • Increase business agility by extending and integrating their existing SaaS and on-premise applications such as Oracle Marketing Cloud, Oracle Sales Cloud, Oracle E-Business Suite; Siebel, PeopleSoft, and JD Edwards; and SAP seamlessly.

Oracle is pleased to be named a Leader in the 2016 Gartner Magic Quadrant for Horizontal Portals. You can access the full report here.

Source: "Gartner Magic Quadrant for Horizontal Portals", Jim Murphy, Gene Phifer, Gavin Tay, Magnus Revang, 17 October 2016.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Oracle.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

More Information

The full report can be found here: http://www.gartner.com/doc/reprints?id=1-3KSXYZ1&ct=161027&st=sb 
