Feed aggregator

Documentum story – User gets kicked out of D2 when navigating in D2Config

Yann Neuhaus - 8 hours 44 min ago

On a customer site, one of the users complained about being kicked out each time he wanted to create new documents in D2. This issue happens in a default deployment of D2 and D2Config in a WebLogic Domain. We found out that the user sessions for D2Config and D2 conflict with each other.
This issue occurs if the D2Config and D2 applications are opened in the same browser in different tabs and the user navigates from D2 to D2Config and vice versa.
The error message is misleading: it reports a session timeout even though the user has just signed in.

D2 session timeout

Using an HTTP header tracing tool, we saw that the JSESSIONID cookie (the cookie storing the HTTP session for Java applications) changes when switching from one application to the other. This showed us that both Java applications were using the same session cookie name, which leads to the session being lost.
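If you want to observe this yourself without a dedicated tracing tool, looking at the Set-Cookie response headers is enough. A minimal sketch (the host, port and context roots are placeholders for your own deployment):

    # placeholders: adjust host, port and context roots to your deployment
    curl -sI http://weblogic_host:8080/D2/ | grep -i set-cookie
    curl -sI http://weblogic_host:8080/D2Config/ | grep -i set-cookie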

Workaround or Fix:
An easy fix for this is to update the weblogic.xml file included in the D2.war file with a section that defines a new session cookie name, as shown below:

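A minimal version of that section (the cookie name below is only an example; any value that differs from the default JSESSIONID will do):

    <session-descriptor>
        <!-- example name: anything other than the default JSESSIONID works -->
        <cookie-name>D2JSESSIONID</cookie-name>
    </session-descriptor>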

To proceed, follow the steps below:

  1. Extract the weblogic.xml file from the war file
    mkdir mytemp
    cd mytemp
    -- Put the D2.war file here
    jar xvf D2.war WEB-INF/weblogic.xml
  2. Edit the file and add a session-descriptor block like the one above, just after the closing description tag.
  3. Update the D2.war file with the new weblogic.xml
    jar uvf D2.war WEB-INF/weblogic.xml
  4. And finally redeploy the D2.war file to the WebLogic Server (one command-line way to do this is sketched below).
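A minimal sketch of that last step using the weblogic.Deployer utility (the admin URL, credentials and deployment name are placeholders for your own domain):

    # placeholders: adjust admin URL, credentials and deployment name
    java weblogic.Deployer -adminurl t3://weblogic_host:7001 -username weblogic \
         -password '********' -redeploy -name D2 -source D2.war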

This fix has been submitted to and validated by EMC support.


This article Documentum story – User gets kicked out of D2 when navigating in D2Config appeared first on Blog dbi services.

Documentum story – Setup the DSearch & IndexAgent in HTTPS (xPlore)

Yann Neuhaus - 11 hours 15 min ago

In this blog, I will show you how to set up the Primary DSearch and IndexAgent in HTTPS for xPlore 1.5. The documentation is available on the EMC website as always; the PDF is named “xPlore 1.5 Installation Guide.pdf”. The reason I wanted to write this blog is that the documentation, while not bad, is still missing some parts, and without them your configuration will just not work properly. Moreover, I think it is better to have a concrete and complete example rather than a PDF with information spread across 40 different pages.


So let’s begin the configuration. The first thing to do is obviously to log in to the Full Text Server where your xPlore 1.5 is installed. For this blog, I will use /app/xPlore as the installation folder of xPlore. I will also use a self-signed SSL certificate with a certificate chain composed of a Root and a Gold CA. So let’s import the certificate chain into xPlore (the following commands assume all certificates are available under “/app/xPlore/jboss7.1.1/certs”):

[xplore@xplore_server_01 ~]$ /app/xPlore/java64/1.7.0_72/bin/keytool -import -trustcacerts -alias root_ca -keystore /app/xPlore/java64/1.7.0_72/jre/lib/security/cacerts -file /app/xPlore/jboss7.1.1/certs/Internal_Root_CA.cer
Enter keystore password:
[xplore@xplore_server_01 ~]$ /app/xPlore/java64/1.7.0_72/bin/keytool -import -trustcacerts -alias gold_ca -keystore /app/xPlore/java64/1.7.0_72/jre/lib/security/cacerts -file /app/xPlore/jboss7.1.1/certs/Internal_Gold_CA1.cer
Enter keystore password:
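To double-check that both CAs actually landed in the trust store, you can list them by alias (same keystore and password prompt as above):

[xplore@xplore_server_01 ~]$ /app/xPlore/java64/1.7.0_72/bin/keytool -list -keystore /app/xPlore/java64/1.7.0_72/jre/lib/security/cacerts -alias root_ca
[xplore@xplore_server_01 ~]$ /app/xPlore/java64/1.7.0_72/bin/keytool -list -keystore /app/xPlore/java64/1.7.0_72/jre/lib/security/cacerts -alias gold_ca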


All Java processes using /app/xPlore/java64/1.7.0_72/bin/java will now trust the self-signed SSL certificate because its certificate chain is trusted. Once this is done, shut down all xPlore processes (Primary DSearch and IndexAgent(s)) and let’s configure the Primary DSearch in HTTPS:

[xplore@xplore_server_01 ~]$ /app/xPlore/scripts/startstop stop
  ** Indexagent_DOCBASE1 has been stopped successfully
  ** PrimaryDsearch has been stopped successfully
[xplore@xplore_server_01 ~]$ cd /app/xPlore/dsearch/admin
[xplore@xplore_server_01 admin]$ ./xplore.sh -f scripts/ConfigSSL.groovy -enable -component IS \
    -alias ft_alias -keystore "/app/xPlore/jboss7.1.1/certs/xplore_server_01.jks" \
    -storepass K3ySt0r3P4ssw0rd -indexserverconfig "/app/xPlore/config/indexserverconfig.xml" \
    -isname PrimaryDsearch -ianame Indexagent_DOCBASE1


Some remarks:

  • “-enable” means that HTTPS will be enabled and HTTP will be disabled. If you want both to be enabled, use the “-dual” option instead
  • “-component” defines which component should be configured with this command. It can be “IS” (IndexServer), “IA” (IndexAgent) or “ALL” (IndexServer and IndexAgent)
  • “-isname” defines the name of the IndexServer/Primary DSearch that you installed
  • “-ianame” defines the name of the IndexAgent that you installed


Now what happens if you have more than one IndexAgent on the same server? Well, the script isn’t smart enough for that, and that’s the reason why I didn’t use “ALL” above but just “IS”. You might also have noticed that I set the “-ianame” parameter to “Indexagent_DOCBASE1”. This is because, even though we are configuring the Primary DSearch in HTTPS, every IndexAgent has a reference in a configuration file that defines which port and protocol the IA should use to connect to the DSearch, and if this isn’t set up properly, the IA will not be able to start.


Now the IndexServer is configured in HTTPS so let’s do the same thing for the IndexAgent:

[xplore@xplore_server_01 admin]$ ./xplore.sh -f scripts/ConfigSSL.groovy -enable -component IA \
    -alias ft_alias -keystore "/app/xPlore/jboss7.1.1/certs/xplore_server_01.jks" \
    -storepass K3ySt0r3P4ssw0rd -indexserverconfig "/app/xPlore/config/indexserverconfig.xml" \
    -ianame Indexagent_DOCBASE1 -iaport 9200


As you can see above, this time there is no need for the “-isname” parameter; it is not needed for the IndexAgent(s). Let’s say you have a second IndexAgent for the docbase named DOCBASE2; then you also have to execute the above command for this second IndexAgent:

[xplore@xplore_server_01 admin]$ ./xplore.sh -f scripts/ConfigSSL.groovy -enable -component IA \
    -alias ft_alias -keystore "/app/xPlore/jboss7.1.1/certs/xplore_server_01.jks" \
    -storepass K3ySt0r3P4ssw0rd -indexserverconfig "/app/xPlore/config/indexserverconfig.xml" \
    -ianame Indexagent_DOCBASE2 -iaport 9220


In case you didn’t know: yes, each IndexAgent needs at least 20 consecutive ports (so 9200 to 9219 for Indexagent_DOCBASE1 and 9220 to 9239 for Indexagent_DOCBASE2).

When configuring the IndexServer in HTTPS, I specified “-ianame”. As I said before, this is because there is a reference somewhere to the protocol/port used. That reference was updated properly for Indexagent_DOCBASE1, but not for Indexagent_DOCBASE2, so you need to do that manually:

[xplore@xplore_server_01 admin]$ grep -B1 -A10 dsearch_qrserver_protocol /app/xPlore/jboss7.1.1/server/DctmServer_Indexagent_DOCBASE2/deployments/IndexAgent.war/WEB-INF/classes/indexagent.xml
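The exact output depends on your xPlore version, so take the following only as an illustrative sketch of the kind of fragment the grep points you to (the element names are assumptions, not verbatim output):

    <!-- illustrative indexagent.xml fragment; element names/layout may differ per version -->
    <parameter>
        <parameter_name>dsearch_qrserver_protocol</parameter_name>
        <parameter_value>HTTP</parameter_value>
    </parameter>
    <parameter>
        <parameter_name>dsearch_qrserver_port</parameter_name>
        <parameter_value>9300</parameter_value>
    </parameter>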


Just open this file and update the matching lines by replacing “HTTP” with “HTTPS” and “9300” with “9302”, and that’s it. If you have several IndexAgents, you need to do that for all of them.


The next step is to log in to the Content Server (e.g. ssh dmadmin@content_server_01) and update some properties in the docbase:

[dmadmin@content_server_01 ~]$ iapi DOCBASE1 -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084

Connecting to Server using docbase DOCBASE1
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a8014087a started for user dmadmin."

Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0
API> ?,c,select r_object_id from dm_ftengine_config where any lower(param_value) = lower('xplore_server_01');
(1 row affected)

API> fetch,c,083f245a800052ae
API> dump,c,l

  object_name                     : DSearch Fulltext Engine Configuration
  param_name                   [0]: dsearch_qrserver_protocol
                               [1]: dsearch_qrygen_mode
                               [2]: dsearch_qrserver_target
                               [3]: dsearch_qrserver_port
                               [4]: dsearch_config_port
                               [5]: dsearch_qrserver_host
                               [6]: dsearch_domain
                               [7]: dsearch_config_host
                               [8]: query_plugin_mapping_file
                               [9]: load_balancer_enabled
                              [10]: ft_wildcards_mode
  param_value                  [0]: HTTP
                               [1]: both
                               [2]: /dsearch/IndexServerServlet
                               [3]: 9300
                               [4]: 9300
                               [5]: xplore_server_01
                               [6]: DOCBASE1
                               [7]: xplore_server_01
                               [8]: /app/dctm/server/fulltext/dsearch/dm_AttributeMapping.xml
                               [9]: true
                              [10]: trailing_implicit


You might have noticed that I specified a WHERE clause on the select to find the r_object_id of the xPlore server. That’s basically because, in this case, there are two xPlore servers in a HA setup (the load_balancer_enabled parameter is set to true) and I only want to update the right object. So let’s update the relevant parameters (“dsearch_qrserver_protocol”, “dsearch_qrserver_port” and “dsearch_config_port”):

API> set,c,l,param_value[0]
SET> HTTPS
API> set,c,l,param_value[3]
SET> 9302
API> set,c,l,param_value[4]
SET> 9302
API> save,c,l


With these steps, we updated the dm_ftengine_config object. The next step is to also update the URL of the IndexAgent:

API> ?,c,select r_object_id from dm_server_config
(2 rows affected)

API> fetch,c,3d3f245a80000102
API> dump,c,l

  object_name                     : DOCBASE1
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: do_bpm
                               [3]: xplore_server_01_9200_IndexAgent
                               [4]: xplore_server_02_9200_IndexAgent
  app_server_uri               [0]: https://content_server_01:9082/DmMethods/servlet/DoMethod
                               [1]: https://content_server_01:9082/DmMail/servlet/DoMail
                               [2]: https://content_server_01:9082/bpm/servlet/DoMethod
                               [3]: http://xplore_server_01:9200/IndexAgent/servlet/IndexAgent
                               [4]: http://xplore_server_02:9200/IndexAgent/servlet/IndexAgent


So we set up the IndexAgent installed on xplore_server_01 in HTTPS, and therefore we also need to update the URL referenced in the docbase. That’s actually one of the things that isn’t in the official documentation at the moment. This is done as before:

API> set,c,l,app_server_uri[3]
SET> https://xplore_server_01:9202/IndexAgent/servlet/IndexAgent
API> save,c,l


As you saw above, this is an environment with two dm_server_config objects (two Content Servers) and two IndexAgents. We set up the Primary DSearch and the IndexAgent installed on xplore_server_01 in HTTPS, updated the dm_ftengine_config for this Primary DSearch, and updated the URLs defined in one dm_server_config object. But the same references are present in the second dm_server_config, so you also need to do that for the second one (3d3f245a80003796 in this case). Same steps, so just repeat with the other r_object_id (see the sketch below)!
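A minimal sketch of that repetition (dump the object first, because the index of the IndexAgent entry in app_server_uri may differ from the first object):

API> fetch,c,3d3f245a80003796
API> dump,c,l
  ...
API> set,c,l,app_server_uri[3]
SET> https://xplore_server_01:9202/IndexAgent/servlet/IndexAgent
API> save,c,l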


OK, so now all objects in the docbase have been updated successfully. Therefore, return to the xPlore server, clean the caches and start everything again:

[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*
[xplore@xplore_server_01 ~]$ /app/xPlore/scripts/startstop start
  ** PrimaryDsearch has been started successfully
  ** Indexagent_DOCBASE1 has been started successfully


As said before, some of these steps aren’t described or explained in the official documentation, and that will lead you to a non-working setup… In addition, there are some bugs impacting the proper behavior of the Primary DSearch and/or the IndexAgents when configured in HTTPS. We reported these bugs to EMC, which was able to provide a fix for some of them and include it in a later patch, but as you know it is not always possible to upgrade or patch your environment. For example, with CS 7.2 P02 or P05, searches will NOT work against a DSearch in HTTPS (corrected in P08 or P09, if I remember correctly), but I will not describe that in this blog. If you are facing an issue with the IndexAgents not responding in HTTPS, please check this blog.


This article Documentum story – Setup the DSearch & IndexAgent in HTTPS (xPlore) appeared first on Blog dbi services.

timestamp of an inserted row

Tom Kyte - 12 hours 49 min ago
In a classic case of replication (forget the tool of replication here), Oracle-2-Oracle: create table abc (anum number, aname varchar2(30), adate date default sysdate); insert into abc values(1,'1test',sysdate); insert into abc val...
Categories: DBA Blogs

PL/SQL Programming

Tom Kyte - 12 hours 49 min ago
Write a PL/SQL block that uses looping to display the value of the number two (2) raised to a power. The power should be given as a local variable called lv_power. For example, if the variable is initialized to 5, the output would be 2 raised to th...
Categories: DBA Blogs
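For what it's worth, one straightforward shape for such a block looks like this (a sketch, with lv_power initialized to 5 as in the example):

DECLARE
  lv_power  NUMBER := 5;    -- the power to raise 2 to
  lv_result NUMBER := 1;
BEGIN
  FOR i IN 1 .. lv_power LOOP
    lv_result := lv_result * 2;   -- multiply by 2 once per iteration
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('2 raised to ' || lv_power || ' is ' || lv_result);
END;
/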

How to avoid an external table's data source file being replaced while another query is using the old data source file

Tom Kyte - 12 hours 49 min ago
The external table's data source file is updated at some interval. If another query is running, replacing the old data source file will cause 1) ORA-29913 and ORA-30653 if reject limit UNLIMITED is not set, and 2) inconsistent data, where part of the data is old sou...
Categories: DBA Blogs

Query on Stored Procedure - MINUS clause

Tom Kyte - 12 hours 49 min ago
Dear All, Ques 1. Please advise if the use of the MINUS clause is best when I am looking to exclude certain specific conditions, or should I be using NOT IN? I am in a situation where I know what to avoid, not sure what all to select, and hence I needed...
Categories: DBA Blogs

When shutting down a container database

Tom Kyte - 12 hours 49 min ago
Hello, Some background info: We are working in a 12c multitenant environment on Windows Server 2012 R2. The container database contains about 20-25 pluggable databases. Typically we have to shut down the container database to perform maintenan...
Categories: DBA Blogs

ORA-29532: Java call terminated by uncaught Java exception: java.lang.OutOfMemoryError

Tom Kyte - 12 hours 49 min ago
When passing a CLOB larger than 3 MB to a Java stored procedure, I get java.lang.OutOfMemoryError. The data is in JSON format, which we try to deserialize into an object using a Java JSON library in the Java stored procedure. We have tried the ...
Categories: DBA Blogs

SP2-0743 and SP2-0042

Tom Kyte - 12 hours 49 min ago
Hi Sir, From the doc: SP2-0042 unknown command command_name - rest of line ignored Cause: The command entered was not valid. Action: Check the syntax of the command you used for the correct options. SP2-0734 Unknown command beginning com...
Categories: DBA Blogs

Isolate Your Code

Michael Dinh - Tue, 2016-10-25 19:19

I fail to understand why an anonymous PL/SQL block is used with dbms_scheduler.

Here is an example:
hawk:(SYSTEM@hawk):PRIMARY> @x.sql
hawk:(SYSTEM@hawk):PRIMARY> set echo on
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2  DBMS_SCHEDULER.CREATE_PROGRAM(
  3  program_name=>'TESTING',
  4  program_action=>'DECLARE
  5  x NUMBER := 100;
  6  BEGIN
  7     FOR i IN 1..10 LOOP
  8        IF MOD(i,2) = 0 THEN
  9           INSERT INTO temp VALUES (i);
 10        ELSE
 11           INSERT INTO temp VALUES (i);
 12        END IF;
 13        x := x + 100;
 14     END LOOP;
 15     COMMIT;
 16  END;',
 17  program_type=>'PLSQL_BLOCK',
 18  number_of_arguments=>0
 19  );
 20  END;
 21  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
Nothing wrong, right? What happens when we strip out the anonymous PL/SQL block and run it on its own?
hawk:(SYSTEM@hawk):PRIMARY> @y.sql
hawk:(SYSTEM@hawk):PRIMARY> DECLARE
  2     x NUMBER := 100;
  3  BEGIN
  4     FOR i IN 1..10 LOOP
  5        IF MOD(i,2) = 0 THEN
  6           INSERT INTO temp VALUES (i);
  7        ELSE
  8           INSERT INTO temp VALUES (i);
  9        END IF;
 10        x := x + 100;
 11     END LOOP;
 12     COMMIT;
 13  END;
 14  /
         INSERT INTO temp VALUES (i);
ERROR at line 6:
ORA-06550: line 6, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 6, column 10:
PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 8, column 10:
PL/SQL: SQL Statement ignored

hawk:(SYSTEM@hawk):PRIMARY> desc temp;
ORA-04043: object temp does not exist

Why not create a stored procedure or package?
hawk:(SYSTEM@hawk):PRIMARY> @z.sql
hawk:(SYSTEM@hawk):PRIMARY> create or replace procedure SP_TESTING
  2  AS
  3  x NUMBER := 100;
  4  BEGIN
  5     FOR i IN 1..10 LOOP
  6        IF MOD(i,2) = 0 THEN
  7           INSERT INTO temp VALUES (i);
  8        ELSE
  9           INSERT INTO temp VALUES (i);
 10        END IF;
 11        x := x + 100;
 12     END LOOP;
 13     COMMIT;
 14  END;
 15  /

Warning: Procedure created with compilation errors.

hawk:(SYSTEM@hawk):PRIMARY> show error

Errors for PROCEDURE SP_TESTING:

LINE/COL ERROR
-------- -----------------------------------------------------------------
7/10     PL/SQL: SQL Statement ignored
7/22     PL/SQL: ORA-00942: table or view does not exist
9/10     PL/SQL: SQL Statement ignored
9/22     PL/SQL: ORA-00942: table or view does not exist

hawk:(SYSTEM@hawk):PRIMARY> create table temp(id int);

Table created.

hawk:(SYSTEM@hawk):PRIMARY> alter procedure SP_TESTING compile;

Procedure altered.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> @a.sql
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2  DBMS_SCHEDULER.CREATE_PROGRAM(
  3  program_name=>'TESTING2',
  4  program_action=>'BEGIN SP_TESTING; END;',
  5  program_type=>'PLSQL_BLOCK',
  6  number_of_arguments=>0
  7  );
  8  END;
  9  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
  2  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> select * from temp;


10 rows selected.
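As an aside, once the logic lives in a stored procedure, you can also skip the PLSQL_BLOCK wrapper entirely and declare the program as a stored procedure. A sketch (the program name here is arbitrary):

BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM(
  program_name=>'TESTING3',
  program_type=>'STORED_PROCEDURE',
  program_action=>'SP_TESTING',
  number_of_arguments=>0
  );
END;
/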


I put my SQL scripts on GitHub

Bobby Durrett's DBA Blog - Tue, 2016-10-25 10:45

I created a new GitHub public repository with my SQL scripts. Here is the URL:


I’ve experimented with GitHub for my Python graphing scripts but wasn’t sure about putting the SQL out there. I don’t really have any comments in the SQL scripts. But, I have mentioned many of the scripts in blog posts so those posts form a type of documentation. Anyway, it is there so people can see it. Also, I get the benefit of using Git to version my scripts and GitHub serves as a backup of my repository.

Also, I have a pile of scripts in a directory on my laptop but I have my scripts mixed in with those that others have written. I’m pretty sure that the repository only has my stuff in it but if someone finds something that isn’t mine let me know and I’ll take it out. I don’t want to take credit for other people’s work. But, the point is to share the things that I have done with the community so that others can benefit just as I benefit from the Oracle community. I’m not selling anything and if there is someone else’s stuff in there it isn’t like I’m making money from it.

Like anything on the web use at your own risk. The repository contains scripts that I get a lot of benefits from but I make no guarantees. Try any script you get from the internet on some test system first and try to understand the script before you even run it there.

I hope that my new SQL repository helps people in their Oracle work.


Categories: DBA Blogs

Wanna become an OCM? Go on and read Kamran's new OCM practical guide - MY TAKE

Syed Jaffar - Tue, 2016-10-25 10:31
When I first heard about Kamran's new book, the OCM practical guide, I said wow, because I was thinking along the same lines a few years back but dropped the idea due to several factors, including the time and effort such a book requires. When he approached me to be one of the technical reviewers, I accepted the deal straightaway without a second thought. I am really honored to be part of this wonderful book, which unfolds the knowledge and helps OCM aspirants get what they dream of.

I remember the debates at various places and at several Oracle forums where people discussed the necessity and advantages of being an Oracle certified professional. There were so many discussions and debates about whether being a certified professional really adds any value to one's career. Anyway, this is not the platform to discuss such things; however, in my own perspective, real experience combined with certification surely boosts the career and opens up more chances for advancement.

First things first, Kamran really put his heart into coming up with such an extraordinary book in the form of a practical guide. I thoroughly enjoyed reviewing every bit and byte of the book, and the amount of practical examples demonstrated in it is just prodigious. Only someone with tremendous real-world experience in the technology could do that. Take a bow, my friend.

Each and every chapter has some great content, neatly explained with about 200+ practical examples. Let me walk through the chapters and give you my input:

Server Configuration provides a detailed step-by-step guide on Oracle 11g Database software setup and new database creation, through the GUI and in silent mode alike. It also outlines the procedure to configure network settings, the listener, TNS names, etc.

Enterprise Manager Grid Control explains the step-by-step procedure to install and configure OEM, and how to schedule and manage tasks with it.

Managing Database Availability is one of the most important chapters, not only from an OCM preparation perspective but also for managing our production databases and deploying optimal backup and recovery strategies to secure them.

The Data Management chapter provides detailed information about the types of materialized views and materialized view logs, and how Oracle uses a precomputed materialized view instead of querying a table with different aggregate functions, providing quick results.

The Data Warehouse Management chapter covers the main data warehouse topics, such as partitioning and managing large objects, and shows how to use various SecureFiles LOB features such as compression, deduplication, encryption, caching and logging.

Performance Tuning is the chapter I particularly enjoyed reading and reviewing. It's the heart of the book, with so much explanation and so many practical examples. This one shouldn't be missed.

Grid Infrastructure and ASM contains all you want to know about the Grid Infrastructure and ASM technologies: how to install the Grid Infrastructure and create disk groups.

Real Application Clusters explores the steps to successfully create a RAC database on two nodes; with only a few additional steps, you will have a working RAC database. It then shows how to create and configure ASM with the command line interface. Once ASM is configured, silent RAC database creation steps are provided.

The Data Guard chapter starts by creating a Data Guard configuration using the command line interface, OEM and the Data Guard broker. It also provides steps for performing switchover and failover with all of the mentioned tools.

In a nutshell, it's a practical guide with the perfect recipe and ingredients to become an OCM, evenly blended with many useful examples and extraordinary explanations. The wait is over; this is the book we have all been looking for, for a long time. Go place your order, get certified, and become an OCM.

You can place your order through Amazon using the URL below:


Consider Your Options for SolidWorks to Windchill Data Migrations

This post comes from Fishbowl Solutions’ Associate MCAD Consultant, Ben Sawyer.

CAD data migrations are most often seen as a huge burden. They can be lengthy, costly, messy, and a general roadblock to a successful project. Organizations planning to migrate SolidWorks data to PTC Windchill should consider their options for the process and tools they use to perform the bulk loading.

At Fishbowl Solutions, our belief is that the faster you can load all your data accurately into Windchill, the faster your company can implement critical PLM business processes and realize the results of initiatives like faster NPI, streamlined change and configuration management, and improved quality.

There are two typical scenarios we encounter with these kinds of data migration projects: the SolidWorks data resides on a Network File System (NFS), or it resides in either PDMWorks or EPDM.

The options for the process and the tools used depend on other factors as well. The most common guiding factors are the quantity of data and the required project completion date. Here are the typical project scenarios.

Scenario One: Files on a Network File System

Manual Migration

There is always an option to manually migrate SolidWorks data into Windchill. However, if an organization has thousands of files from multiple products that need to be imported, this process can be extremely daunting. Loading manually involves bringing files into the Windchill workspace; carefully resolving any missing dependents, errors, and duplicates; setting destination folders, revisions, and lifecycles; and fixing bad metadata. (Those who have tried this approach with large data quantities know the pain we are talking about!)

Automated Solution

Years ago, Fishbowl developed its LinkLoader tool for SolidWorks as a viable solution to complete a Windchill bulk loading project with speed and accuracy.

Fishbowl’s LinkLoader solution follows a simple workflow to help identify data to be cleansed and mass loaded with accurate metadata. The steps are as follows:

1. Discovery
In this initial stage, the user chooses the mass of SolidWorks data to be loaded into Windchill. Since Windchill doesn’t allow duplicate named CAD files in the system, the software quickly identifies these duplicate files. It is up to the user to resolve the duplicate files or remove them from the data loading set.

2. Validation
The validation stage will ensure files are retrievable, attributes/parameters are extracted (for use in later stages), and relationships with other SolidWorks files are examined. LinkLoader captures all actions. The end user will need to resolve any errors or remove the data from the loading set.

3. Mapping
Moving toward the bulk loading stage, it is necessary to confirm and/or modify the attribute-mapping file as desired. The only required fields for mapping are lifecycle, revision/version, and the Windchill folder location. End users are able to leverage the attributes/parameter information from the validation as desired, or create their own ‘Instance Based Attribute’ list to map with the files.

4. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in an ‘Error List Report’ and LinkLoader will simply move on to the next file.

Scenario Two: Files reside in PDMWorks or EPDM

Manual Migration

There is also the option of a manual data migration from one system to another if the files reside in PDMWorks or EPDM. However, this process can be as tedious and drawn out as when the files are on an NFS, or perhaps even more so.

Automated Solution

Having files within PDMWorks or EPDM can make the migration process more straightforward and faster than the NFS projects. Fishbowl has created an automated solution tool that extracts the latest versions of each file from the legacy system and immediately prepares it for loading into Windchill. The steps are as follows:

1. Extraction (LinkExtract)
In this initial stage, Fishbowl uses its LinkExtract tool to pull the latest version of all SolidWorks files, determine references, and extract all the attributes for the files as defined in PDMWorks or EPDM.

2. Mapping
Before loading the files, it is necessary to confirm and/or modify the attribute mapping file as desired. Admins can fully leverage the attributes/parameter information from the Extraction step, or can start from scratch if they find it easier. Often the destination Windchill system will have different terminology or states, and it is easy to remap those as needed in this step.

3. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in the Error List Report and LinkLoader will move on to the next file.

Proven Successes with LinkLoader

Many of Fishbowl’s customers have purchased and successfully ran LinkLoader themselves with little to no assistance from Fishbowl. Other customers of ours have utilized our consulting services to complete the migration project on their behalf.

With Fishbowl’s methodology centered on “Customer First”, our focus and support continuously keeps our customers satisfied. This is the same commitment and expertise we will bring to any and every data migration project.

If your organization is looking to consolidate SolidWorks CAD data to Windchill in a timely and effective manner, regardless of the size and scale of the project, our experts at Fishbowl Solutions can get it done.

For example, Fishbowl partnered with a multi-billion dollar medical device company with a short time frame to migrate over 30,000 SolidWorks files from a legacy system into Windchill. Fishbowl’s expert team took initiative and planned the process to meet their tight industry regulations and finish on time and on budget. After the Fishbowl team executed test migrations, the actual production migration process only took a few hours, thus eliminating engineering downtime.

If your organization is seeking the right team and tools to complete a SolidWorks data migration to Windchill, reach out to us at Fishbowl Solutions.

If you’d like more information about Fishbowl’s LinkLoader tool or our other products and services for PTC Windchill and Creo, check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department.

Contact Us

Rick Passolt
Senior Account Executive

Ben Sawyer is an Associate MCAD Consultant at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do. 

The post Consider Your Options for SolidWorks to Windchill Data Migrations appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Making Culture Actionable

WebCenter Team - Tue, 2016-10-25 08:48

Authored by: Dave Gray, Entrepreneur, Author, Consultant and Founder of XPLANE

Dave Gray

I’ve never met a senior leader who didn’t acknowledge the importance of culture. We all agree it is vital. Numerous examples in multiple domains (not just business but war, sports, and social change) have demonstrated over and over that culture is a more powerful force-multiplier than money, power or superior technology. 

Superior business culture is the most-often cited factor in business success stories, like Google, Southwest Air, and Nordstrom. Toxic or stagnant culture is most-often blamed for the catastrophic downfall of companies like Kodak, Nokia, and Motorola. Jack Welch and Lou Gerstner, who presided over two of the most effective comebacks in corporate history (GE and IBM, respectively), both cited culture as one of their top priorities.

Companies are made out of people, after all, and the most successful companies make people, and culture, a top strategic priority. And yet as leaders we struggle to get a grip on culture. It’s slippery, difficult to name, define, measure or get any kind of traction on.

Business volatility requires business to adapt.

We are entering a new business era characterized by high levels of volatility, uncertainty and disruption. There is no organization that is not being affected by these winds of change. Established companies are being disrupted by newcomers with such astonishing regularity that the pattern now has a name, “Getting Netflixed”, a reference to the way Netflix outfoxed and outmaneuvered Blockbuster by redefining and digitizing the customer experience. What Netflix did to video rentals, companies like AirBnB, Uber, Spotify and Zipcar are doing to hospitality, transportation, music and car rentals.

When the business environment is evolving this rapidly, culture must keep up.

Culture change is hard.

Culture change is also difficult. Changing culture requires changing habits, behaviors, and routines that have solidified over decades. To imagine the scope of a culture change initiative, just imagine 5,000 people trying to quit smoking at the same time. It’s incredibly hard, and the risk of failure is high. Even success does not guarantee you will be appreciated. Jack Welch and Lou Gerstner are controversial figures to this day.

Three culture change challenges.

Leaders interested in enabling culture change face three problems:

First, getting a grip on the current culture by identifying the direct links between business results, behavior, and organizational enablers like incentives, work systems, and management practices.

Second, imagining and designing the future culture, including not just the desired business results and behaviors, but the incentives, systems, habits and practices which will enable the new culture to emerge.

Third, the hard work of shifting deeply embedded, entrenched habits and behaviors. For leaders this is especially difficult, because they will necessarily be learning and acting out new behaviors while they are also on display, watched by everyone. Not to mention that culture change, by nature, often occurs in difficult business circumstances, within organizations that will certainly include many critics, cynics and skeptics.

Three steps to a new culture.

1. Diagnose your current culture.

The first step is to build a solid understanding of the culture you have today. Historically this has been a difficult undertaking, but today it is easier, due to the emergence of business design tools for rapidly diagnosing, describing and designing business strategies and systems.

Leading the charge for business design tools are two of the top 50 business thinkers in the world, Alex Osterwalder of Strategyzer and Yves Pigneur of the University of Lausanne, designers of the Business Model Canvas, which is used by more than 5 million people in organizations around the world.

The Culture Map: A business design tool.

Alex and Yves helped us develop a new tool for understanding and designing culture, called the Culture Map.

The Culture Map links business outcomes and behaviors with the enablers and blockers that are caused or influenced by managers. 

Culture Mapping is a process that involves deep listening exercises designed to find the real underlying system behind the noise that masks many business realities.

2. Design your future culture.

This is a more difficult exercise than the current state diagnostic, because it takes not just the imagination to visualize a future state, but also the humility to be realistic about what can be achieved and how quickly it can happen.

A visual culture map depicting desired behaviors.

An important part of the design process is visualizing future behaviors in high-granularity detail, to eliminate as much doubt, uncertainty, and skepticism as possible. 

We recommend that you develop a visual culture map depicting the culture you aspire to, showing people exactly what you want them to do and say in the future you envision.

The description should be as clear, specific and detailed as possible. 

If new behaviors are not clearly and visually articulated, the most likely outcome is that the old behaviors will simply continue: business as usual but with new names.

3. Do the hard work of following through.

True culture change is a difficult endeavor, not to be taken lightly. It can take up to three years for a new culture to take root.

We like to compare culture work to gardening. Designing the garden is the easy part. For your culture change efforts to succeed, you will need the patience and dedication of a gardener. Like a garden, a new culture will grow at its pace, not your pace. There is no way to speed up this kind of change. 

People must first hear that you are committed to the change. Then, over time, they will closely observe your actions and ongoing behavior, looking for discrepancies and clues. Most people must overcome some cynicism and skepticism before they will believe that the change is real. From that point on, there is still much work to do in order for those changes in belief to become new habits, routines and behaviors.

Six best practices for driving cultural change.

Make culture a top priority.

When culture change is necessary, it must be a top priority for senior management. If there is one thing we have learned from more than 20 years working on organizational effectiveness and change initiatives, it is this: 

When culture change is necessary, if it is not one of the executive team’s top three priorities, the culture change will fail. 

In such cases, it’s highly likely that the organization’s strategies, and in some cases the company itself, will also fail.

Lead by example.

The tone for culture is set at the top. It is critically important and cannot be delegated. It must be lived and acted daily, in large and small ways. People may listen to what you say, but they also watch what you do. And if your actions don’t match your words, they follow the example that’s set by your actions.

Focus on one habit at a time.

We recommend that executives focus on no more than six key behaviors, and that they focus on these one at a time. For a period of three to six months, leaders focus on changing one key behavior, practicing it with each other and with employees, making the commitment privately and in public to make that a personal habit, and soliciting feedback from colleagues and employees.

Establish a rhythm and track progress.

Most leaders and managers create change most effectively when they have a number that they are trying to move up and toward the right. Culture is no different.

The most effective tool we have seen for measuring cultural improvement over time is a simple employee survey, similar to what you see on Yelp or Amazon reviews. These surveys measure employees’ perceptions about behaviors, and in the world of culture, perception is reality.

Culture perceptions can be measured by frequent, simple, easy surveys.

Measure employee perceptions about the habit you are focusing on. Poll and review the numbers weekly. 

Look at the numbers as part of your regular operational rhythm, such as weekly team meetings and status updates. Ask probing questions about what causes and influences those perceptions.

Track your progress over time. Talk about your culture survey results publicly and make them a topic of conversation throughout the company.

Plan for a short-term drop in performance.

Expect a “muddy middle” period, where people are shedding their old behaviors but have not yet embraced or learned the new ones. Performance will most likely drop during this period. This is where many culture change initiatives lose their resolve and snap back to old habits and routines.

Be patient.

Culture work takes time. The first habit is the hardest to break, and the most difficult days are the early days. As people begin to see progress and see that your actions match your words, resistance and skepticism will decrease and you will gain momentum over time.

With a strong commitment from leaders who lead by personal example, ask the right questions, and track cultural performance over time, you can make steady progress toward a revitalized culture.

Learn more about the Culture Map and see it for yourself in this webcast on October 27 at 10:00am PDT!

Dave Gray is the founder of XPLANE.

Case construct with WHERE clause

Tom Kyte - Tue, 2016-10-25 06:06
Hi Tom, I have a question and I don't know if this is possible or if I'm just doing something wrong, because I get multiple errors like missing right paren or missing keyword. I want to use the CASE construct after a WHERE clause to build an expre...
Categories: DBA Blogs
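For what it's worth, the pattern that usually resolves those errors is comparing a column (or expression) to a CASE expression, rather than putting whole predicates inside the CASE. A sketch with hypothetical table, column and bind names:

select *
  from orders
 where status = case
                  when :only_open = 'Y' then 'OPEN'
                  else 'CLOSED'
                end;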

Missing Physical Reads

Tom Kyte - Tue, 2016-10-25 06:06
Hi Tom, Please find below the experimented done, sequentially. Scripts ---------- create table cust (cust_id number, last_name varchar2(20),first_name varchar2(20)); create index cust_idx1 on cust(last_name); SQL> set autotrace on; SQL> ...
Categories: DBA Blogs

Function with multi-dimensional array as parameter?

Tom Kyte - Tue, 2016-10-25 06:06
How would I define a function that takes a multi-dimensional array as an input parameter and returns json_tbl PIPELINED that has been defined as CREATE OR REPLACE TYPE CIC3.json_t as OBJECT (JSON_TEXT varchar2(30000)); CREATE OR REPLACE TYPE ...
Categories: DBA Blogs

Depth of attributes

Tom Kyte - Tue, 2016-10-25 06:06
Hello, I have a situation where my data (for a given sys_id) has values for multiple depths (level1 attribute, level2 attribute and so on). For a given sys_id, I have to select the rows that have the maximum depth. However, as an example, if a va...
Categories: DBA Blogs

Oracle Auditing - Syslog

Tom Kyte - Tue, 2016-10-25 06:06
Hi Guys, I have two questions with regard to Oracle database auditing via syslog. 1. When auditing via OS syslog, what is the ideal value for the AUDIT_SYSLOG_LEVEL parameter, where AUDIT_SYSLOG_LEVEL = facility.priority? It is the priority...
Categories: DBA Blogs

how to optimize a query that is concatenating fields routinely

Tom Kyte - Tue, 2016-10-25 06:06
Hi. I'm trying to find a way to optimize this situation below. Example table definition: create table rw_test (A varchar2(10), B varchar2(10), C varchar2(10), D varchar2(10), E varchar2(10), F number(10), entry_date date); ...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator