On a customer site, one of the users complained about being kicked out each time he wanted to create new documents in D2. This issue happens in a default deployment of D2 and D2Config in a WebLogic domain. We found out that the user sessions for D2Config and D2 were conflicting with each other.
This issue occurs if the D2Config and D2 applications are opened in the same browser in different tabs and the user navigates from D2 to D2Config and vice versa.
The error message is misleading, as it shows a session timeout even though the user has just signed in.
Using an HTTP header tracing tool, we saw that the JSESSIONID cookie, which stores the HTTP session for Java applications, was changing when switching from one application to the other. This showed us that both Java applications were using the same session cookie, which led to the session being lost.
Workaround or Fix:
An easy fix is to update the weblogic.xml file included in the D2.war file with a section defining a new session cookie name, as shown below:
<session-descriptor>
  <cookie-name>JSESSIONID_D2</cookie-name>
  <cookie-http-only>false</cookie-http-only>
</session-descriptor>
To proceed, follow the steps below:
- Extract the weblogic.xml file from the war file
mkdir mytemp
cd mytemp
-- Put the D2.war file here
jar xvf D2.war WEB-INF/weblogic.xml
- Edit the file and add the session-descriptor block above, just after the closing description tag.
- Update the D2.war file with the new weblogic.xml
jar uvf D2.war WEB-INF/weblogic.xml
- And finally redeploy the D2.war file to the WebLogic Server.
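For reference, here is roughly what the relevant part of the edited weblogic.xml should look like. The surrounding elements and the description text are illustrative; your file will have its own existing content, and only the session-descriptor block is new:

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <description>D2 web application</description>
    <!-- New block: dedicated session cookie so D2 and D2Config
         no longer share the default JSESSIONID cookie -->
    <session-descriptor>
        <cookie-name>JSESSIONID_D2</cookie-name>
        <cookie-http-only>false</cookie-http-only>
    </session-descriptor>
</weblogic-web-app>
```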
This fix has been submitted to and validated by EMC support.
The post Documentum story – User gets kicked out of D2 when navigating in D2Config appeared first on Blog dbi services.
In this blog, I will show you how to set up the Primary DSearch and IndexAgent in HTTPS for xPlore 1.5. The documentation about that is available on the EMC website as always; the PDF is named “xPlore 1.5 Installation Guide.pdf”. The reason I wanted to write this blog is that the documentation is not too bad, but there are still some missing parts, and without them your configuration will just not work properly. Moreover, I think it is better to have a concrete and complete example rather than just a PDF with the information spread over 40 different pages.
So let’s begin the configuration. The first thing to do is obviously to log in to the Full Text Server where your xPlore 1.5 is installed. For this blog, I will use /app/xPlore as the installation folder of xPlore. I will also use a Self-Signed SSL Certificate with a Certificate Chain composed of a Root and Gold CA. So let’s import the Certificate Chain into xPlore (the following commands assume all certificates are available under “/app/xPlore/jboss7.1.1/certs”):
[xplore@xplore_server_01 ~]$ /app/xPlore/java64/1.7.0_72/bin/keytool -import -trustcacerts -alias root_ca -keystore /app/xPlore/java64/1.7.0_72/jre/lib/security/cacerts -file /app/xPlore/jboss7.1.1/certs/Internal_Root_CA.cer
Enter keystore password:
[xplore@xplore_server_01 ~]$ /app/xPlore/java64/1.7.0_72/bin/keytool -import -trustcacerts -alias gold_ca -keystore /app/xPlore/java64/1.7.0_72/jre/lib/security/cacerts -file /app/xPlore/jboss7.1.1/certs/Internal_Gold_CA1.cer
Enter keystore password:
All Java processes using /app/xPlore/java64/1.7.0_72/bin/java will now trust the Self-Signed SSL Certificate because the Certificate Chain is trusted. When this is done, shut down all xPlore processes (Primary DSearch and IndexAgent(s)) and let’s configure the Primary DSearch in HTTPS:
[xplore@xplore_server_01 ~]$ /app/xPlore/scripts/startstop stop
 ** Indexagent_DOCBASE1 has been stopped successfully
 ** PrimaryDsearch has been stopped successfully
[xplore@xplore_server_01 ~]$ cd /app/xPlore/dsearch/admin
[xplore@xplore_server_01 admin]$ ./xplore.sh -f scripts/ConfigSSL.groovy -enable -component IS \
    -alias ft_alias -keystore "/app/xPlore/jboss7.1.1/certs/xplore_server_01.jks" \
    -storepass K3ySt0r3P4ssw0rd -indexserverconfig "/app/xPlore/config/indexserverconfig.xml" \
    -isname PrimaryDsearch -ianame Indexagent_DOCBASE1
- “-enable” means that HTTPS will be enabled and HTTP will be disabled. If you want both to be enabled, use the “-dual” option instead
- “-component” defines which component should be configured with this command. It can be “IS” (IndexServer), “IA” (IndexAgent) or “ALL” (IndexServer and IndexAgent)
- “-isname” defines the name of the IndexServer/Primary DSearch that you installed
- “-ianame” defines the name of the IndexAgent that you installed
Now what happens if you have more than one IndexAgent on the same server? Well, the script isn’t smart enough for that, and that’s the reason why I didn’t put “ALL” above but just “IS”. You might also have noticed that I defined the “-ianame” parameter with “Indexagent_DOCBASE1”. This is because even if we are only configuring the Primary DSearch in HTTPS, all IndexAgents have a reference in a configuration file that defines which port and protocol the IA should use to connect to the DSearch, and if this isn’t set up properly, the IA will not be able to start.
Now the IndexServer is configured in HTTPS so let’s do the same thing for the IndexAgent:
[xplore@xplore_server_01 admin]$ ./xplore.sh -f scripts/ConfigSSL.groovy -enable -component IA \
    -alias ft_alias -keystore "/app/xPlore/jboss7.1.1/certs/xplore_server_01.jks" \
    -storepass K3ySt0r3P4ssw0rd -indexserverconfig "/app/xPlore/config/indexserverconfig.xml" \
    -ianame Indexagent_DOCBASE1 -iaport 9200
As you can see above, this time there is no need to add the “-isname” parameter; it is not needed for the IndexAgent(s). Let’s say that you have a second IndexAgent for the docbase named DOCBASE2; then you also have to execute the above command for this second IndexAgent:
[xplore@xplore_server_01 admin]$ ./xplore.sh -f scripts/ConfigSSL.groovy -enable -component IA \
    -alias ft_alias -keystore "/app/xPlore/jboss7.1.1/certs/xplore_server_01.jks" \
    -storepass K3ySt0r3P4ssw0rd -indexserverconfig "/app/xPlore/config/indexserverconfig.xml" \
    -ianame Indexagent_DOCBASE2 -iaport 9220
In case you didn’t know: yes, each IndexAgent needs at least 20 consecutive ports (so 9200 to 9219 for Indexagent_DOCBASE1 and 9220 to 9239 for Indexagent_DOCBASE2).
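The 20-port blocks are easy to derive. A tiny hypothetical helper (not an xPlore tool, just illustrating the allocation scheme above with base port 9200):

```python
def ia_port_range(index, first_base=9200, block=20):
    """Return (start, end) of the port block reserved by the index-th IndexAgent.

    index=0 is the first IndexAgent installed on the server.
    """
    start = first_base + index * block
    return start, start + block - 1

# Indexagent_DOCBASE1 -> (9200, 9219), Indexagent_DOCBASE2 -> (9220, 9239)
```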
When configuring the IndexServer in HTTPS, I specified “-ianame”. This is, as I said before, because there is a reference somewhere to the protocol/port used. This reference has normally been updated properly for Indexagent_DOCBASE1, but not for Indexagent_DOCBASE2. Therefore you need to do that manually:
[xplore@xplore_server_01 admin]$ grep -B1 -A10 dsearch_qrserver_protocol /app/xPlore/jboss7.1.1/server/DctmServer_Indexagent_DOCBASE2/deployments/IndexAgent.war/WEB-INF/classes/indexagent.xml
<parameter>
  <parameter_name>dsearch_qrserver_protocol</parameter_name>
  <parameter_value>HTTP</parameter_value>
</parameter>
<parameter>
  <parameter_name>dsearch_config_host</parameter_name>
  <parameter_value>xplore_server_01</parameter_value>
</parameter>
<parameter>
  <parameter_name>dsearch_config_port</parameter_name>
  <parameter_value>9300</parameter_value>
</parameter>
Just open this file and update the lines printed above by replacing “HTTP” with “HTTPS” and “9300” with “9302”, and that’s it. If you have several IndexAgents, you need to do that for all of them.
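If you have many IndexAgents, a small script can apply the same substitutions to each indexagent.xml rather than editing them by hand. This is only a sketch (the parameter names match the grep output above, but the script is mine, not part of xPlore; always run it on a copy of the file first):

```python
import xml.etree.ElementTree as ET

# New values for the parameters shown in the grep output above.
CHANGES = {
    "dsearch_qrserver_protocol": "HTTPS",
    "dsearch_config_port": "9302",
}

def update_indexagent_xml(xml_text):
    """Rewrite each <parameter_value> whose sibling <parameter_name> is in CHANGES."""
    root = ET.fromstring(xml_text)
    for param in root.iter("parameter"):
        name = param.findtext("parameter_name")
        if name in CHANGES:
            param.find("parameter_value").text = CHANGES[name]
    return ET.tostring(root, encoding="unicode")
```

You would read each DctmServer_Indexagent_*/…/indexagent.xml, pass its content through this function, and write the result back.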
The next step is to login to the Content Server (e.g.: ssh dmadmin@content_server_01) and update some properties in the docbase:
[dmadmin@content_server_01 ~]$ iapi DOCBASE1 -Udmadmin -Pxxx

EMC Documentum iapi - Interactive API interface
(c) Copyright EMC Corp., 1992 - 2015
All rights reserved.
Client Library Release 7.2.0050.0084

Connecting to Server using docbase DOCBASE1
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a8014087a started for user dmadmin."
Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0
API> ?,c,select r_object_id from dm_ftengine_config where any lower(param_value) = lower('xplore_server_01');
r_object_id
----------------
083f245a800052ae
(1 row affected)
API> fetch,c,083f245a800052ae
...
OK
API> dump,c,l
...
USER ATTRIBUTES
  object_name     : DSearch Fulltext Engine Configuration
  ...
  param_name      : dsearch_qrserver_protocol
                  : dsearch_qrygen_mode
                  : dsearch_qrserver_target
                  : dsearch_qrserver_port
                  : dsearch_config_port
                  : dsearch_qrserver_host
                  : dsearch_domain
                  : dsearch_config_host
                  : query_plugin_mapping_file
                  : load_balancer_enabled
                  : ft_wildcards_mode
  param_value     : HTTP
                  : both
                  : /dsearch/IndexServerServlet
                  : 9300
                  : 9300
                  : xplore_server_01
                  : DOCBASE1
                  : xplore_server_01
                  : /app/dctm/server/fulltext/dsearch/dm_AttributeMapping.xml
                  : true
                  : trailing_implicit
  ...
You might have noticed that I specified a WHERE clause on the select to find the r_object_id of the xPlore Server. That’s because in this case there are two xPlore Servers in an HA setup (the parameter load_balancer_enabled is set to true), and I only want to update the right object. So let’s update the parameters highlighted above (“dsearch_qrserver_protocol”, “dsearch_qrserver_port” and “dsearch_config_port”):
API> set,c,l,param_value
SET> HTTPS
...
OK
API> set,c,l,param_value
SET> 9302
...
OK
API> set,c,l,param_value
SET> 9302
...
OK
API> save,c,l
...
OK
With these steps, we updated the dm_ftengine_config object. The next step is to also update the URL of the IndexAgent:
API> ?,c,select r_object_id from dm_server_config
r_object_id
----------------
3d3f245a80000102
3d3f245a80003796
(2 rows affected)
API> fetch,c,3d3f245a80000102
...
OK
API> dump,c,l
...
USER ATTRIBUTES
  object_name     : DOCBASE1
  ...
  app_server_name : do_method
                  : do_mail
                  : do_bpm
                  : xplore_server_01_9200_IndexAgent
                  : xplore_server_02_9200_IndexAgent
  app_server_uri  : https://content_server_01:9082/DmMethods/servlet/DoMethod
                  : https://content_server_01:9082/DmMail/servlet/DoMail
                  : https://content_server_01:9082/bpm/servlet/DoMethod
                  : http://xplore_server_01:9200/IndexAgent/servlet/IndexAgent
                  : http://xplore_server_02:9200/IndexAgent/servlet/IndexAgent
  ...
So we setup the IndexAgent installed on xplore_server_01 in HTTPS and therefore we also need to update the URL referenced in the docbase. That’s actually one of the things that aren’t in the official documentation at the moment. This is done as before:
API> set,c,l,app_server_uri
SET> https://xplore_server_01:9202/IndexAgent/servlet/IndexAgent
...
OK
API> save,c,l
...
OK
As you saw above, this is an environment with two dm_server_config objects (two Content Servers) and two IndexAgents. We set up the Primary DSearch and the IndexAgent installed on xplore_server_01 in HTTPS. The dm_ftengine_config for this Primary DSearch has been updated, and the URLs defined in one dm_server_config object have been updated too. But the same references are present in the second dm_server_config, so you also need to do that for the second one (3d3f245a80003796 in this case). Same steps, so just repeat with the other r_object_id!
Ok, so now all objects in the docbase have been updated successfully. Therefore return to the xPlore Server, clean the caches, and start everything again:
[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*
[xplore@xplore_server_01 ~]$ /app/xPlore/scripts/startstop start
 ** PrimaryDsearch has been started successfully
 ** Indexagent_DOCBASE1 has been started successfully
As said before, some of these steps aren’t described or explained in the official documentation, and that will lead you to a non-working situation… In addition, there are some bugs impacting the proper behavior of the Primary DSearch and/or the IndexAgents when configured in HTTPS. We reported these bugs to EMC, which was able to provide a fix for some of them and include it in a later patch, but as you know it is not always possible to upgrade or patch your environment. For example, with CS 7.2 P02 or P05, searches will NOT work against a DSearch in HTTPS (corrected in P08 or P09, if I remember correctly), but I will not describe that in this blog. If you are facing an issue with the IndexAgents not responding in HTTPS, please check this blog.
The post Documentum story – Setup the DSearch & IndexAgent in HTTPS (xPlore) appeared first on Blog dbi services.
How to avoid an external table's data source file being replaced while another query is still using the old data source file
I fail to understand why an anonymous PL/SQL block is used with dbms_scheduler. Here is an example:
hawk:(SYSTEM@hawk):PRIMARY> @x.sql
hawk:(SYSTEM@hawk):PRIMARY> set echo on
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2    DBMS_SCHEDULER.CREATE_PROGRAM(
  3      program_name=>'TESTING',
  4      program_action=>'DECLARE
  5        x NUMBER := 100;
  6      BEGIN
  7        FOR i IN 1..10 LOOP
  8          IF MOD(i,2) = 0 THEN
  9            INSERT INTO temp VALUES (i);
 10          ELSE
 11            INSERT INTO temp VALUES (i);
 12          END IF;
 13          x := x + 100;
 14        END LOOP;
 15        COMMIT;
 16      END;',
 17      program_type=>'PLSQL_BLOCK',
 18      number_of_arguments=>0
 19    );
 20  END;
 21  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> -- exec DBMS_SCHEDULER.DROP_PROGRAM('TESTING');

Nothing wrong, right? What happens when we strip out and run the anonymous PL/SQL block?
hawk:(SYSTEM@hawk):PRIMARY> @y.sql
hawk:(SYSTEM@hawk):PRIMARY> DECLARE
  2    x NUMBER := 100;
  3  BEGIN
  4    FOR i IN 1..10 LOOP
  5      IF MOD(i,2) = 0 THEN
  6        INSERT INTO temp VALUES (i);
  7      ELSE
  8        INSERT INTO temp VALUES (i);
  9      END IF;
 10      x := x + 100;
 11    END LOOP;
 12    COMMIT;
 13  END;
 14  /
    INSERT INTO temp VALUES (i);
                *
ERROR at line 6:
ORA-06550: line 6, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 6, column 10:
PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 8, column 10:
PL/SQL: SQL Statement ignored

hawk:(SYSTEM@hawk):PRIMARY> desc temp;
ERROR:
ORA-04043: object temp does not exist

Why not create a stored procedure or package instead?
hawk:(SYSTEM@hawk):PRIMARY> @z.sql
hawk:(SYSTEM@hawk):PRIMARY> create or replace procedure SP_TESTING
  2  AS
  3    x NUMBER := 100;
  4  BEGIN
  5    FOR i IN 1..10 LOOP
  6      IF MOD(i,2) = 0 THEN
  7        INSERT INTO temp VALUES (i);
  8      ELSE
  9        INSERT INTO temp VALUES (i);
 10      END IF;
 11      x := x + 100;
 12    END LOOP;
 13    COMMIT;
 14  END;
 15  /

Warning: Procedure created with compilation errors.

hawk:(SYSTEM@hawk):PRIMARY> show error
Errors for PROCEDURE SP_TESTING:

LINE/COL ERROR
-------- -----------------------------------------------------------------
7/10     PL/SQL: SQL Statement ignored
7/22     PL/SQL: ORA-00942: table or view does not exist
9/10     PL/SQL: SQL Statement ignored
9/22     PL/SQL: ORA-00942: table or view does not exist

hawk:(SYSTEM@hawk):PRIMARY> create table temp(id int);
Table created.

hawk:(SYSTEM@hawk):PRIMARY> alter procedure SP_TESTING compile;
Procedure altered.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.

hawk:(SYSTEM@hawk):PRIMARY> @a.sql
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2    DBMS_SCHEDULER.CREATE_PROGRAM(
  3      program_name=>'TESTING2',
  4      program_action=>'BEGIN SP_TESTING; END;',
  5      program_type=>'PLSQL_BLOCK',
  6      number_of_arguments=>0
  7    );
  8  END;
  9  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.

hawk:(SYSTEM@hawk):PRIMARY> BEGIN SP_TESTING; END;
  2  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> select * from temp;

        ID
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.
I created a new GitHub public repository with my SQL scripts. Here is the URL:
I’ve experimented with GitHub for my Python graphing scripts but wasn’t sure about putting the SQL out there. I don’t really have any comments in the SQL scripts. But, I have mentioned many of the scripts in blog posts so those posts form a type of documentation. Anyway, it is there so people can see it. Also, I get the benefit of using Git to version my scripts and GitHub serves as a backup of my repository.
Also, I have a pile of scripts in a directory on my laptop but I have my scripts mixed in with those that others have written. I’m pretty sure that the repository only has my stuff in it but if someone finds something that isn’t mine let me know and I’ll take it out. I don’t want to take credit for other people’s work. But, the point is to share the things that I have done with the community so that others can benefit just as I benefit from the Oracle community. I’m not selling anything and if there is someone else’s stuff in there it isn’t like I’m making money from it.
Like anything on the web, use it at your own risk. The repository contains scripts that I get a lot of benefit from, but I make no guarantees. Try any script you get from the internet on some test system first, and try to understand the script before you even run it there.
I hope that my new SQL repository helps people in their Oracle work.
I remember the debates at various places and at several Oracle forums where people discussed the necessity and advantages of being an Oracle certified professional. There were so many discussions and debates about whether being a certified professional really adds any value to one's career. Anyway, this is not the platform to discuss such things; however, in my own perspective, real experience combined with certification surely boosts the career and gives more chances to advance in it.
First things first: Kamran really put his heart into coming up with such an extraordinary book in the form of a practical guide. I thoroughly enjoyed reviewing every bit and byte of the book, and the number of practical examples demonstrated in it is just prodigious. Only someone with tremendous real-world experience in the technology could do that. Take a bow, my friend.
Each and every chapter has some great content, neatly explained with 200+ practical examples. Let me walk through the chapters and give you my inputs:
Server Configuration provides a detailed step-by-step guide on Oracle 11g Database Software setup and new database creation through GUI and silent mode alike. It also outlines the procedure to configure network settings, the listener, TNS names, etc.
Enterprise Manager Grid Control explains step-by-step procedure to install and configure OEM, and how to schedule and manage stuff.
Managing Database Availability is one of the important chapters, not only from an OCM preparation perspective but also for managing our production databases and deploying optimal backup and recovery strategies to secure them.
Data Management chapter provides detailed information about types of materialized view and materialized view log and how Oracle uses precomputed materialized view instead of querying a table with different aggregate functions and provides a quick result.
Data Warehouse Management chapter provides information about main data warehouse topics such as partitioning and managing large objects. Next, we talk about large objects and show how to use various SecureFile LOB features such as compression, deduplication, encryption, caching and logging.
Performance Tuning: I particularly enjoyed reading and reviewing this chapter. It's the heart of the book, with so much explanation and so many practical examples. This one shouldn't be missed.
Grid Infrastructure and ASM contains all you wanted to know about GRID infrastructure and ASM technologies. How to install GRID and create disk groups.
Real Application Clusters explores the steps to successfully create a RAC database on two nodes. With only a few additional steps, you will successfully create a RAC database. Then you see how to create and configure ASM with the command line interface. Once ASM is configured, silent RAC database creation steps are provided.
Data Guard chapter starts by creating a data guard using command line interface, OEM and data guard broker. It also provides steps on performing switchover and failover using all mentioned tools.
In a nutshell, it's a practical guide with the perfect recipe and ingredients to become an OCM, evenly blended with many useful examples and extraordinary explanations. The wait is over; this is the book we have all been looking for for a long time. Go and place your order, get certified, and become an OCM.
You can place the order through Amazon, use the below URL:
This post comes from Fishbowl Solutions’ Associate MCAD Consultant, Ben Sawyer.
CAD data migrations are most often seen as a huge burden. They can be lengthy, costly, messy, and a general road block to a successful project. Organizations planning on migrating SolidWorks data to PTC Windchill should consider their options when it comes to the process and tools they utilize to perform the bulk loading.
At Fishbowl Solutions, our belief is that the faster you can load all your data accurately into Windchill, the faster your company can implement critical PLM business processes and realize the results of initiatives like faster NPI, streamlined change and configuration management, and improved quality.
There are two typical project scenarios we encounter with these kinds of data migration projects. SolidWorks data resides on a Network File System (NFS) or resides in either PDMWorks or EPDM.
The options for this process and the tools used will be dependent on other factors as well. The most common guiding factors to influence decisions are the quantity of data and the project completion date requirements. Here are typical project scenarios.
Scenario One: Files on a Network File System
There is always an option to manually migrate SolidWorks data into Windchill. However, if an organization has thousands of files from multiple products that need to be imported, this process can be extremely daunting. Loading manually involves bringing files into the Windchill workspace, carefully resolving any missing dependents, errors, and duplicates, setting destination folders, revisions, and lifecycles, and fixing bad metadata. (Those who have tried this approach with large data quantities in the past know the pain we are talking about!)
Years ago, Fishbowl developed its LinkLoader tool for SolidWorks as a viable solution to complete a Windchill bulk loading project with speed and accuracy.
Fishbowl’s LinkLoader solution follows a simple workflow to help identify data to be cleansed and mass loaded with accurate metadata. The steps are as follows:
1. Duplicate Identification

In this initial stage, the user chooses the mass of SolidWorks data to be loaded into Windchill. Since Windchill doesn't allow duplicate named CAD files in the system, the software quickly identifies these duplicate files. It is up to the user to resolve the duplicate files or remove them from the data loading set.

2. Validation

The validation stage will ensure files are retrievable, attributes/parameters are extracted (for use in later stages), and relationships with other SolidWorks files are examined. LinkLoader captures all actions. The end user will need to resolve any errors or remove the data from the loading set.

3. Attribute Mapping

Moving toward the bulk loading stage, it is necessary to confirm and/or modify the attribute-mapping file as desired. The only required fields for mapping are lifecycle, revision/version, and the Windchill folder location. End users are able to leverage the attributes/parameter information from the validation as desired, or create their own ‘Instance Based Attribute’ list to map with the files.
4. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in an ‘Error List Report’ and LinkLoader will simply move on to the next file.
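The duplicate check from the first step above can be illustrated with a short sketch. This is a hypothetical simplification for illustration only, not LinkLoader's actual implementation; it groups files by name alone, since Windchill rejects duplicate CAD file names regardless of folder:

```python
from collections import defaultdict
from pathlib import PurePath

def find_duplicate_names(paths):
    """Group file paths by case-insensitive file name; return names seen more than once."""
    by_name = defaultdict(list)
    for p in paths:
        by_name[PurePath(p).name.lower()].append(p)
    return {name: locs for name, locs in by_name.items() if len(locs) > 1}

# Example: two copies of the same part name in different folders are flagged.
dups = find_duplicate_names(["proj_a/bracket.sldprt",
                             "proj_b/Bracket.SLDPRT",
                             "proj_a/bolt.sldprt"])
```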
Scenario Two: Files reside in PDMWorks or EPDM
There is also an option to do a manual data migration from one system to another if files reside in PDMWorks or EPDM. However, this process can be as tedious and drawn out as when the files are on an NFS, or perhaps even more so.
Having files within PDMWorks or EPDM can make the migration process more straightforward and faster than the NFS projects. Fishbowl has created an automated solution tool that extracts the latest versions of each file from the legacy system and immediately prepares it for loading into Windchill. The steps are as follows:
1. Extraction (LinkExtract)
In this initial stage, Fishbowl uses its LinkExtract tool to pull the latest version of all SolidWorks files, determine references, and extract all the attributes for the files as defined in PDMWorks or EPDM.
2. Attribute Mapping

Before loading the files, it is necessary to confirm and/or modify the attribute mapping file as desired. Admins can fully leverage the attributes/parameter information from the extraction step, or can start from scratch if they find it easier. Often the destination Windchill system will have different terminology or states, and it is easy to remap those as needed in this step.
3. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in the Error List Report and LinkLoader will move on to the next file.
Proven Successes with LinkLoader
Many of Fishbowl’s customers have purchased and successfully run LinkLoader themselves with little to no assistance from Fishbowl. Other customers have utilized our consulting services to complete the migration project on their behalf.
With Fishbowl’s methodology centered on “Customer First”, our focus and support continuously keep our customers satisfied. This is the same commitment and expertise we will bring to any and every data migration project.
If your organization is looking to consolidate SolidWorks CAD data to Windchill in a timely and effective manner, regardless of the size and scale of the project, our experts at Fishbowl Solutions can get it done.
For example, Fishbowl partnered with a multi-billion dollar medical device company with a short time frame to migrate over 30,000 SolidWorks files from a legacy system into Windchill. Fishbowl’s expert team took initiative and planned the process to meet their tight industry regulations and finish on time and on budget. After the Fishbowl team executed test migrations, the actual production migration process only took a few hours, thus eliminating engineering downtime.
If your organization is seeking the right team and tools to complete a SolidWorks data migration to Windchill, reach out to us at Fishbowl Solutions.
If you’d like more information about Fishbowl’s LinkLoader tool or our other products and services for PTC Windchill and Creo, check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department.
Senior Account Executive
Ben Sawyer is an Associate MCAD Consultant at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do.
The post Consider Your Options for SolidWorks to Windchill Data Migrations appeared first on Fishbowl Solutions' C4 Blog.
Authored by: Dave Gray, Entrepreneur, Author, Consultant and Founder of XPLANE
I’ve never met a senior leader who didn’t acknowledge the importance of culture. We all agree it is vital. Numerous examples in multiple domains — not just business but war, sports, and social change — have demonstrated over and over that culture is a more powerful force-multiplier than money, power, or superior technology.
Superior business culture is the most-often cited factor in business success stories, like Google, Southwest Air, and Nordstrom. Toxic or stagnant culture is most-often blamed for the catastrophic downfall of companies like Kodak, Nokia, and Motorola. Jack Welch and Lou Gerstner, who presided over two of the most effective comebacks in corporate history (GE and IBM, respectively), both cited culture as one of their top priorities.
Companies are made out of people, after all, and the most successful companies make people, and culture, a top strategic priority. And yet as leaders we struggle to get a grip on culture. It’s slippery, difficult to name, define, measure or get any kind of traction on.
Business volatility requires business to adapt.
We are entering a new business era characterized by high levels of volatility, uncertainty and disruption. There is no organization that is not being affected by these winds of change. Established companies are being disrupted by newcomers with astonishing regularity so common it now has a name — “Getting Netflixed” — a reference to the way Netflix outfoxed and outmaneuvered Blockbuster by redefining and digitizing the customer experience. What Netflix did to video rentals, companies like AirBnB, Uber, Spotify and Zipcar are doing to hospitality, transportation, music and car rentals.
When the business environment is evolving this rapidly, culture must keep up.
Culture change is also difficult. Changing culture requires changing habits, behaviors, and routines that have solidified over decades. To imagine the scope of a culture change initiative, just imagine 5,000 people trying to quit smoking at the same time. It’s incredibly hard, and the risk of failure is high. Even success does not guarantee you will be appreciated. Jack Welch and Lou Gerstner are controversial figures to this day.
Three culture change challenges.
Leaders interested in enabling culture change face three problems:
First, getting a grip on the current culture by identifying the direct links between business results, behavior, and organizational enablers like incentives, work systems, and management practices.
Second, imagining and designing the future culture, including not just the desired business results and behaviors, but the incentives, systems, habits and practices which will enable the new culture to emerge.
Third, the hard work of shifting deeply embedded, entrenched habits and behaviors. For leaders this is especially difficult, because they will necessarily be learning and acting out new behaviors while they are also on display, watched by everyone. Not to mention that culture change, by nature, often occurs in difficult business circumstances, within organizations that will certainly include many critics, cynics and skeptics.
Three steps to a new culture.
1. Diagnose your current culture.
The first step is to build a solid understanding of the culture you have today. Historically this has been a difficult undertaking, but today it is easier, due to the emergence of business design tools for rapidly diagnosing, describing and designing business strategies and systems.
Leading the charge for business design tools are two of the top 50 business thinkers in the world, Alex Osterwalder of Strategyzer and Yves Pigneur of the University of Lausanne, designers of the Business Model Canvas, which is used by more than 5 million people in organizations around the world.
The Culture Map: A business design tool.
Alex and Yves helped us develop a new tool for understanding and designing culture, called the Culture Map.
The Culture Map links business outcomes and behaviors with the enablers and blockers that are caused or influenced by managers.
Culture Mapping is a process that involves deep listening exercises designed to find the real underlying system behind the noise that masks many business realities.
2. Design your future culture.
This is a more difficult exercise than the current state diagnostic, because it takes not just the imagination to visualize a future state, but also the humility to be realistic about what can be achieved and how quickly it can happen.
A visual culture map depicting desired behaviors.
An important part of the design process is visualizing future behaviors in high-granularity detail, to eliminate as much doubt, uncertainty, and skepticism as possible.
We recommend that you develop a visual culture map depicting the culture you aspire to, showing people exactly what you want them to do and say in the future you envision.
The description should be as clear, specific and detailed as possible.
If new behaviors are not clearly and visually articulated, the most likely outcome is that the old behaviors will simply continue: business as usual but with new names.
3. Do the hard work of following through.
True culture change is a difficult endeavor, not to be taken lightly. It can take up to three years for a new culture to take root.
We like to compare culture work to gardening. Designing the garden is the easy part. For your culture change efforts to succeed, you will need the patience and dedication of a gardener. Like a garden, a new culture will grow at its pace, not your pace. There is no way to speed up this kind of change.
People must first hear that you are committed to the change. Then, over time, they will closely observe your actions and ongoing behavior, looking for discrepancies and clues. Most people must overcome some cynicism and skepticism before they will believe that the change is real. From that point on, there is still much work to do in order for those changes in belief to become new habits, routines and behaviors.
Six best practices for driving cultural change.
Make culture a top priority.
When culture change is necessary, it must be a top priority for senior management. If there is one thing we have learned from more than 20 years working on organizational effectiveness and change initiatives, it is this:
If culture change is not a top priority for senior management, the change effort will fail. In such cases, it's highly likely that the organization's strategies, and in some cases the company itself, will also fail.
Lead by example.
The tone for culture is set at the top. It is critically important and cannot be delegated. It must be lived and acted daily, in large and small ways. People may listen to what you say, but they also watch what you do. And if your actions don’t match your words, they follow the example that’s set by your actions.
Focus on one habit at a time.
We recommend that executives focus on no more than six key behaviors, and that they focus on these one at a time. For a period of three to six months, leaders focus on changing one key behavior, practicing it with each other and with employees, making the commitment privately and in public to make that a personal habit, and soliciting feedback from colleagues and employees.
Establish a rhythm and track progress.
The most effective tool we have seen for measuring cultural improvement over time is a simple employee survey, similar to what you see in Yelp or Amazon reviews. These surveys measure employees' perceptions about behaviors, and in the world of culture, perception is reality.
Culture perceptions can be measured by frequent, simple surveys.
Measure employee perceptions about the habit you are focusing on. Poll and review the numbers weekly.
Look at the numbers as part of your regular operational rhythm, such as weekly team meetings and status updates. Ask probing questions about what causes and influences those perceptions.
Track your progress over time. Talk about your culture survey results publicly and make them a topic of conversation throughout the company.
Plan for a short-term drop in performance.
Expect a “muddy middle” period, where people are shedding their old behaviors but have not yet embraced or learned the new ones. Performance will most likely drop during this period. This is where many culture change initiatives lose their resolve and snap back to old habits and routines.
Culture work takes time. The first habit is the hardest to break, and the most difficult days are the early days. As people begin to see progress and see that your actions match your words, resistance and skepticism will decrease and you will gain momentum over time.
With a strong commitment from leaders who lead by personal example, ask the right questions, and track cultural performance over time, your organization can make steady progress toward a revitalized culture.