Feed aggregator

Query on Stored Procedure - MINUS clause

Tom Kyte - Wed, 2016-10-26 00:26
Dear All, Ques 1. Please advise if the use of minus clause is best when I am looking to exclude certain specific conditions or shall I be using Not IN? I am in situation where I know what to avoid, not sure what all to select and hence I needed...
Categories: DBA Blogs

When shutting down a container database

Tom Kyte - Wed, 2016-10-26 00:26
Hello, Some background info: We are working in a 12c multitenant environment on windows server 2012 R2. Container database contains about 20-25 pluggable databases. Typically we have to shutdown the container database to perform maintenan...
Categories: DBA Blogs

ORA-29532: Java call terminated by uncaught Java exception: java.lang.OutOfMemoryError

Tom Kyte - Wed, 2016-10-26 00:26
When Passing clob size more than 3 MB to java stored procedure I get java.lang.OutOfMemoryError. The data is in the JSON format which we try to deserialize into an object using a java json library in the java stored procedure. We have tried the ...
Categories: DBA Blogs

SP2-0743 and SP2-0042

Tom Kyte - Wed, 2016-10-26 00:26
Hi Sir, From the doc: SP2-0042 unknown command command_name - rest of line ignored Cause: The command entered was not valid. Action: Check the syntax of the command you used for the correct options. SP2-0734 Unknown command beginning com...
Categories: DBA Blogs

Isolate Your Code

Michael Dinh - Tue, 2016-10-25 19:19

I fail to understand why an anonymous PL/SQL block is used with dbms_scheduler.

Here is an example:
hawk:(SYSTEM@hawk):PRIMARY> @x.sql
hawk:(SYSTEM@hawk):PRIMARY> set echo on
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2  DBMS_SCHEDULER.CREATE_PROGRAM(
  3  program_name=>'TESTING',
  4  program_action=>'DECLARE
  5  x NUMBER := 100;
  6  BEGIN
  7     FOR i IN 1..10 LOOP
  8        IF MOD(i,2) = 0 THEN
  9           INSERT INTO temp VALUES (i);
 10        ELSE
 11           INSERT INTO temp VALUES (i);
 12        END IF;
 13        x := x + 100;
 14     END LOOP;
 15     COMMIT;
 16  END;',
 17  program_type=>'PLSQL_BLOCK',
 18  number_of_arguments=>0
 19  );
 20  END;
 21  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> -- exec DBMS_SCHEDULER.DROP_PROGRAM('TESTING');
Nothing wrong, right? What happens when we strip out the anonymous PL/SQL block and run it on its own?
hawk:(SYSTEM@hawk):PRIMARY> @y.sql
hawk:(SYSTEM@hawk):PRIMARY> DECLARE
  2     x NUMBER := 100;
  3  BEGIN
  4     FOR i IN 1..10 LOOP
  5        IF MOD(i,2) = 0 THEN
  6           INSERT INTO temp VALUES (i);
  7        ELSE
  8           INSERT INTO temp VALUES (i);
  9        END IF;
 10        x := x + 100;
 11     END LOOP;
 12     COMMIT;
 13  END;
 14  /
         INSERT INTO temp VALUES (i);
                     *
ERROR at line 6:
ORA-06550: line 6, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 6, column 10:
PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 8, column 10:
PL/SQL: SQL Statement ignored


hawk:(SYSTEM@hawk):PRIMARY> desc temp;
ERROR:
ORA-04043: object temp does not exist


hawk:(SYSTEM@hawk):PRIMARY>
Why not create a stored procedure or package?
hawk:(SYSTEM@hawk):PRIMARY> @z.sql
hawk:(SYSTEM@hawk):PRIMARY> create or replace procedure SP_TESTING
  2  AS
  3  x NUMBER := 100;
  4  BEGIN
  5     FOR i IN 1..10 LOOP
  6        IF MOD(i,2) = 0 THEN
  7           INSERT INTO temp VALUES (i);
  8        ELSE
  9           INSERT INTO temp VALUES (i);
 10        END IF;
 11        x := x + 100;
 12     END LOOP;
 13     COMMIT;
 14  END;
 15  /

Warning: Procedure created with compilation errors.

hawk:(SYSTEM@hawk):PRIMARY> show error
Errors for PROCEDURE SP_TESTING:

LINE/COL ERROR
-------- -----------------------------------------------------------------
7/10     PL/SQL: SQL Statement ignored
7/22     PL/SQL: ORA-00942: table or view does not exist
9/10     PL/SQL: SQL Statement ignored
9/22     PL/SQL: ORA-00942: table or view does not exist

hawk:(SYSTEM@hawk):PRIMARY> create table temp(id int);

Table created.

hawk:(SYSTEM@hawk):PRIMARY> alter procedure SP_TESTING compile;

Procedure altered.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> @a.sql
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2  DBMS_SCHEDULER.CREATE_PROGRAM(
  3  program_name=>'TESTING2',
  4  program_action=>'BEGIN SP_TESTING; END;',
  5  program_type=>'PLSQL_BLOCK',
  6  number_of_arguments=>0
  7  );
  8  END;
  9  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> BEGIN SP_TESTING; END;
  2  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> select * from temp;

        ID
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

hawk:(SYSTEM@hawk):PRIMARY>
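The same trap exists outside the database: code kept as a string is only checked when it is finally compiled or executed, while code defined as a first-class unit is validated up front, just as the stored procedure above was compiled (and its errors surfaced) the moment it was created. A loose Python analogy, purely illustrative and not part of the original demo:

```python
# Code shipped as a string: nothing validates it until something
# finally tries to compile or run it (like program_action above).
broken_src = "for i in range(10) print(i)"  # missing colon

def try_compile(src):
    """Return None on success, or the SyntaxError raised at compile time."""
    try:
        compile(src, "<string>", "exec")
        return None
    except SyntaxError as err:
        return err

# The error in the string is only caught when we explicitly compile it.
print("string code error deferred until compile/run:",
      try_compile(broken_src) is not None)

# Code written as a real function is syntax-checked as soon as the
# file loads -- the "isolate your code" equivalent of a stored procedure.
def isolated():
    for i in range(10):
        print(i)

print("isolated code validated up front:", callable(isolated))
```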

I put my SQL scripts on GitHub

Bobby Durrett's DBA Blog - Tue, 2016-10-25 10:45

I created a new GitHub public repository with my SQL scripts. Here is the URL:

https://github.com/bobbydurrett/OracleDatabaseTuningSQL

I’ve experimented with GitHub for my Python graphing scripts but wasn’t sure about putting the SQL out there. I don’t really have any comments in the SQL scripts. But, I have mentioned many of the scripts in blog posts so those posts form a type of documentation. Anyway, it is there so people can see it. Also, I get the benefit of using Git to version my scripts and GitHub serves as a backup of my repository.

Also, I have a pile of scripts in a directory on my laptop but I have my scripts mixed in with those that others have written. I’m pretty sure that the repository only has my stuff in it but if someone finds something that isn’t mine let me know and I’ll take it out. I don’t want to take credit for other people’s work. But, the point is to share the things that I have done with the community so that others can benefit just as I benefit from the Oracle community. I’m not selling anything and if there is someone else’s stuff in there it isn’t like I’m making money from it.

Like anything on the web use at your own risk. The repository contains scripts that I get a lot of benefits from but I make no guarantees. Try any script you get from the internet on some test system first and try to understand the script before you even run it there.

I hope that my new SQL repository helps people in their Oracle work.

Bobby

Categories: DBA Blogs

Wanna become an OCM? Go on read out Kamran's new OCM practical guide - MY TAKE

Syed Jaffar - Tue, 2016-10-25 10:31
When I first heard about Kamran's new book, the OCM practical guide, I said wow, because I had been thinking along the same lines a few years back but dropped the idea due to several factors, including the time and effort such a book requires. When he approached me to be one of the technical reviewers, I accepted straightaway, without a second thought. I am really honored to be part of this wonderful book, which unfolds the knowledge that helps OCM aspirants achieve what they dream of.

I remember the debates at various places and on several Oracle forums where people discussed the necessity and advantages of being an Oracle certified professional, and whether certification really adds value to one's career. This is not the platform to discuss such things; however, in my own perspective, real experience combined with certification surely boosts a career and opens up more chances for advancement.

First things first: Kamran really put his heart into coming up with such an extraordinary book in the form of a practical guide. I thoroughly enjoyed reviewing every bit of it, and the number of practical examples demonstrated in this book is just prodigious. Only someone with tremendous real-world experience in the technology could do that. Take a bow, my friend.

Each and every chapter has great content, neatly explained with over 200 practical examples. Let me walk through the chapters and give you my thoughts:

Server Configuration provides a detailed step-by-step guide to Oracle 11g Database software setup and new database creation through both GUI and silent modes. It also outlines the procedure to configure network settings, the listener, TNS names, and so on.

Enterprise Manager Grid Control explains the step-by-step procedure to install and configure OEM, and how to schedule and manage tasks with it.

Managing Database Availability is one of the most important chapters, not only from an OCM preparation perspective but also for managing our production databases, deploying optimal backup and recovery strategies to secure them.

Data Management provides detailed information about the types of materialized views and materialized view logs, and how Oracle uses a precomputed materialized view, instead of querying a table with different aggregate functions, to return results quickly.

Data Warehouse Management covers the main data warehouse topics such as partitioning and managing large objects, and shows how to use various SecureFiles LOB features such as compression, deduplication, encryption, caching and logging.

Performance Tuning is the chapter I particularly enjoyed reading and reviewing. It's the heart of the book, with so much explanation and so many practical examples. This one shouldn't be missed.

Grid Infrastructure and ASM contains all you want to know about Grid Infrastructure and ASM technologies, including how to install the Grid software and create disk groups.

Real Application Clusters explores the steps to successfully create a RAC database on two nodes; with only a few additional steps, you will have a working RAC database. It then shows how to create and configure ASM from the command line interface, and once ASM is configured, silent RAC database creation steps are provided.

The Data Guard chapter starts by creating a Data Guard configuration using the command line interface, OEM and the Data Guard broker. It also provides steps for performing switchover and failover with each of these tools.


In a nutshell, it's a practical guide with the perfect recipe and ingredients to become an OCM, evenly blended with many useful examples and extraordinary explanations. The wait is over; this is the book we have all been looking for, for a long time. Go place your order and get certified: become an OCM.

You can place your order through Amazon at the URL below:

https://www.amazon.com/Oracle-Certified-Master-Study-Guide/dp/1536800791/ref=sr_1_fkmr1_1?ie=UTF8&qid=1477409700&sr=8-1-fkmr1&keywords=Kamran+ocmds=Kamran+ocm

Consider Your Options for SolidWorks to Windchill Data Migrations

This post comes from Fishbowl Solutions’ Associate MCAD Consultant, Ben Sawyer.

CAD data migrations are most often seen as a huge burden. They can be lengthy, costly, messy, and a general road block to a successful project. Organizations planning on migrating SolidWorks data to PTC Windchill should consider their options when it comes to the process and tools they utilize to perform the bulk loading.

At Fishbowl Solutions, our belief is that the faster you can load all your data accurately into Windchill, the faster your company can implement critical PLM business processes and realize the results of initiatives such as faster NPI, streamlined change and configuration management, and improved quality.

There are two typical scenarios we encounter with these kinds of data migration projects: the SolidWorks data resides on a network file system (NFS), or it resides in either PDMWorks or EPDM.

The options for this process and the tools used will be dependent on other factors as well. The most common guiding factors to influence decisions are the quantity of data and the project completion date requirements. Here are typical project scenarios.

Scenario One: Files on a Network File System

Manual Migration

There is always an option to manually migrate SolidWorks data into Windchill. However, if an organization has thousands of files from multiple products that need to be imported, the process can be extremely daunting. Loading manually involves bringing files into the Windchill workspace, carefully resolving any missing dependents, errors, and duplicates, setting destination folders, revisions, and lifecycles, and fixing bad metadata. (Those who have tried this approach with large data quantities know the pain we are talking about!)

Automated Solution

Years ago, Fishbowl developed its LinkLoader tool for SolidWorks as a viable solution to complete a Windchill bulk loading project with speed and accuracy.

Fishbowl’s LinkLoader solution follows a simple workflow to help identify data to be cleansed and mass loaded with accurate metadata. The steps are as follows:

1. Discovery
In this initial stage, the user chooses the mass of SolidWorks data to be loaded into Windchill. Since Windchill doesn’t allow duplicate named CAD files in the system, the software quickly identifies these duplicate files. It is up to the user to resolve the duplicate files or remove them from the data loading set.

2. Validation
The validation stage will ensure files are retrievable, attributes/parameters are extracted (for use in later stages), and relationships with other SolidWorks files are examined. LinkLoader captures all actions. The end user will need to resolve any errors or remove the data from the loading set.

3. Mapping
Moving toward the bulk loading stage, it is necessary to confirm and/or modify the attribute-mapping file as desired. The only required fields for mapping are lifecycle, revision/version, and the Windchill folder location. End users are able to leverage the attributes/parameter information from the validation as desired, or create their own ‘Instance Based Attribute’ list to map with the files.

4. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in an ‘Error List Report’ and LinkLoader will simply move on to the next file.
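The continue-on-error behavior described in step 4 can be sketched generically. The following is a purely illustrative sketch of that pattern, not Fishbowl's actual LinkLoader implementation; the function names and file names are hypothetical:

```python
def bulk_load(files, load_one):
    """Load each file; on failure, record it in an error list and keep going."""
    error_list = []   # stands in for the 'Error List Report'
    loaded = 0
    for i, f in enumerate(files, start=1):
        try:
            load_one(f)
            loaded += 1
        except Exception as exc:
            error_list.append((f, str(exc)))  # document the failure...
            continue                          # ...and simply move on
        # progress indicator: files completed and percentage done
        print(f"{i}/{len(files)} files processed ({i * 100 // len(files)}%)")
    return loaded, error_list

# Toy run: one file is made to fail so it lands in the error list.
def fake_loader(name):
    if name == "bracket.sldprt":
        raise RuntimeError("missing dependent")

loaded, errors = bulk_load(["a.sldasm", "bracket.sldprt", "c.slddrw"],
                           fake_loader)
```

The key design point is that a single bad file never aborts the whole run; it is reported and the loader moves on to the next file.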

Scenario Two: Files reside in PDMWorks or EPDM

Manual Migration

There is also an option to do a manual data migration from one system to another if files reside in PDMWorks or EPDM. However, this process can be as tedious and drawn out as when the files are on an NFS, or perhaps even more so.

Automated Solution

Having files within PDMWorks or EPDM can make the migration process more straightforward and faster than the NFS projects. Fishbowl has created an automated solution tool that extracts the latest versions of each file from the legacy system and immediately prepares it for loading into Windchill. The steps are as follows:

1. Extraction (LinkExtract)
In this initial stage, Fishbowl uses its LinkExtract tool to pull the latest version of all SolidWorks files, determine references, and extract all the attributes for the files as defined in PDMWorks or EPDM.

2. Mapping
Before loading the files, it is necessary to confirm and/or modify the attribute mapping file as desired. Admins can fully leverage the attribute/parameter information from the extraction step, or can start from scratch if they find that easier. Often the destination Windchill system will have different terminology or states, and it is easy to remap those as needed in this step.

3. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in the Error List Report and LinkLoader will move on to the next file.

Proven Successes with LinkLoader

Many of Fishbowl’s customers have purchased and successfully run LinkLoader themselves with little to no assistance from Fishbowl. Other customers have utilized our consulting services to complete the migration project on their behalf.

With Fishbowl’s methodology centered on “Customer First”, our focus and support keep our customers satisfied. This is the same commitment and expertise we bring to any and every data migration project.

If your organization is looking to consolidate SolidWorks CAD data to Windchill in a timely and effective manner, regardless of the size and scale of the project, our experts at Fishbowl Solutions can get it done.

For example, Fishbowl partnered with a multi-billion dollar medical device company with a short time frame to migrate over 30,000 SolidWorks files from a legacy system into Windchill. Fishbowl’s expert team took initiative and planned the process to meet their tight industry regulations and finish on time and on budget. After the Fishbowl team executed test migrations, the actual production migration process only took a few hours, thus eliminating engineering downtime.

If your organization is seeking the right team and tools to complete a SolidWorks data migration to Windchill, reach out to us at Fishbowl Solutions.

If you’d like more information about Fishbowl’s LinkLoader tool or our other products and services for PTC Windchill and Creo, check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department.

Contact Us

Rick Passolt
Senior Account Executive
952.465.3418
mcadsales@fishbowlsolutions.com

Ben Sawyer is an Associate MCAD Consultant at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do. 

The post Consider Your Options for SolidWorks to Windchill Data Migrations appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Making Culture Actionable

WebCenter Team - Tue, 2016-10-25 08:48

Authored by: Dave Gray, Entrepreneur, Author, Consultant and Founder of XPLANE

Dave Gray

I’ve never met a senior leader who didn’t acknowledge the importance of culture. We all agree it is vital. Numerous examples in multiple domains (not just business but also war, sports, and social change) have demonstrated over and over that culture is a more powerful force multiplier than money, power or superior technology.

Superior business culture is the most-often cited factor in business success stories, like Google, Southwest Air, and Nordstrom. Toxic or stagnant culture is most-often blamed for the catastrophic downfall of companies like Kodak, Nokia, and Motorola. Jack Welch and Lou Gerstner, who presided over two of the most effective comebacks in corporate history (GE and IBM, respectively), both cited culture as one of their top priorities.

Companies are made out of people, after all, and the most successful companies make people, and culture, a top strategic priority. And yet as leaders we struggle to get a grip on culture. It’s slippery, difficult to name, define, measure or get any kind of traction on.

Business volatility requires business to adapt.

We are entering a new business era characterized by high levels of volatility, uncertainty and disruption. There is no organization that is not being affected by these winds of change. Established companies are being disrupted by newcomers so regularly that the phenomenon now has a name, “Getting Netflixed”, a reference to the way Netflix outfoxed and outmaneuvered Blockbuster by redefining and digitizing the customer experience. What Netflix did to video rentals, companies like AirBnB, Uber, Spotify and Zipcar are doing to hospitality, transportation, music and car rentals.

When the business environment is evolving this rapidly, culture must keep up.

Culture change is hard.

Culture change is also difficult. Changing culture requires changing habits, behaviors, and routines that have solidified over decades. To imagine the scope of a culture change initiative, just imagine 5,000 people trying to quit smoking at the same time. It’s incredibly hard, and the risk of failure is high. Even success does not guarantee you will be appreciated: Jack Welch and Lou Gerstner are controversial figures to this day.

Three culture change challenges.

Leaders interested in enabling culture change face three problems:

First, getting a grip on the current culture by identifying the direct links between business results, behavior, and organizational enablers like incentives, work systems, and management practices.

Second, imagining and designing the future culture, including not just the desired business results and behaviors, but the incentives, systems, habits and practices which will enable the new culture to emerge.

Third, the hard work of shifting deeply embedded, entrenched habits and behaviors. For leaders this is especially difficult, because they will necessarily be learning and acting out new behaviors while they are also on display, watched by everyone. Not to mention that culture change, by nature, often occurs in difficult business circumstances, within organizations that will certainly include many critics, cynics and skeptics.

Three steps to a new culture.

1. Diagnose your current culture.

The first step is to build a solid understanding of the culture you have today. Historically this has been a difficult undertaking, but today it is easier, due to the emergence of business design tools for rapidly diagnosing, describing and designing business strategies and systems.

Leading the charge for business design tools are two of the top 50 business thinkers in the world, Alex Osterwalder of Strategyzer and Yves Pigneur of the University of Lausanne, designers of the Business Model Canvas, which is used by more than 5 million people in organizations around the world.


The Culture Map: A business design tool.

Alex and Yves helped us develop a new tool for understanding and designing culture, called the Culture Map.

The Culture Map links business outcomes and behaviors with the enablers and blockers that are caused or influenced by managers. 

Culture Mapping is a process that involves deep listening exercises designed to find the real underlying system behind the noise that masks many business realities.

2. Design your future culture.

This is a more difficult exercise than the current state diagnostic, because it takes not just the imagination to visualize a future state, but also the humility to be realistic about what can be achieved and how quickly it can happen.


A visual culture map depicting desired behaviors.

An important part of the design process is visualizing future behaviors in high-granularity detail, to eliminate as much doubt, uncertainty, and skepticism as possible. 

We recommend that you develop a visual culture map depicting the culture you aspire to, showing people exactly what you want them to do and say in the future you envision.

The description should be as clear, specific and detailed as possible. 

If new behaviors are not clearly and visually articulated, the most likely outcome is that the old behaviors will simply continue: business as usual but with new names.

3. Do the hard work of following through.

True culture change is a difficult endeavor, not to be taken lightly. It can take up to three years for a new culture to take root.

We like to compare culture work to gardening. Designing the garden is the easy part. For your culture change efforts to succeed, you will need the patience and dedication of a gardener. Like a garden, a new culture will grow at its pace, not your pace. There is no way to speed up this kind of change. 

People must first hear that you are committed to the change. Then, over time, they will closely observe your actions and ongoing behavior, looking for discrepancies and clues. Most people must overcome some cynicism and skepticism before they will believe that the change is real. From that point on, there is still much work to do in order for those changes in belief to become new habits, routines and behaviors.

Six best practices for driving cultural change.

Make culture a top priority.

When culture change is necessary, it must be a top priority for senior management. If there is one thing we have learned from more than 20 years working on organizational effectiveness and change initiatives, it is this: 

When culture change is necessary, if it is not one of the executive team’s top three priorities, the culture change will fail. 

In such cases, it’s highly likely that the organization’s strategies, and in some cases the company itself, will also fail.

Lead by example.

The tone for culture is set at the top. It is critically important and cannot be delegated. It must be lived and acted daily, in large and small ways. People may listen to what you say, but they also watch what you do. And if your actions don’t match your words, they follow the example that’s set by your actions.

Focus on one habit at a time.

We recommend that executives focus on no more than six key behaviors, and that they focus on these one at a time. For a period of three to six months, leaders focus on changing one key behavior, practicing it with each other and with employees, making the commitment privately and in public to make that a personal habit, and soliciting feedback from colleagues and employees.

Establish a rhythm and track progress.

Most leaders and managers create change most effectively when they have a number they are trying to move up and to the right. Culture is no different.

The most effective tool we have seen for measuring cultural improvement over time is a simple employee survey, similar to what you see in Yelp or Amazon reviews. These surveys measure employees’ perceptions of behaviors, and in the world of culture, perception is reality.


Culture perceptions can be measured by frequent, simple, easy surveys.

Measure employee perceptions about the habit you are focusing on. Poll and review the numbers weekly. 

Look at the numbers as part of your regular operational rhythm, such as weekly team meetings and status updates. Ask probing questions about what causes and influences those perceptions.

Track your progress over time. Talk about your culture survey results publicly and make them a topic of conversation throughout the company.

Plan for a short-term drop in performance.

Expect a “muddy middle” period, where people are shedding their old behaviors but have not yet embraced or learned the new ones. Performance will most likely drop during this period. This is where many culture change initiatives lose their resolve and snap back to old habits and routines.

Be patient.

Culture work takes time. The first habit is the hardest to break, and the most difficult days are the early days. As people begin to see progress and see that your actions match your words, resistance and skepticism will decrease and you will gain momentum over time.

With a strong commitment from leaders who lead by personal example, ask the right questions, and track cultural performance over time, you can make steady progress toward a revitalized culture.

Learn more about the Culture Map and see it for yourself in this webcast on October 27 at 10:00am PDT!

Dave Gray is the founder of XPLANE.

Case construct with WHERE clause

Tom Kyte - Tue, 2016-10-25 06:06
Hi Tom, I have a question and I don't know if this is possible or if I'm just doing something wrong, because I get multiple errors like missing right paren, or missing keyword. I want to use the CASE construct after a WHERE clause to build an expre...
Categories: DBA Blogs

Missing Physical Reads

Tom Kyte - Tue, 2016-10-25 06:06
Hi Tom, Please find below the experimented done, sequentially. Scripts ---------- create table cust (cust_id number, last_name varchar2(20),first_name varchar2(20)); create index cust_idx1 on cust(last_name); SQL> set autotrace on; SQL> ...
Categories: DBA Blogs

Function with multi-dimensional array as parameter?

Tom Kyte - Tue, 2016-10-25 06:06
How would I define a function that takes a multi-dimensional array as an input parameter and returns json_tbl PIPELINED that has been defined as <code>CREATE OR REPLACE TYPE CIC3.json_t as OBJECT (JSON_TEXT varchar2(30000)); CREATE OR REPLACE TYPE ...
Categories: DBA Blogs

Depth of attributes,

Tom Kyte - Tue, 2016-10-25 06:06
Hello, I have a situation where my data (for a given sys_id) has values for multiple depths (level1 attribute, level2 attribute and so on). For a given sys_id, I have to select the rows that has the maximum depth. However, as an example, if a va...
Categories: DBA Blogs

Oracle Auditing - Syslog

Tom Kyte - Tue, 2016-10-25 06:06
Hi Guys, I have two questions with regard to Oracle database auditing via syslog. 1. When auditing via OS syslog, what is the ideal value for the AUDIT_SYSLOG_LEVEL parameter, where AUDIT_SYSLOG_LEVEL = facility.priority It is the priortity...
Categories: DBA Blogs

how to optimize a query that is concatenating fields routinely

Tom Kyte - Tue, 2016-10-25 06:06
Hi. I'm trying to find a way to optimize this situation below. Example table definition: create table rw_test (A varchar2(10), B varchar2(10), C varchar2(10), D varchar2(10), E varchar2(10), F number(10), entry_date date); ...
Categories: DBA Blogs

Number to Hours and minutes.

Tom Kyte - Tue, 2016-10-25 06:06
Hello- I have a field which totals the number of hours worked; and for example returns a figure of 41.75 What is the best way to represent this number as 42 hours and 15 minutes? Thanks Venkat
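(As a side note on the arithmetic in the question: 41.75 hours is 41 hours and 45 minutes, since 0.75 of an hour is 45 minutes. The split is just a truncation and a multiply; a minimal sketch outside the database, in Python, where Oracle's TRUNC and MOD would do the same job:)

```python
def split_hours(total):
    """Split decimal hours into whole hours and minutes."""
    hours = int(total)                     # whole hours (truncate)
    minutes = round((total - hours) * 60)  # fractional hour as minutes
    return hours, minutes

print(split_hours(41.75))  # -> (41, 45)
```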
Categories: DBA Blogs

Performing a 12c Upgrade with a New Install

Rittman Mead Consulting - Tue, 2016-10-25 04:00

Software updates often include new features, and while useful, these new features are often the only driving factor in upgrading software. There's no harm in wanting to play around with the shiny new toy, but many software updates also include much more significant changes, such as fixes for bugs or security vulnerabilities.

In fact, bug fixes and security patches are usually released on a more frequent schedule than new feature sets. These changes are necessary to maintain a healthy environment. For this reason, Rittman Mead usually suggests keeping environments as up to date as possible with the currently available releases.

OBIEE 12.2.1.1 was released this past summer, and it seems to have resolved many issues that plagued early 12C adopters. Recently, OBIEE 12.2.1.2 was also released, resolving even more issues with the early 12C versions. With all of the improvements and fixes available in these versions, an upgrade plan should be a priority to anyone currently on one of the earlier releases of 12c (especially 12.2.1.0).

Okay, so how do I upgrade?

Spencer McGhin has already posted a fantastic blog going over how to perform an in-place upgrade for the 12.2.1.1 release. Even though it was for the previous release, the process is very similar. For those interested in reading a step by step guide, or looking to see what would go into the process, I would suggest reading his post here.

However, with OBIEE 12C's new BAR files, we could take another approach to performing an upgrade. Instead of the traditional "in-place" upgrades, we could perform an upgrade using a different process. We could simply perform a brand new install of this OBIEE version and migrate the existing content using a variety of tools Oracle provides us. Robin Moffatt covered this kind of installation for OBIEE 11.1.1.7 here, before the new BAR files existed, and now the BAR files will make this process much more straightforward.

If you choose to "upgrade" your environment by performing a fresh install, the upgrade process will consist of exporting the required content from OBIEE, removing the old version of OBIEE (if you are using the same machine), installing the new version of OBIEE, and then deploying the previously exported content. This process resembles a migration, and can be thought of that way, but migrating between 12c environments seems to be much simpler than migrating to a 12c environment from an older release.

So an upgrade process could instead look like a brand new installation of the new OBIEE version, and then the execution of a handful of commands provided by Oracle to return the environment to its previous state.

But what would we gain from following this process, rather than a traditional in-place upgrade?

It's worth noting that either approach requires careful planning and testing. Performing a brand new install does not remove the necessity of planning an upgrade process, gathering requirements, identifying all content that must be migrated, testing the installation, testing the migration, and user acceptance and validation testing. The proper process should never be ignored, regardless of the implementation method.

Is there any advantage to a fresh install?

For starters, you won't need to pollute your system with old or deprecated scripts/directories. In Spencer's aforementioned blog, he found that after his upgrade process he had to maintain a second middleware home directory. If you upgrade your environment throughout the years, you may end up with hundreds of unused/deprecated scripts and files. Who enjoys the thought that their environment is full of old and useless junk? A fresh install would cull most of these superfluous and defunct files on a regular basis.

Additionally, there is the occasional bug that seems to reappear in upgraded environments. These bugs usually appear in environments that were patched and then upgraded to a new version, which causes the previously fixed bug to reappear. While these bugs are fixed in future patches, fresh installs are usually free from these kinds of issues.

Finally, I would argue a fresh installation can occasionally be simpler than performing the upgrade process. By saving response files used in an installation, the same installation can be performed again extremely easily. You could perform an install in as little as three lines, if not fewer:
/home/oracle/files/bi_platform-12.2.1.2.0_linux64.bin -silent -responseFile /home/oracle/files/obiee.rsp

/home/oracle/Oracle/Middleware/Oracle_Home/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString localhost:1521/ORCL -dbUser sys -dbRole sysdba -schemaPrefix DEV -component BIPLATFORM -component MDS -component WLS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -f < /home/oracle/files/db_passwords.txt

/home/oracle/Oracle/Middleware/Oracle_Home/bi/bin/config.sh -silent -responseFile /home/oracle/files/configure_obiee.rsp

If this is the case, you can just save the response files set up during the first installation, and reuse them to install each new OBIEE version. Of course the required response file structure could change between versions, but I doubt any changes would be significant.

How do I migrate everything over?

So you've chosen to do a fresh install, you've saved the response files for future use, and you have a brand new OBIEE 12.2.1.2 environment up and running. Now, how do we get this environment back to a state where it can be used?

Before performing the upgrade or uninstall, we need to gather a few things from the current environment. The big things we need to make sure we get are the catalog, the RPD, and the security model. We may need additional content (like a custom style/skin or deployments on the Weblogic Server, configurations, etc.) but I will ignore those for brevity. To move some of these, I expect you would be required to use the WLST.

Catalog, RPD, and Security Model

Lucky for us, the Catalog, RPD, and Security Model are all included in the BAR export we can create using the exportServiceInstance() function in the WLST. You can then import these to a 12C environment using the importServiceInstance() function. Easy enough, right?
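As a rough sketch of those two calls, the export on the old environment and the import on the new one could look like the following from a WLST session (e.g. $ORACLE_HOME/oracle_common/common/bin/wlst.sh). The domain paths, the service instance key 'ssi', and the export/import directories below are assumptions; adjust them to your environment.

```python
# Run inside WLST, not standalone Python. All paths here are assumptions.

# On the existing environment: export the service instance (catalog,
# RPD, and security model) into a BAR file, e.g. /tmp/ssi.bar.
exportServiceInstance('/app/oracle/biee/user_projects/domains/bi', 'ssi', '/tmp', '/tmp')

# On the freshly installed environment: import that BAR file.
importServiceInstance('/app/oracle/biee/user_projects/domains/bi', 'ssi', '/tmp/ssi.bar')
```

'ssi' is the default service instance key of a 12c installation; if yours differs, use that key instead.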

Users

If your users are maintained in the embedded Weblogic LDAP, you must export them and then re-import them. This process can be done manually or through the WLST using the Current Management Object.

If users are maintained through an external Active Directory source, then the configurations will be pulled in with the Security Model in the BAR file.
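For the embedded Weblogic LDAP route, a minimal WLST sketch could look like the following. The admin URL, credentials, domain name 'bi', realm name 'myrealm', and export path are all assumptions here.

```python
# Run inside WLST, not standalone Python. Connection details are assumptions.
from java.util import Properties

connect('weblogic', 'password', 't3://obiee-host:9500')

# Navigate to the DefaultAuthenticator (the embedded LDAP) and use the
# Current Management Object (cmo) to export its users and groups.
cd('/SecurityConfiguration/bi/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')
cmo.exportData('DefaultAtn', '/tmp/users_groups.ldift', Properties())

disconnect()
```

The resulting LDIFT file can then be loaded into the new domain with the matching importData() call on the same MBean.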

Testing the migration

The final step is, of course, to make sure everything works! And what better way than to use Oracle's new Baseline Validation Tool. This tool is included in OBIEE 12C, and is perfect for testing migrations between environments.

For those unfamiliar, the basic process is this:

  • Configure and run the Baseline Validation Tool against your content.
  • Perform the upgrade (be sure to preserve the previously gathered test results)!
  • Run the Baseline Validation Tool again to gather the new output, and display the compared results.

The output should be an HTML file that, when opened in a browser, will let you know what has changed since the last time it was run. If everything was migrated properly, then there should be no major discrepancies.
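Sketched as shell commands, that process might look like this. The extraction directory, config file name, and result directory names are assumptions, and the results location for each run is normally set in the BVT config file.

```shell
# Paths and file names below are assumptions -- adjust for your setup.
cd /home/oracle/oracle-bvt

# 1. Capture a baseline from the pre-upgrade environment.
bin/obibvt -config BVT-Config.xml

# 2. After the upgrade/migration, run again (pointed at the new
#    environment) to capture the post-upgrade output.
bin/obibvt -config BVT-Config.xml

# 3. Compare the two result sets to produce the HTML comparison report.
bin/obibvt -compareresults results-pre results-post -config BVT-Config.xml
```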

Final Thoughts

Is it better to do an in-place upgrade, or a fresh install and migrate current content? The answer, as always, depends on the business. One method adds complexity but allows for more customization possibilities, while the other is likely faster and a more standard approach. Use whichever works for your specific requirements.

It's an interesting idea to install a new version of OBIEE every so often, rather than perform an upgrade, but maybe for some organizations it will simplify the process and alleviate common upgrade issues. If you or your organization are often stuck on older versions of OBIEE because you are uncomfortable or unfamiliar with the typical upgrade process, maybe you can provision an additional environment and attempt this alternative method.

As previously stated, it is imperative for environments to be as up to date as possible, and this method is simply another, albeit unconventional, avenue to make that happen.

Categories: BI & Warehousing

Quarterly EBS Upgrade Recommendations: October 2016 Edition

Steven Chan - Tue, 2016-10-25 02:05

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, latest support timelines, and newly-certified releases.

You can research these yourself using this Note:

Upgrade Recommendations for October 2016


Check your EBS support status and patching baseline

  • EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to September 30, 2023.
  • EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
  • EBS 12.0: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
  • EBS 11.5.10: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

  • EBS 12.2: 12.2.6 (Sept. 2016)
  • EBS 12.1: 12.1.3 RPC5 (Aug. 2016)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

  • EBS 12.2: StartCD 51 (Feb. 2016)
  • EBS 12.1: StartCD 13 (Aug. 2011)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

  • EBS 12.2: AD/TXK Delta 8 (Sept. 2016); FND (Aug. 2016); EBS 12.2.5 OAF Update 6 (Sept. 2016); EBS 12.2.4 OAF Update 11 (Jul. 2016); FMW 11.1.1.9
  • EBS 12.1: 12.1.3 RPC5; OAF Bundle 5 (Note 1931412.1, Jun. 2016); JTT Update 4 (Oct. 2016)

Apply the latest security updates

  • EBS 12.2 and 12.1: Oct. 2016 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager; switch from SSL to TLS; sign JAR files
  • EBS 12.0: Oct. 2015 Critical Patch Update
  • EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components

  • EBS 12.2 and 12.1: Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements; upgrade to IE 11; upgrade to Firefox ESR 45; upgrade Office 2003 to later Office versions (e.g. Office 2016); upgrade Windows XP and Vista to later versions (e.g. Windows 10)

Upgrade to the latest database

  • All releases (EBS 12.2, 12.1, 12.0, and 11.5.10): Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

  • EBS 12.2: Upgrade to Oracle Access Manager 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9
  • EBS 12.1: Migrate from Oracle SSO to OAM 11.1.2.3.0; upgrade to Oracle Internet Directory 11.1.1.9

If you're using Oracle Discoverer

  • EBS 12.2 and 12.1: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Applications (OBIA), or Discoverer 11.1.1.7

If you're using Oracle Portal

  • EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9 or upgrade to Portal 11.1.1.6



Categories: APPS Blogs

Documentum story – Thumbnail Server not starting if docbases are down

Yann Neuhaus - Tue, 2016-10-25 02:00

In this blog, I will talk about the Thumbnail Server. It’s a component of Documentum that you can install on the Content Server to generate… thumbnails! Basically what it does is that it will work in correlation with the ADTS/CTS in order to generate different kinds of previews of your document. For example jpeg_lres (low resolution) or jpeg_story (StoryBoard). You can define how many previews you want per document (1 preview per page, only the first page, aso…) and of which type. Then D2 can use these previews in a widget for the end-users to see what the document looks like. Apparently with D2 4.6, the Thumbnail Server and ADTS/CTS aren’t needed anymore to generate thumbnails but it was the case for D2 4.5 and previous versions.

 

This is my first blog related to the Thumbnail Server because it is actually a (the?) component of Documentum that works pretty well without much issue, so I absolutely wanted to explain the issue I faced and what was done to solve it.

 

So let’s start with some background: I was working on a project where the TEST and PROD environments contain only one docbase (DOCBASE1) in addition to the Global Registry. In the DEV environment, there were three docbases for development purposes but the additional two (DOCBASE2 and DOCBASE3) were stopped for a few days because we were running a lot of Performance Tests with EMC and we wanted the results to reflect the TEST/PROD environments. At this point, the Thumbnail Server was working properly, previews were generated successfully, aso… Then to apply a change related to the Performance Tests, we had to restart the whole Content Server, including the Thumbnail Server since it has been installed on all Content Servers (HA environment). The change wasn’t related to the Thumbnail Server at all but we discovered a small bug because of this: after the restart, the Thumbnail Server wasn’t working anymore. Just like the JMS/ACS, there is a way to very quickly know if the Thumbnail Server is up & running or not and that can be checked by entering the following URL in a web browser: http(s)://content_server_01:port/thumbsrv/getThumbnail?
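For a quick scripted check of that URL, something like the following curl call can be used. The host and port here are placeholders; use your own Content Server host and the Tomcat port of your Thumbnail Server.

```shell
# Placeholder host/port -- a healthy Thumbnail Server answers HTTP 200 here.
curl -k -s -o /dev/null -w "%{http_code}\n" \
  "https://content_server_01:8081/thumbsrv/getThumbnail?"
```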

 

In our case, this URL wasn’t working anymore after the restart while it was working properly before and therefore I had to look at the Thumbnail log file. One important thing to note here is that the Thumbnail Server is bundled with Tomcat. For those of you who are used to working with Tomcat, your first reaction might be to open the file $TOMCAT_HOME/logs/catalina.out. For the Thumbnail, this would be the file: $DM_HOME/thumbsrv/container/logs/catalina.out. That’s what I did… But there was absolutely no useful information because only the Tomcat initialization is displayed in this file by default. The actual useful information for the Thumbnail Server is stored in the localhost log file… That’s the file I’m usually completely ignoring because there is less inside it than in the catalina.out file but apparently EMC took a different approach!

 

So let’s take a look at what the log file is providing:

May 04, 2016 2:07:08 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_RESOURCES] Initialized resource strings
May 04, 2016 2:07:08 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Loaded Manifest entries from $DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/WEB-INF/lib/thumbsrv.jar
May 04, 2016 2:07:08 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail:
        Initializing Documentum Thumbnail Server 7.2.0000 , build: 0154
        Created on
        debug             = false
        ticket_timeout    = 300
        application path  = $DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/
May 04, 2016 2:07:08 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Read path to configuration file: $DOCUMENTUM/product/7.2/thumbsrv/conf/user.dat
May 04, 2016 2:07:08 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Initializing Storage Area Manager...
May 04, 2016 2:07:17 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Initializing Connection Manager...
May 04, 2016 2:07:17 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_STORAGE_AREA_MGR] Initialized storage area manager
May 04, 2016 2:07:17 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Initializing crypto classes, key file at $DOCUMENTUM/dba/secure/aek.key
May 04, 2016 2:07:17 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE2
May 04, 2016 2:07:23 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE2
May 04, 2016 2:07:30 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE2
  ...
May 04, 2016 2:10:05 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE2
May 04, 2016 2:10:11 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE2
May 04, 2016 2:10:17 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: initDefaultThumbnailsFromRepository: DOCBASE2 - 0.0
May 04, 2016 2:10:18 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE3
May 04, 2016 2:10:24 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE3
May 04, 2016 2:10:30 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE3
  ...
May 04, 2016 2:13:06 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE3
May 04, 2016 2:13:12 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [WARN] getRepositoryVersion cannot find the repository - DOCBASE3
May 04, 2016 2:13:18 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: initDefaultThumbnailsFromRepository: DOCBASE3 - 0.0
May 04, 2016 2:13:19 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: getRepositoryVersion: DOCBASE1 - 1666666 - 7.2.0050.0214  Linux64.Oracle
May 04, 2016 2:13:19 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: initDefaultThumbnailsFromRepository: DOCBASE1 - 7.2
May 04, 2016 2:13:20 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: createDefaultThumnailsInRespoitory: repo=/System/ThumbnailServer/thumbnails, local=$DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/thumbnails
May 04, 2016 2:13:22 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: createDefaultThumnailsInRespoitory: repo=/System/ThumbnailServer/thumbnails/formats, local=$DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/thumbnails/formats
May 04, 2016 2:13:22 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: createDefaultThumnailsInRespoitory: repo=/System/ThumbnailServer/thumbnails/types, local=$DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/thumbnails/types
May 04, 2016 2:13:26 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: checkRuleVersion: the default thumbnail rule version - 6.0.0.101
May 04, 2016 2:13:26 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: initRepositoryRules: repoitory=DOCBASE1, id=1666666
May 04, 2016 2:13:34 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_DEF_THUMB_MGR] Initialized default thumbnails manager
May 04, 2016 2:13:34 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Failed to get a session for DOCBASE2: DfNoServersException:: THREAD: pool-2-thread-1; MSG: [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error:  "The DocBroker running on host (content_server_01:1489) does not know of a server for the specified docbase (DOCBASE2)"; ERRORCODE: 100; NEXT: null
May 04, 2016 2:13:34 PM org.apache.catalina.core.ApplicationContext log
SEVERE: getThumbnail: [DM_TS_E_INIT_FORMATS_MGR] Falied to initialize formats.
DfNoServersException:: THREAD: pool-2-thread-1; MSG: [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error:  "The DocBroker running on host (content_server_01:1489) does not know of a server for the specified docbase (DOCBASE2)"; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbroker.ServerMapBuilder.__AW_getDataFromDocbroker(ServerMapBuilder.java:171)
        at com.documentum.fc.client.impl.docbroker.ServerMapBuilder.getDataFromDocbroker(ServerMapBuilder.java)
        at com.documentum.fc.client.impl.docbroker.ServerMapBuilder.__AW_getMap(ServerMapBuilder.java:60)
        at com.documentum.fc.client.impl.docbroker.ServerMapBuilder.getMap(ServerMapBuilder.java)
        at com.documentum.fc.client.impl.docbroker.DocbrokerClient.getServerMap(DocbrokerClient.java:152)
        at com.documentum.fc.client.impl.connection.docbase.ServerChoiceManager.__AW_updateServerChoices(ServerChoiceManager.java:159)
        at com.documentum.fc.client.impl.connection.docbase.ServerChoiceManager.updateServerChoices(ServerChoiceManager.java)
        at com.documentum.fc.client.impl.connection.docbase.ServerChoiceManager.updateServerChoicesIfNecessary(ServerChoiceManager.java:148)
        at com.documentum.fc.client.impl.connection.docbase.ServerChoiceManager.getServerChoices(ServerChoiceManager.java:47)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.getServerChoices(DocbaseConnection.java:273)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.__AW_establishNewRpcClient(DocbaseConnection.java:227)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.establishNewRpcClient(DocbaseConnection.java)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.__AW_open(DocbaseConnection.java:126)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.open(DocbaseConnection.java)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:100)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:60)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionFactory.newDocbaseConnection(DocbaseConnectionFactory.java:26)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.createNewConnection(DocbaseConnectionManager.java:180)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.getDocbaseConnection(DocbaseConnectionManager.java:110)
        at com.documentum.fc.client.impl.session.SessionFactory.newSession(SessionFactory.java:23)
        at com.documentum.fc.client.impl.session.PrincipalAwareSessionFactory.newSession(PrincipalAwareSessionFactory.java:44)
        at com.documentum.fc.client.impl.session.PooledSessionFactory.__AW_newSession(PooledSessionFactory.java:49)
        at com.documentum.fc.client.impl.session.PooledSessionFactory.newSession(PooledSessionFactory.java)
        at com.documentum.fc.client.impl.session.SessionManager.getSessionFromFactory(SessionManager.java:134)
        at com.documentum.fc.client.impl.session.SessionManager.newSession(SessionManager.java:72)
        at com.documentum.fc.client.impl.session.SessionManager.getSession(SessionManager.java:191)
        at com.documentum.thumbsrv.docbase.DocbaseConnectionMgr.getSessionForSection(DocbaseConnectionMgr.java:197)
        at com.documentum.thumbsrv.docbase.FormatMapperMgr.__AW_loadFormatsFromRepositories(FormatMapperMgr.java:95)
        at com.documentum.thumbsrv.docbase.FormatMapperMgr.loadFormatsFromRepositories(FormatMapperMgr.java)
        at com.documentum.thumbsrv.docbase.FormatMapperMgr.<init>(FormatMapperMgr.java:62)
        at com.documentum.thumbsrv.getThumbnail.__AW_init(getThumbnail.java:214)
        at com.documentum.thumbsrv.getThumbnail.init(getThumbnail.java)
        at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1266)
        at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1185)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1080)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5015)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5302)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615)
        at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1095)
        at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1617)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.__AW_run(FutureTask.java:262)
        at java.util.concurrent.FutureTask.run(FutureTask.java)
        at java.util.concurrent.ThreadPoolExecutor.__AW_runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
 
May 04, 2016 2:13:34 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_E_INIT_FAILED] Failed to initialize Documentum Thumbnail Server
May 04, 2016 2:13:34 PM org.apache.catalina.core.ApplicationContext log
INFO: Marking servlet getThumbnail as unavailable
May 04, 2016 2:13:34 PM org.apache.catalina.core.StandardContext loadOnStartup
SEVERE: Servlet /thumbsrv threw load() exception
javax.servlet.UnavailableException: [DM_TS_E_INIT_FAILED] Failed to initialize Documentum Thumbnail Server
        at com.documentum.thumbsrv.getThumbnail.__AW_init(getThumbnail.java:221)
        at com.documentum.thumbsrv.getThumbnail.init(getThumbnail.java)
        at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1266)
        at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1185)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1080)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5015)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5302)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615)
        at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1095)
        at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1617)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.__AW_run(FutureTask.java:262)
        at java.util.concurrent.FutureTask.run(FutureTask.java)
        at java.util.concurrent.ThreadPoolExecutor.__AW_runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
 

May 04, 2016 2:17:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable
May 04, 2016 2:22:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable
May 04, 2016 2:27:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable
May 04, 2016 2:32:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable
May 04, 2016 2:37:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable
May 04, 2016 2:42:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable
May 04, 2016 2:47:16 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet getThumbnail is currently unavailable

 

OK, that’s quite a long trace and I already cut some parts because it was really too long. As you can see above, the very beginning of the startup goes on properly and then the Thumbnail Server tries to contact the DOCBASE2. Hmm, why exactly is it trying to contact the DOCBASE2 while this docbase isn’t running? That’s a first strange thing. Then I cut a lot of lines, but you can see that it actually tries to do that for 3 minutes and does nothing else during that time. Once the three minutes are over, it tries to contact the DOCBASE3 for 3 minutes too (and fails again) and finally it contacts the DOCBASE1, which was the only docbase running at that time. This last one succeeded, so the Thumbnail Server should have started properly… But actually it couldn’t, because the first docbase that it tried failed (DOCBASE2), and that’s the docbase the Thumbnail Server uses to open a session to retrieve some information.

 

So I did more tests to try to understand where the issue was and what could be done to solve it. First of all, the important thing to understand here is that the Thumbnail Server isn’t trying to contact all docbases ever installed on this Content Server. It will only try to contact the docbases that have been configured for it. To be more precise, configuring a docbase for the Thumbnail Server will update the file user.dat ($DM_HOME/thumbsrv/conf/user.dat) and add inside it the configuration for this specific docbase. When the Thumbnail Server starts, it parses this file to see which docbases should be contacted during the startup.

 

Therefore my first test was to simply comment all lines related to DOCBASE2 and DOCBASE3 and then restart the Thumbnail Server. This is what I got:

May 04, 2016 3:23:32 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_RESOURCES] Initialized resource strings
May 04, 2016 3:23:32 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Loaded Manifest entries from $DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/WEB-INF/lib/thumbsrv.jar
May 04, 2016 3:23:32 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail:
        Initializing Documentum Thumbnail Server 7.2.0000 , build: 0154
        Created on
        debug             = false
        ticket_timeout    = 300
        application path  = $DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/
May 04, 2016 3:23:32 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Read path to configuration file: $DOCUMENTUM/product/7.2/thumbsrv/conf/user.dat
May 04, 2016 3:23:32 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Initializing Storage Area Manager...
May 04, 2016 3:23:40 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Initializing Connection Manager...
May 04, 2016 3:23:40 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_STORAGE_AREA_MGR] Initialized storage area manager
May 04, 2016 3:23:40 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Initializing crypto classes, key file at $DOCUMENTUM/dba/secure/aek.key
May 04, 2016 3:23:40 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: getRepositoryVersion: DOCBASE1 - 1666666 - 7.2.0050.0214  Linux64.Oracle
May 04, 2016 3:23:40 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: initDefaultThumbnailsFromRepository: DOCBASE1 - 7.2
May 04, 2016 3:23:41 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: createDefaultThumnailsInRespoitory: repo=/System/ThumbnailServer/thumbnails, local=$DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/thumbnails
May 04, 2016 3:23:43 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: createDefaultThumnailsInRespoitory: repo=/System/ThumbnailServer/thumbnails/formats, local=$DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/thumbnails/formats
May 04, 2016 3:23:43 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: createDefaultThumnailsInRespoitory: repo=/System/ThumbnailServer/thumbnails/types, local=$DOCUMENTUM/product/7.2/thumbsrv/container/webapps/thumbsrv/thumbnails/types
May 04, 2016 3:23:47 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: checkRuleVersion: the default thumbnail rule version - 6.0.0.101
May 04, 2016 3:23:47 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: initRepositoryRules: repoitory=DOCBASE1, id=1666666
May 04, 2016 3:23:54 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_DEF_THUMB_MGR] Initialized default thumbnails manager
May 04, 2016 3:23:54 PM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DM_TS_T_INIT_COMPLETE] Documentum Thumbnail Server initialization complete!

 

As you can see, the content is exactly the same except that the Thumbnail Server is now only contacting the DOCBASE1 (successfully again) but this time, the Thumbnail Server is up & running properly. Therefore commenting the lines in the file user.dat solved this issue… But that’s not enough. For me, this was clearly a bug and therefore I did more tests.

 

For the next test, I restored the lines related to DOCBASE2 and DOCBASE3 in the file user.dat and tried to change the order of the lines inside this file, because DOCBASE1 was the first docbase in this file, DOCBASE2 the second and DOCBASE3 the last one. So I thought that maybe the second docbase was the first to be initialized? And if the first docbase to be initialized (2nd in the file?) is responding, will the Thumbnail Server work?

 

Therefore I changed the order:

  • Moving DOCBASE1 to second position and DOCBASE2 to first, DOCBASE3 still last => not working, same issue; DOCBASE2 is the first to be initialized
  • Moving DOCBASE1 to second position, DOCBASE2 to last and DOCBASE3 to first => not working, same issue; DOCBASE3 is the first to be initialized

 

This test wasn’t successful because there is absolutely no logic behind which docbase will be initialized first… Therefore I restored the initial values (1 in first, 2 in second, 3 in last) and I performed a last test: stopping the DOCBASE1 and starting the DOCBASE2 so that there is still one docbase to be running which would be the first to be initialized. After doing that, I restarted a last time the Thumbnail Server and the first docbase to be initialized was indeed the DOCBASE2, which was running. Then it tried to initialise the DOCBASE1 and DOCBASE3 which weren’t running and therefore it failed. BUT in the end, the Thumbnail Server was able to start properly, even if it took 6 minutes + ~20 seconds instead of 20 seconds to start.

 

With all these results, I opened a Service Request on the EMC website, and they were able to find the root cause of both issues: why the server loops for 3 minutes doing nothing, and why it is sometimes unable to start even when one docbase responds properly. In the end, they provided us a hotfix, which has since been incorporated into newer versions of the software, and it fixed both issues.

 

 

The article Documentum story – Thumbnail Server not starting if docbases are down first appeared on the dbi services Blog.

Cloud to Ground Mashup Webinar

Jim Marion - Mon, 2016-10-24 23:45

At 11:00 AM Pacific on Tuesday, October 25th (tomorrow), I have the privilege of talking about Cloud and on-premise (ground) integration. Whether cloud to cloud, cloud to ground, or ground to ground, integration is probably one of the most difficult aspects of any implementation. Integration comes in two flavors:

  • Back-end
  • Front-end

Back-end integration is the most common. It involves integrating data between two systems, either for processing or for presenting a common user experience.

Front-end integration is about combining the user experiences of two separate applications into one. I often find that I can eliminate some back-end integrations if I can appropriately mash up front-end applications. In this webinar you will learn enterprise mashup strategies that allow you to present a seamless user experience to your users across cloud and ground applications. No modifications. Just tailoring and configuration.
