Feed aggregator

Parallel First_rows()

Jonathan Lewis - Mon, 2017-04-10 09:53

A recent posting on OTN raised the question of whether or not the “parallel” hint and the “first_rows(n)” hint were mutually incompatible. This reminded me that from time to time other posters on OTN (copying information from various websites, perhaps) have claimed that “parallel doesn’t work with first rows” or, conversely, “first rows doesn’t work with parallel”. This is one of those funny little myths that is so old that the script I’ve got to demonstrate the misconception is dated 2003 with a first test version of 8.1.7.4.

Since I haven’t run the test on any version of Oracle newer than 9.2.0.4 I thought it was time to dust it down, modernise it slightly, and run it again. So here’s the bit that creates a sample data set:


create table t1 (
        id      number,
        v1      varchar2(10),
        padding varchar2(100),
        constraint      t_pk primary key(id) using index local
)
partition by range(id) (
        partition p1000 values less than (1000),
        partition p2000 values less than (2000),
        partition p3000 values less than (3000),
        partition p4000 values less than (4000),
        partition p5000 values less than (5000)
)
;

insert into t1
select
        rownum - 1,
        rpad(rownum-1,10),
        rpad('x',100)
from
        all_objects
where
        rownum <= 5000 -- > hint to avoid WordPress formatting issue
order by 
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1', 
                method_opt       => 'for all columns size 1'
        );
end;
/

Now I’m going to run a simple query, hinted in 4 different ways:

  • no hints
  • parallel hint only: /*+ parallel */
  • first_rows(1) hint only: /*+ first_rows(1) */
  • parallel and first_rows(1): /*+ parallel first_rows(1) */

Here’s the version of the query that has both hints in place:


set serveroutput off
set linesize 156
set pagesize 60
set trimspool on

select
        /*+ parallel first_rows(1) */
        v1
from
        t1
where
        id between 1500 and 2000
;

select * from table(dbms_xplan.display_cursor(null,null,'cost outline'));

I’ve actually run the query and used the display_cursor() option to pull the plan from memory – in the original (8i) script I used autotrace and the old (deprecated, backwards compatibility only) first_rows hint. To do any other tests just clone and edit. Here are the 4 outputs from the call to display_cursor() – with a little cosmetic editing:


SQL_ID  63qnzam9b8m9g, child number 0
=====================================
select  /*+ */  v1 from  t1 where  id between 1500 and 2000

Plan hash value: 277861402

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE ITERATOR|      |   502 |  7530 |    15   (0)| 00:00:01 |     2 |     3 |
|*  2 |   TABLE ACCESS FULL      | T1   |   502 |  7530 |    15   (0)| 00:00:01 |     2 |     3 |
-------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("ID"<=2000 AND "ID">=1500))


SQL_ID  ahary3u8q88mq, child number 1
=====================================
select  /*+ parallel */  v1 from  t1 where  id between 1500 and 2000

Plan hash value: 9959369

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |       |       |     8 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   502 |  7530 |     8   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |   502 |  7530 |     8   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWC |            |
|*  4 |     TABLE ACCESS FULL| T1       |   502 |  7530 |     8   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      SHARED(2)
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access(:Z>=:Z AND :Z<=:Z)
       filter(("ID"<=2000 AND "ID">=1500))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2


SQL_ID  3m6mnk9b337dd, child number 0
=====================================
select  /*+ first_rows(1) */  v1 from  t1 where  id between 1500 and
2000

Plan hash value: 1044541683

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |      |       |       |     6 (100)|          |       |       |
|   1 |  PARTITION RANGE ITERATOR          |      |     4 |    60 |     6   (0)| 00:00:01 |     2 |     3 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| T1   |     4 |    60 |     6   (0)| 00:00:01 |     2 |     3 |
|*  3 |    INDEX RANGE SCAN                | T_PK |       |       |     2   (0)| 00:00:01 |     2 |     3 |
-----------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      FIRST_ROWS(1)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."ID"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1500 AND "ID"<=2000) -- > needs edit to avoid WordPress formatting issue


SQL_ID  9asm7t1zbv4q8, child number 1
=====================================
select  /*+ parallel first_rows(1) */  v1 from  t1 where  id between
1500 and 2000

Plan hash value: 4229065483

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |          |       |       |     3 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR                      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                | :TQ10000 |     4 |    60 |     3   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION RANGE ITERATOR       |          |     4 |    60 |     3   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWC |            |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| T1       |     4 |    60 |     3   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
|*  5 |      INDEX RANGE SCAN                | T_PK     |       |       |     1   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      FIRST_ROWS(1)
      SHARED(2)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."ID"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("ID">=1500 AND "ID"<=2000)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Critically we get four different execution plans from the four different strategies – so clearly the optimizer is perfectly happy to accept the parallel and first_rows() hints simultaneously. Note, particularly, how the first_rows(1) hint when combined with the parallel hint moved us from a parallel full tablescan to a parallel index range scan.

Whether or not it’s sensible to use the hint combination in this way is a matter for careful consideration, of course, but there could be circumstances where the combination really is the best way to get the starting row(s) from a query that otherwise has to return a large amount of data.


Getting Started with Pivotal Cloud Cache on Pivotal Cloud Foundry

Pas Apicella - Sun, 2017-04-09 22:57
Recently we announced the new cache service Pivotal Cloud Cache (PCC) for Pivotal Cloud Foundry (PCF). In short, Pivotal Cloud Cache (PCC) is an opinionated, distributed, highly available, high-speed key/value caching service. PCC can easily be scaled horizontally for capacity and performance.

In this post we will show how to provision a service instance, log in to the Pulse UI dashboard, connect using GFSH, and so on. I won't create a Spring Boot application to use the service at this stage, but that will follow in a later post.

Steps

1. First you will need the PCC service; if it has been installed it will look like this


2. Now let's view the current plans we have in place as shown below

pasapicella@pas-macbook:~$ cf marketplace -s p-cloudcache
Getting service plan information for service p-cloudcache as papicella@pivotal.io...
OK

service plan   description          free or paid
extra-small    Plan 1 Description   free
extra-large    Plan 5 Description   free

3. Now let's create a service as shown below

pasapicella@pas-macbook:~$ cf create-service p-cloudcache extra-small pas-pcc
Creating service instance pas-pcc in org pivot-papicella / space development as papicella@pivotal.io...
OK

Create in progress. Use 'cf services' or 'cf service pas-pcc' to check operation status.

4. At this point it will asynchronously create the GemFire cluster, which is essentially what PCC is under the covers. For more information on GemFire see the docs link here.

You can check the progress one of two ways.

1. Using Pivotal Apps manager as shown below


2. Using a command as follows

pasapicella@pas-macbook:~$ cf service pas-pcc

Service instance: pas-pcc
Service: p-cloudcache
Bound apps:
Tags:
Plan: extra-small
Description: Pivotal CloudCache offers the ability to deploy a GemFire cluster as a service in Pivotal Cloud Foundry.
Documentation url: http://docs.pivotal.io/gemfire/index.html
Dashboard: http://gemfire-yyyyy.run.pez.pivotal.io/pulse

Last Operation
Status: create in progress
Message: Instance provisioning in progress
Started: 2017-04-10T01:34:58Z
Updated: 2017-04-10T01:36:59Z
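Since provisioning is asynchronous, the "Status:" line of that output can be checked in a script. A minimal sketch; the parsing below runs against the captured sample output above, while the `cf` invocation in the comment is an assumption about your environment:

```shell
# Parse the "Status:" field from `cf service` output.
# The sample below is the captured output shown above; in a real check
# you would feed `cf service pas-pcc` into the same pipeline.
sample_output='Last Operation
Status: create in progress
Message: Instance provisioning in progress'

status=$(printf '%s\n' "$sample_output" | grep '^Status:' | sed 's/^Status: //')
echo "$status"    # create in progress

# A polling loop might look like this (not run here):
# until [ "$(cf service pas-pcc | awk -F': ' '/^Status/{print $2}')" = "create succeeded" ]; do
#   sleep 30
# done
```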

5. Once complete it will look as follows


6. Now in order to log into both GFSH and Pulse we are going to need to create a service key for the service we just created, which we do as shown below.

pasapicella@pas-macbook:~/pivotal/PCF/services/PCC$ cf create-service-key pas-pcc pas-pcc-key
Creating service key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...
OK

7. Retrieve service keys as shown below

pasapicella@pas-macbook:~$ cf service-key pas-pcc pas-pcc-key
Getting key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...

{
 "locators": [
  "0.0.0.0[55221]",
  "0.0.0.0[55221]",
  "0.0.0.0[55221]"
 ],
 "urls": {
  "gfsh": "http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1",
  "pulse": "http://gemfire-yyyy.run.pez.pivotal.io/pulse"
 },
 "users": [
  {
   "password": "password",
   "username": "developer"
  },
  {
   "password": "password",
   "username": "operator"
  }
 ]
}
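The gfsh endpoint and locators can be pulled out of that JSON without extra tooling. A minimal sketch, run here against a trimmed copy of the sample key above; if `jq` is installed, the one-liner in the comment is the cleaner option:

```shell
# Pull the gfsh endpoint out of the service-key JSON with basic tools.
# key_json below is a trimmed copy of the sample key shown above.
key_json='{
 "locators": [
  "0.0.0.0[55221]"
 ],
 "urls": {
  "gfsh": "http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1",
  "pulse": "http://gemfire-yyyy.run.pez.pivotal.io/pulse"
 }
}'

# Crude grep/sed extraction; with jq installed you could instead run:
#   cf service-key pas-pcc pas-pcc-key | tail -n +3 | jq -r '.urls.gfsh'
gfsh_url=$(printf '%s\n' "$key_json" | grep '"gfsh"' | sed 's/.*: "\(.*\)".*/\1/')
echo "$gfsh_url"
```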

8. Now let's log in to Pulse. The URL is available as part of the output above

Login Page


Pulse Dashboard: the dashboard page shows how many locators and cache server members are part of this default cluster



9. Now let's log in using GFSH. Once again the URL is as per the output above

- First we will need to download Pivotal GemFire so we have the GFSH client; download the zip at the link below and extract it to your file system

  https://network.pivotal.io/products/pivotal-gemfire

- Invoke as follows using the path to the extracted ZIP file

$GEMFIRE_HOME/bin/gfsh

pasapicella@pas-macbook:~/pivotal/software/gemfire/pivotal-gemfire-9.0.3/bin$ ./gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    9.0.3

Monitor and Manage Pivotal GemFire
gfsh>connect --use-http --url=http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1 --user=operator --password=password
Successfully connected to: GemFire Manager HTTP service @ http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1

gfsh>

10. Now let's create a region which we will use to store some cache data

gfsh>create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
Member                              | Status
----------------------------------- | ---------------------------------------------------------------------
cacheserver-PCF-PEZ-Heritage-RP04-1 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-1"
cacheserver-PCF-PEZ-Heritage-RP04-0 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-0"
cacheserver-PCF-PEZ-Heritage-RP04-2 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-2"
cacheserver-PCF-PEZ-Heritage-RP04-3 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-3"

Note: The region types you can create are described in the Pivotal GemFire docs, but basically in the example above we create a partitioned region where primary and backup data are distributed among the cache servers. As you can see, we asked for a single backup copy of each region entry to be placed on a separate cache server for redundancy.

http://gemfire.docs.pivotal.io/geode/developing/region_options/region_types.html#region_types

11. If we return to the Pulse Dashboard UI we will see from the "Data Browser" tab we have a region


12. Now let's add some data: a few entries which are simple String key/value pairs.
  
gfsh>put --region=/demoregion --key=1 --value="value 1"
Result : true
Key Class : java.lang.String
Key : 1
Value Class : java.lang.String
Old Value : <NULL>


gfsh>put --region=/demoregion --key=2 --value="value 2"
Result : true
Key Class : java.lang.String
Key : 2
Value Class : java.lang.String
Old Value : <NULL>


gfsh>put --region=/demoregion --key=3 --value="value 3"
Result : true
Key Class : java.lang.String
Key : 3
Value Class : java.lang.String
Old Value : <NULL>

13. Finally let's query the data we have in the cache
  
gfsh>query --query="select * from /demoregion"

Result : true
startCount : 0
endCount : 20
Rows : 3

Result
-------
value 3
value 1
value 2

NEXT_STEP_NAME : END

14. We can return to Pulse and invoke the same query from the "Data Browser" tab as shown below.



Of course, storing data in a cache isn't useful unless we actually have an application on PCF that can use it, but that will come in a separate post. Basically, we will bind to this service, connect as a GemFire client using the locators we are given as part of the service key, and then extract the cache data we created above by invoking a query.

More Information

Download PCC for PCF
https://network.pivotal.io/products/cloud-cache

Data Sheet for PCC
https://content.pivotal.io/datasheets/pivotal-cloud-cache
Categories: Fusion Middleware

ADF Multi Task Flow Binding and Tab Order

Andrejus Baranovski - Sun, 2017-04-09 19:47
I had a post a while ago about ADF multi task flow binding and loading dynamic regions - Building Custom UI Shell with ADF 11g R2. In that sample, a new region was opened in the first tab position, meaning the tab order ran from right to left. It is more natural to open tabs in left-to-right order. This can be done too - check the updated sample app: we need to control the disclosed property and add the new region to the end of the array.

Sample app - MultiTaskFlowApp_TabOrder.zip. Sample app contains four regions that can be opened dynamically. Let's say user opens region Locations:


With the improved tab order, the next region will be opened in a tab on the right (previously it was on the left, in the first position):


Tab closing works in the same way. Let's say the user wants to close the second tab:


When the tab with Departments is closed, the next tab on the right is opened - the Employees tab:


The key thing in the implementation of this requirement is the disclosed property on the dynamic tab:


It calls a bean method, which evaluates the current tab to be disclosed. If a given tab matches the value, it will be disclosed; the disclosed property for all other tabs will be reset:


Each time a new tab is loaded, it is added to the end of the array and set to be disclosed:


When the user selects a tab, the currently disclosed tab property is updated too, to make sure info about the newly disclosed tab is stored:


One more case - tab closure. When a tab is closed, the next tab is selected, unless the current tab was the last one:


When a tab is selected programmatically, we update the information about the currently selected tab too:

how to decrease the database table size

Tom Kyte - Sun, 2017-04-09 16:06
to reduce the size of the overall db space, we are deleting unused rows/data from our tables which occupy more space. But even after deleting around half of the existing rows, the size did not decrease but increased. Before delete: Rows...
Categories: DBA Blogs

Rebuild Index of partition after updating partition of record

Tom Kyte - Sun, 2017-04-09 16:06
Hi, I got the huge table with me ,consider it as table1. And I want to delete old record from it around 2 billion. This table is partition on date using range partition. So I have updated the it's date field and pulled identified record to one...
Categories: DBA Blogs

GRANTING PRIVILEGES TO OTHER USER WITHOUT USING SCHEMANAME.....

Tom Kyte - Sun, 2017-04-09 16:06
QUESTION 1 ------------ I have two user i.e. user1 and user2. user1 have 1000 object. This all objects need to be accessed by user2, for that i have to grant access privilege to user2. after that i want user2 to access user1 objects without usin...
Categories: DBA Blogs

database duplicate using rman

Tom Kyte - Sun, 2017-04-09 16:06
How i can duplicate database from production database? I have a little confusion as below. A) on standby database , Is install oracle database or software only? B) If software only then how i can connect to rman because on standby database does no...
Categories: DBA Blogs

can SQL loader read from zipped file .gz as input/data ?

Tom Kyte - Sun, 2017-04-09 16:06
im using SQL loader on unix solaries i have a file.txt like below which can use it normally as input / data file my problem is i need to use the file in .gz stat for desk space issues i've tried the below example as test & it fails, i need to ...
Categories: DBA Blogs

Documentum – Deactivation of a docbase without uninstallation

Yann Neuhaus - Sun, 2017-04-09 03:19

At some of our customers, we often install new docbases for development purposes which are used only for a short time, to avoid cross-team interactions/interferences and that kind of thing. Creating new docbases is quite easy with Documentum, but it still takes some time (unless you use silent installations or Docker components), so installing and removing docbases over and over can be a pain. For this reason, we often install new docbases but then don't uninstall them; we simply “deactivate” them. By deactivate I mean updating configuration files and scripts so that everything acts as if the docbase had never been created in the first place. As said above, some docbases are there only temporarily, but we might need them again in the near future and therefore don't want to remove them completely.

In this blog, I will show you which files should be updated and how to simulate a “deactivation” so that the Documentum components will just act as if the docbase wasn’t there. I will describe the steps for the different applications of the Content Server including the JMS and Thumbnail Server, Web Application Server (D2/DA for example), Full Text Server and ADTS.

In this blog, I will use a Documentum 7.2 environment on Linux, of course (except for the ADTS…), which therefore uses JBoss 7.1.1 (for the JMS and xPlore 1.5). In all our environments we also have a custom script that can be used to stop or start all components installed on the host. In this blog, I will therefore assume that you have a similar script (let's say it is named “startstop”) which includes a variable named “DOCBASES=” containing the list of docbases/repositories installed on the local Content Server (DOCBASES=”DOCBASE1 DOCBASE2 DOCBASE3″). For the Full Text Server, this variable will be “INDEXAGENTS=” and it will contain the names of the Index Agents installed on the local FT (INDEXAGENTS=”Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3″). If you don't have such a script or if it is set up differently, then just adapt the steps below as needed. I will put this custom startstop script at the following locations: $DOCUMENTUM/scripts/startstop on the Content Server and $XPLORE_HOME/scripts/startstop on the Full Text Server.

In the steps below, I will also assume that the docbase to be deactivated is “DOCBASE1″ and that we have two additional docbases installed in our environment (“DOCBASE2″ and “DOCBASE3″) that need to stay up&running. If you have High Availability environments, then the steps below will apply to the Primary Content Server, but for Remote Content Servers you will need to adapt the names of the docbase start and shutdown scripts placed under $DOCUMENTUM/dba: the correct name for Remote CSs should be $DOCUMENTUM/dba/dm_shutdown_DOCBASE1_<ServiceName@RemoteCSs>.

 

1. Content Server

Ok so let’s start with the deactivation of the docbase on the Content Server. Obviously the first thing to do is to stop the docbase if it is running:

ps -ef | grep "docbase_name DOCBASE1 " | grep -v grep
$DOCUMENTUM/dba/dm_shutdown_DOCBASE1

 

Once done, since we don't want the docbase to be inadvertently restarted, we need to update the custom script mentioned above. In addition, we should also rename the docbase start script so that an installer won't start the docbase either.

mv $DOCUMENTUM/dba/dm_start_DOCBASE1 $DOCUMENTUM/dba/dm_start_DOCBASE1_deactivated
vi $DOCUMENTUM/scripts/startstop
    ==> Duplicate the line starting with "DOCBASES=..."
    ==> Comment one of the two lines and remove the docbase DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        DOCBASES="DOCBASE2 DOCBASE3"
        #DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"
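These two steps can also be scripted. A sketch run against throwaway copies, so nothing real is touched (the file names and the startstop layout follow the conventions described above; adjust to your own setup, and note that the sed `-i` and `\n`-in-replacement syntax assume GNU sed):

```shell
# Sketch of the two deactivation steps against throwaway copies.
tmp=$(mktemp -d)
touch "$tmp/dm_start_DOCBASE1"    # stands in for $DOCUMENTUM/dba/dm_start_DOCBASE1
printf 'DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"\n' > "$tmp/startstop"

# 1) Rename the start script so the docbase can no longer be started.
mv "$tmp/dm_start_DOCBASE1" "$tmp/dm_start_DOCBASE1_deactivated"

# 2) Write an active DOCBASES line without DOCBASE1 and keep the old
#    one commented out just below it (GNU sed: \n in the replacement).
sed -i 's/^DOCBASES=.*/DOCBASES="DOCBASE2 DOCBASE3"\n#&/' "$tmp/startstop"

cat "$tmp/startstop"
```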

 

OK, so now the docbase has been stopped and can't be started anymore; let's move on to all the clients that were able to connect to this docbase. If you have monitoring running on the Content Server (using the crontab for example), don't forget to disable the monitoring too, since the docbase isn't running anymore. In the crontab, you can just comment out the relevant lines (using “crontab -e”). On the Java Method Server (JMS) side, there are at least two applications you should take a look at (ServerApps and the ACS). To deactivate the docbase DOCBASE1 for these two applications, apply the following steps:

cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
vi ServerApps.ear/DmMethods.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

vi acs.ear/lib/configs.jar/config/acs.properties
    ==> Reorder the “repository.name.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Reorder the “repository.acsconfig.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Comment the “repository.name.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.acsconfig.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.login.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.password.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)

 

So what has been done above? In the file web.xml, there is a reference to every docbase that is configured for the applications. Commenting these lines simply prevents the JMS from trying to contact the docbase DOCBASE1, since it's not running anymore. For the ACS, the update of the file acs.properties is a little more complex. What I usually do in this file is reorder the properties so that the docbases that aren't running have the biggest index. Since we have DOCBASE1, DOCBASE2 and DOCBASE3, DOCBASE1 being the first docbase installed, it will by default have index N°1 inside acs.properties (e.g.: repository.name.1=DOCBASE1.DOCBASE1 // repository.name.2=DOCBASE2.DOCBASE2 // …). Reordering the properties then allows you to comment out only the highest number (3 in this case) for all properties, keeping numbers 1 and 2 enabled.
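Once the properties have been reordered so that DOCBASE1 holds the highest index, commenting it out is a one-liner. A sketch on a sample file (the property names follow the acs.properties layout discussed above; the values and the already-reordered ordering are made up for illustration, and GNU sed is assumed):

```shell
# Sketch of the "comment the highest index" step on a sample file.
tmp=$(mktemp -d)
cat > "$tmp/acs.properties" <<'EOF'
repository.name.1=DOCBASE2.DOCBASE2
repository.name.2=DOCBASE3.DOCBASE3
repository.name.3=DOCBASE1.DOCBASE1
repository.acsconfig.1=configA
repository.acsconfig.2=configB
repository.acsconfig.3=configC
EOF

# Comment out every repository.*.3= property (the deactivated docbase).
sed -i 's/^repository\.[a-z]*\.3=/#&/' "$tmp/acs.properties"
grep '^#' "$tmp/acs.properties"
```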

In addition to the above, you might also have a BPM (xCP) installed, in which case you also need to apply the following step:

vi bpm.ear/bpm.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

 

Once the steps have been applied, you can restart the JMS using your preferred method. This is an example:

$DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
ps -ef | grep "MethodServer" | grep -v grep
nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &

 

After the restart of the JMS, its logs won't contain any more errors related to connection problems with DOCBASE1. For example, if you don't update the ACS file (acs.properties), the ACS will still try to project itself to all docbases and will therefore fail for DOCBASE1.

The next component I wanted to describe isn't installed by default on all Content Servers, but you might have it if you need document previews: the Thumbnail Server. Deactivating the docbase DOCBASE1 inside the Thumbnail Server is pretty easy too:

vi $DM_HOME/thumbsrv/conf/user.dat
    ==> Comment the 5 lines related to DOCBASE1:
        #[DOCBASE1]
        #user=dmadmin
        #local_folder=thumbnails
        #repository_folder=/System/ThumbnailServer
        #pfile.txt=/app/dctm/server/product/7.2/thumbsrv/conf/DOCBASE1/pfile.txt

sh -c "$DM_HOME/thumbsrv/container/bin/shutdown.sh"
ps -ef | grep "thumbsrv" | grep -v grep
sh -c "$DM_HOME/thumbsrv/container/bin/startup.sh"

 

If you don’t do that, the Thumbnail Server will try to contact all docbases configured in the “user.dat” file and because of a bug with certain versions of the Thumbnail (see this blog for more information), your Thumbnail Server might even fail to start. Therefore commenting the lines related to DOCBASE1 inside this file is quite important.

 

2. Web Application Server

For the Web Application Server hosting your Documentum Administrator and D2/D2-Config clients, the steps are pretty simple: usually nothing or almost nothing has to be done. If you really want to be clean, there might be a few things to do; it all depends on what you configured… In this part, I will assume that you are using non-exploded applications (meaning war files). I will put these WAR files under $WS_HOME/applications/. In case your applications are exploded (meaning your D2 is a folder and not a war file), you don't have to extract the files (no need to execute the jar commands). If you are using a Tomcat Application Server, the applications will usually be exploded (a folder) and placed under $TOMCAT_HOME/webapps/.

 – D2:

If you defined the LoadOnStartup property for DOCBASE1, then you might need to execute the following commands to extract the file, comment out the lines for DOCBASE1 inside it and put the file back into the war file:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties
sed -i 's,^LoadOnStartup.DOCBASE1.\(username\|domain\)=.*,#&,' WEB-INF/classes/D2FS.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties
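The sed expression can be sanity-checked against a standalone copy of the property file before repacking the war. A sketch with hypothetical property values (only the LoadOnStartup key names come from the commands above; GNU sed assumed for `-i` and `\|` alternation):

```shell
# Sanity-check the substitution on a standalone copy of the property
# file before repacking the war (property values here are made up).
tmp=$(mktemp -d)
cat > "$tmp/D2FS.properties" <<'EOF'
LoadOnStartup.DOCBASE1.username=d2user
LoadOnStartup.DOCBASE1.domain=
LoadOnStartup.DOCBASE2.username=d2user
EOF

# Same sed expression as used on the extracted WEB-INF/classes file:
# DOCBASE1's entries get commented out, DOCBASE2's are left alone.
sed -i 's,^LoadOnStartup.DOCBASE1.\(username\|domain\)=.*,#&,' "$tmp/D2FS.properties"
cat "$tmp/D2FS.properties"
```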

 

Also, if you defined a default docbase in D2 and that docbase is DOCBASE1, then you need to change the default to DOCBASE2 or DOCBASE3. In my case, I will use DOCBASE2 as the new default docbase:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties
sed -i 's,^defaultRepository=.*,defaultRepository=DOCBASE2,' WEB-INF/classes/config.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties

 

Finally, if you are using Single Sign-On, you will have an SSO user. This is defined inside the d2fs-trust.properties file in recent versions of D2, while it was defined in the shiro.ini file before. Since I'm using D2 4.5, the commands would be:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties
sed -i 's,^DOCBASE1.user=.*,#&,' WEB-INF/classes/d2fs-trust.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties

 

 – D2-Config:

Usually nothing is needed. Only running docbases will be available through D2-Config.

 

 – DA:

Usually nothing is needed, unless you have specific customization for DA, in which case you probably need to take a look at the files under the “custom” folder.

 

3. Full Text Server

For the Full Text Server, the steps are also relatively easy. The only thing that needs to be done is to stop the Index Agent related to the docbase DOCBASE1 and prevent it from starting again. In our environments, since we sometimes have several docbases installed on the same Content Server and several Index Agents installed on the same Full Text Server, we need to differentiate the names of the Index Agents. We usually just add the name of the docbase at the end: Indexagent_DOCBASE1. So let's start by stopping the Index Agent:

ps -ef | grep "Indexagent_DOCBASE1" | grep -v grep
$XPLORE_HOME/jboss7.1.1/server/stopIndexagent_DOCBASE1.sh


Once that is done, and if you use the global startstop script I mentioned earlier in this blog, the only remaining step is preventing the Index Agent from starting again, which can be done in the following way:

mv $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh_deactivated
vi $XPLORE_HOME/scripts/startstop
    ==> Duplicate the line starting with "INDEXAGENTS=..."
    ==> Comment one of the two lines and remove the Index Agent related to DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        INDEXAGENTS="Indexagent_DOCBASE2 Indexagent_DOCBASE3"
        #INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"
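For reference, the manual vi edit above can also be scripted. This is only a sketch (GNU sed assumed), shown here against a scratch file rather than the real $XPLORE_HOME/scripts/startstop:

```shell
# Demo on a scratch file; in real life point f at $XPLORE_HOME/scripts/startstop
# (and keep a backup first). GNU sed assumed.
f=$(mktemp)
echo 'INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"' > "$f"
# Comment the original line, then emit an active copy without Indexagent_DOCBASE1
sed -i -e 's/^INDEXAGENTS=/#INDEXAGENTS=/' \
       -e '/^#INDEXAGENTS=/{p;s/^#//;s/Indexagent_DOCBASE1 *//;}' "$f"
cat "$f"
```

The result is the same two-line state shown above: the original list commented out, plus an active list containing only the remaining Index Agents.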


If you have monitoring running on the Full Text Server for this Index Agent, don't forget to disable it.


4. ADTS

The last section of this blog covers the ADTS (Advanced Document Transformation Services), also called the Rendition Server. The ADTS is fairly similar to the other Documentum components: you first install the binaries and then configure a docbase to use/be supported by the ADTS. In doing so, the ADTS updates some configuration files, which therefore need to be updated again if you want to deactivate a docbase. As you know, the ADTS runs on a Windows Server, so I won't show commands in this section; I will just point you to the configuration files that need to be edited and what to update inside them. In this section, I will use %ADTS_HOME% as the folder under which the ADTS has been installed. It's usually a good idea to install the ADTS on a specific/separate drive (not the OS drive) like D:\CTS\.

So the first thing to do is to prevent the different profiles for a docbase from being loaded:

Open the file "%ADTS_HOME%\CTS\config\CTSProfileService.xml"
    ==> Comment the whole "ProfileManagerContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--ProfileManagerContext DocbaseName="DOCBASE1" ProcessExternally="false">
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightProfiles" CTSProfileName="lightWeightProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightSystemProfiles" CTSProfileName="lightWeightSystemProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\heavyWeightProfiles" CTSProfileName="heavyWeightProfile"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Profiles" CTSProfileName="lightWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/System Profiles" CTSProfileName="lightWeightSystemProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Command Line Files" CTSProfileName="heavyWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\temp_profiles" CTSProfileName="tempFileDir"/>
            <CTSServerProfile CTSProfileValue="ProfileSchema.dtd" CTSProfileName="lwProfileDTD"/>
            <CTSServerProfile CTSProfileValue="MP_PROPERTIES.dtd" CTSProfileName="hwProfileDTD"/>
            <ForClients>XCP</ForClients>
        </ProfileManagerContext-->


Once that is done, the queue processors need to be disabled too:

Open the file "%ADTS_HOME%\CTS\config\CTSServerService.xml"
    ==> Comment the two "QueueProcessorContext" XML tags related to DOCBASE1
    ==> In the end, you should have something like (I'm not displaying the whole XML tags since they are quite long...):
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_mediaserver"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->


After that, there is only one last configuration file to update: the session manager. It is the one responsible for the errors printed during startup of the ADTS, because it defines which docbases the ADTS should try to contact, with which user/password, and how many retries should be performed:

Open the file "%ADTS_HOME%\CTS\config\SessionService.xml"
    ==> Comment the whole "LoginContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--LoginContext DocbaseName="DOCBASE1" Domain="" isPerformanceLogRepository="false">
            <CTSServer AttributeName="userName" AttributeValue="adtsuser"/>
            <CTSServer AttributeName="passwordFile" AttributeValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\pfile\mspassword.txt"/>
            <CTSServer AttributeName="maxConnectionRetries" AttributeValue="10"/>
        </LoginContext-->


Once the configuration files have been updated, simply restart the ADTS services for the changes to be applied.


And there we go: you now have a clean environment with one less docbase configured, without having to remove it from all servers. As a final note, if you ever want to reactivate the docbase, simply uncomment everything that was commented above, restore the default line in the custom "startstop" script, and rename the Documentum start scripts back to their original names (without the "_deactivated" suffix) on the Content Server and Full Text Server.


The article Documentum – Deactivation of a docbase without uninstallation appeared first on Blog dbi services.

12cR1 RAC Posts -- 8f : Accessing data in the PDB in the Standby

Hemant K Chitale - Sun, 2017-04-09 03:03
Apparently, the error:

alter pluggable database pdb open
*
ERROR at line 1:
ORA-44309: unknown failure
ORA-44777: Pluggable database service cannot be started.

in my previous post was a spurious error.

I am able to access the PDB in the Standby. Thus:

[oracle@oem132 ~]$ . oraenv
ORACLE_SID = [STBY] ? STBY
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oem132 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 9 15:57:27 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2929936 bytes
Variable Size 230689520 bytes
Database Buffers 599785472 bytes
Redo Buffers 5455872 bytes
SQL> alter database mount standby database;

Database altered.

SQL> rem here I notice the alert log showing :
SQL> rem ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT NODELAY
SQL> rem Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT NODELAY
SQL> recover managed standby database cancel;
Media recovery complete.
SQL> rem I cancel the automatic recovery as I do not want to run Active Data Guard
SQL> alter database open read only;

Database altered.

SQL> alter pluggable database pdb open;
alter pluggable database pdb open
*
ERROR at line 1:
ORA-44309: unknown failure
ORA-44777: Pluggable database service cannot be started.


SQL>
SQL> select con_id, name, open_mode, open_Time from v$pdbs;

    CON_ID NAME                           OPEN_MODE  OPEN_TIME
---------- ------------------------------ ---------- --------------------------------
         2 PDB$SEED                       READ ONLY  09-APR-17 04.00.48.232 PM +08:00
         3 PDB                            READ ONLY  09-APR-17 04.00.59.105 PM +08:00


SQL>
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@oem132 ~]$ sqlplus hemant/hemant@PDB

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 9 16:02:02 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Apr 03 2017 22:49:41 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
MY_DATA

SQL> select count(*) from my_data;

COUNT(*)
----------
100

SQL> show con_id;

CON_ID
------------------------------
3
SQL>


Thus, the PDB on the Standby *does* OPEN READ ONLY and I can see the data that had been populated from the Primary.
.
.
.
Categories: DBA Blogs

Download and Install Vagrant Box Locally

Michael Dinh - Sat, 2017-04-08 14:20
References:

Vagrant Box from oravirt

GitHub from oravirt

Blog Post from oravirt

Windows binaries of GNU Wget

Copy wget to C:\Users\dinh\AppData\Local\Programs\Git\mingw64\bin (My install location)

You might be thinking: why download the box?

  1. The box may be removed from the remote catalog.
  2. The box can be staged at a preferred location instead of the default one.
dinh@CMWPHV1 MINGW64 /f/Vagrant
$ git clone --recursive https://github.com/oravirt/vagrant-vbox-si-fs.git
Cloning into 'vagrant-vbox-si-fs'...
remote: Counting objects: 59, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 59 (delta 0), reused 0 (delta 0), pack-reused 56
Unpacking objects: 100% (59/59), done.
Submodule 'ansible-oracle' (http://github.com/oravirt/ansible-oracle) registered for path 'ansible-oracle'
Cloning into 'F:/Vagrant/vagrant-vbox-si-fs/ansible-oracle'...
warning: redirecting to https://github.com/oravirt/ansible-oracle/
remote: Counting objects: 2169, done.
remote: Compressing objects: 100% (110/110), done.
remote: Total 2169 (delta 56), reused 0 (delta 0), pack-reused 2052
Receiving objects: 100% (2169/2169), 542.42 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1009/1009), done.
Submodule path 'ansible-oracle': checked out 'd311447608e28a6df7816aeb05aa38d610254931'


dinh@CMWPHV1 MINGW64 /f/Vagrant
$ cd vagrant-vbox-si-fs/


dinh@CMWPHV1 MINGW64 /f/Vagrant/vagrant-vbox-si-fs (master)
$ wget -q https://atlas.hashicorp.com/oravirt/boxes/ol73/versions/20170110/providers/virtualbox.box


dinh@CMWPHV1 MINGW64 /f/Vagrant/vagrant-vbox-si-fs (master)
$ ls
ansible.cfg  ansible-oracle/  hosts.yml  README.md  swrepo/  Vagrantfile  virtualbox.box


dinh@CMWPHV1 MINGW64 /f/Vagrant/vagrant-vbox-si-fs (master)
$ vagrant box add oravirt/ol73 file:///F:/Vagrant/vagrant-vbox-si-fs/virtualbox.box
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'oravirt/ol73' (v0) for provider:
    box: Unpacking necessary files from: file:///F:/Vagrant/vagrant-vbox-si-fs/virtualbox.box
    box:
==> box: Successfully added box 'oravirt/ol73' (v0) for 'virtualbox'!


dinh@CMWPHV1 MINGW64 /f/Vagrant/vagrant-vbox-si-fs (master)
$ vagrant up
Bringing machine 'db1-fs' up with 'virtualbox' provider...
==> db1-fs: Importing base box 'oravirt/ol73'...
==> db1-fs: Matching MAC address for NAT networking...


dinh@CMWPHV1 MINGW64 /f/Vagrant/vagrant-vbox-si-fs (master)
$ vagrant ssh

----------------------------------------
Welcome to db1-fs
OracleLinux 7.3 x86_64

FQDN: db1-fs
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    1
Memory:    2749 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

[vagrant@db1-fs ~]$ ifconfig -a
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe4f:4c27  prefixlen 64  scopeid 0x20
        ether 08:00:27:4f:4c:27  txqueuelen 1000  (Ethernet)
        RX packets 1125  bytes 118982 (116.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 814  bytes 127567 (124.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.9.61  netmask 255.255.255.0  broadcast 192.168.9.255
        inet6 fe80::a00:27ff:fe84:e732  prefixlen 64  scopeid 0x20
        ether 08:00:27:84:e7:32  txqueuelen 1000  (Ethernet)
        RX packets 76  bytes 8068 (7.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 1826 (1.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 96  bytes 8148 (7.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 96  bytes 8148 (7.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[vagrant@db1-fs ~]$

Import data from Oracle into Word's template field

Tom Kyte - Sat, 2017-04-08 03:26
Hello, I want to know how I can import data from Oracle (not a full table, just specific information) and then fill an MS Word template field with this data. Do you have any idea how I can do that? Thanks for the answer.
Categories: DBA Blogs

how to merge two records and show the results based on that

Tom Kyte - Sat, 2017-04-08 03:26
Need a bit guidance in how to merge two records and show the results based on that. So here i want to show what actual Qty is transferred from one location to another and show result based on that. Here we can show debit qty or credit qty in resul...
Categories: DBA Blogs

Using dbms_scheduler to run sqlldr using credentials for executable and database

Tom Kyte - Sat, 2017-04-08 03:26
Do you have an example of using dbms_scheduler to run sqlldr on a local server? The ideal solution will use a credential to supply to sqlldr userid=me/pw@service, so that I do not need to hard code the password anywhere. Using Oracle EE, 12c, with p...
Categories: DBA Blogs

11g to 12c

Tom Kyte - Sat, 2017-04-08 03:26
We will be migrating our databases from 11g to 12c in the next few months. We have a 1 TB database which has about 10 tables which have xml data (clob) in them. What is the best way that's proven to migrate from 11g to 12c ? In the past we have tried...
Categories: DBA Blogs

Change VARCHAR to VARCHAR2 in oracle 12c without deleting data

Tom Kyte - Sat, 2017-04-08 03:26
Hi , How can I change the data type from VARCHAR to VARCHAR2 in 12c DB without deleting records? Please advice. Regards, Prasun
Categories: DBA Blogs

Indexes on foreign keys

Tom Kyte - Sat, 2017-04-08 03:26
I have read several books which have repeatedly mentioned creating indexes on foreign keys. I know one advantage is that it eliminates table-level locks and, I have seen the benefit since I have encountered a similar problem. However, I would like to...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator