Feed aggregator

Apache Spark for Oracle Developers

Gerger Consulting - Tue, 2017-04-11 01:27
In the last few years, business problems that were traditionally solved using databases are increasingly being solved using other technologies. Apache Spark is one of the emerging technologies used for large-scale data processing.

Attend the free webinar by Ivan Lokic and learn how you can use Apache Spark in Big Data applications.  


About the Webinar:
Apache Spark is a general computing engine designed for large-scale data processing. Many well-known companies use it to process petabytes of data on 8000+ nodes, with long-running jobs measured in weeks.


In this webinar, you will learn about:
  • Apache Spark and how Spark relates to (traditional) Hadoop MapReduce technology
  • What makes Spark so fast
  • How to use the rich set of Spark APIs to design and run your ETL jobs
  • Apache Spark streaming capabilities for near real-time updates and its role in Big Data processing scenarios
  • Structured Streaming, a scalable and fault-tolerant stream processing engine which makes near real-time processing scenarios even easier
The webinar is free but space is limited.

Categories: Development

The Role of Data Visualization for Digital Marketing

Nilesh Jethwa - Mon, 2017-04-10 14:07

The field of marketing used to be a matter of simple advertising, but times have changed. Back in the day, the best advertisement got the most sales, but today's buyers are not as suggestible as those of yore.

With the advent of the Internet, even uneducated consumers have become wary of overt advertising. These days marketers have to appeal to a much more discerning base of buyers, and they also have to be able to demonstrate to their clients that the "newfangled" ways of marketing actually perform.

Data visualization is making the difference on both sides of this dilemma.

Data Visualization: the Breakdown

So what is data visualization? To answer that, we first have to answer the question, "what is data?"

Read more at http://www.infocaptor.com/dashboard/importance-of-data-visualization-for-digital-marketing

Ask Jonathan

Jonathan Lewis - Mon, 2017-04-10 14:01

Oracle Scene, the magazine of the UK Oracle User Group, is piloting a new regular feature called Ask Jonathan, a chance to get an answer to any question you may have about how the Oracle database engine works.

I’m aiming to answer two or three questions per issue over the course of the year, using a format similar to the one Tom Kyte used in Oracle Magazine:  so if you have a question about the mechanisms, strategies, or mathematics of how Oracle does its thing then attach it as a comment to this posting.

Ideally the questions will have to be quite short (no 20MB trace files, massive schema definitions, or convoluted and exotic setup requirements), and I’ll aim to write something like half a page in response.  I may summarise the question, or pick out the most interesting feature, if it’s a bit too long to publish and answer in its entirety.

 


OneCommand Step 1 error

Amardeep Sidhu - Mon, 2017-04-10 11:50

Hit this silly issue while doing an Exadata deployment for a customer. Step 1 was giving the following error:

ERROR: 192.168.99.102 configured on dm01celadm01.example.com as dm01dbadm02 does not match expected value dm01dbadm02.example.com

I wasn’t able to make sense of it for quite some time until a colleague pointed out that the reverse lookup entries should be done for the FQDN only. As is clear in the above message, the reverse lookup of the IP 192.168.99.102 returns dm01dbadm02 instead of dm01dbadm02.example.com. Fixing this in DNS resolved the issue.

Actually the customer had done reverse lookup entries for both the short hostname and the FQDN. Since DNS can return the results in any order, the error message was a bit random. Whenever the short hostname was returned first, Step 1 gave an error; but when the FQDN was the first thing returned, there was no error in Step 1 for that IP.

Categories: BI & Warehousing

12cR1 RAC Posts -- 8g : Switchover from RAC Primary to SingleInstance Standby

Hemant K Chitale - Mon, 2017-04-10 10:40
Continuing this series of posts where I have a SingleInstance/FileSystem Standby database for a RAC/ASM database ...


Checking the status of the configuration :

DGMGRL> show configuration;

Configuration - rac

Protection Mode: MaxPerformance
Members:
rac - Primary database
stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 28 seconds ago)

DGMGRL>


SQL> select thread#, max(sequence#)
2 from v$archived_log
3 group by thread#
4 order by 1
5 /

THREAD# MAX(SEQUENCE#)
---------- --------------
1 66
2 36

SQL> alter system archive log current;

System altered.


RFS[2]: Selected log 8 for thread 2 sequence 38 dbid 2519807290 branch 931825279
Mon Apr 10 23:03:17 2017
Archived Log entry 39 added for thread 2 sequence 37 ID 0x96312536 dest 1:
Mon Apr 10 23:03:18 2017
Media Recovery Waiting for thread 2 sequence 38 (in transit)
Mon Apr 10 23:03:18 2017
Recovery of Online Redo Log: Thread 2 Group 8 Seq 38 Reading mem 0
Mem# 0: /u01/app/oracle/oradata/STBY/onlinelog/group_8.300.937936389
Mem# 1: /u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_8.306.937936389
RFS[1]: Selected log 6 for thread 1 sequence 68 dbid 2519807290 branch 931825279
Mon Apr 10 23:03:23 2017
Archived Log entry 40 added for thread 1 sequence 67 ID 0x96312536 dest 1:
Mon Apr 10 23:03:23 2017
Media Recovery Waiting for thread 1 sequence 68 (in transit)
Mon Apr 10 23:03:23 2017
Recovery of Online Redo Log: Thread 1 Group 6 Seq 68 Reading mem 0
Mem# 0: /u01/app/oracle/oradata/STBY/onlinelog/group_6.298.937936361
Mem# 1: /u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_6.304.937936363


The dgmgrl status above is from node1 (collabn1) of the RAC (Primary) database.
The query on v$archived_log is from node2 (collabn2), instance RAC2 of the RAC (Primary) database.
The listing of messages in the alert log is from the Standby database.

Let me add some rows to the data I have in the PDB in the RAC database.

[oracle@collabn1 ~]$ sqlplus hemant/hemant@PDB

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 10 23:06:31 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Apr 03 2017 22:49:41 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select count(*) from my_data;

COUNT(*)
----------
100

SQL> insert into my_data select rownum+100, to_char(rownum+100)
2 from dual
3 connect by level < 201;

200 rows created.

SQL> select count(*) from my_data;

COUNT(*)
----------
300

SQL> commit;

Commit complete.

SQL>


Let me try a SWITCHOVER now.

DGMGRL> show configuration;

Configuration - rac

Protection Mode: MaxPerformance
Members:
rac - Primary database
stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 102 seconds ago)

DGMGRL> switchover to stby;
Performing switchover NOW, please wait...
Operation requires a connection to instance "STBY" on database "stby"
Connecting to instance "STBY"...
Connected as SYSDBA.
New primary database "stby" is opening...
Oracle Clusterware is restarting database "rac" ...
Switchover succeeded, new primary is "stby"
DGMGRL>


Querying on the new "standby" that is RAC :

[oracle@collabn1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 10 23:36:12 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select open_mode, database_role from v$database;

OPEN_MODE DATABASE_ROLE
-------------------- ----------------
MOUNTED PHYSICAL STANDBY

SQL> select instance_name, host_name from gv$instance;

INSTANCE_NAME
----------------
HOST_NAME
----------------------------------------------------------------
RAC1
collabn1.racattack

RAC2
collabn2.racattack


SQL>


Meanwhile, the old Standby :

[oracle@oem132 trace]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 10 23:37:34 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select open_mode, databas_role from v$database;
select open_mode, databas_role from v$database
*
ERROR at line 1:
ORA-00904: "DATABAS_ROLE": invalid identifier


SQL> select open_mode, database_role from v$database;

OPEN_MODE DATABASE_ROLE
-------------------- ----------------
READ WRITE PRIMARY

SQL> select instance_name, host_name from gv$instance;

INSTANCE_NAME
----------------
HOST_NAME
----------------------------------------------------------------
STBY
oem132.racattack


SQL>


So, host "oem132.racattack" now has the Primary database in Read Write mode. The instance name is STBY because I chose that to be the instance name through the testing.  (I could have chosen BOSTON and CHICAGO as is used in some Oracle examples / documentation).

Let me verify my data on STBY :

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2929936 bytes
Variable Size 230689520 bytes
Database Buffers 599785472 bytes
Redo Buffers 5455872 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb open;

Pluggable database altered.

SQL> connect hemant/hemant@PDB
Connected.
SQL> select count(*) from my_data;

COUNT(*)
----------
300

SQL>


The rows inserted at the RAC Primary are now in the Pluggable Database PDB at STBY.

.
.
.
Categories: DBA Blogs

User expired status even though profile password life is set to unlimited

Tom Kyte - Mon, 2017-04-10 10:26
Hi Masters! I created some profiles a week ago and added some users to one of those profiles. For some reason, one of the users got its account expired today, even though the profile password life time is set to unlimited. Do you know what happened? Profile: R...
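
A hedged starting point for investigating this (the user and profile names below are hypothetical): check the account status and the profile limits, bearing in mind that setting PASSWORD_LIFE_TIME to UNLIMITED does not un-expire an account that had already expired or entered its grace period before the change.

-- Check the account status, expiry date and assigned profile
SELECT username, account_status, expiry_date, profile
FROM   dba_users
WHERE  username = 'APP_USER';      -- hypothetical user name

-- Check the password limits of that profile
SELECT resource_name, limit
FROM   dba_profiles
WHERE  profile = 'APP_PROFILE'     -- hypothetical profile name
AND    resource_name IN ('PASSWORD_LIFE_TIME', 'PASSWORD_GRACE_TIME');
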
Categories: DBA Blogs

Redo log generation

Tom Kyte - Mon, 2017-04-10 10:26
Hi, I have a doubt about arriving at the count of redo log switches in the database. From the AWR report I can see that the log switch (derived) total is 82 and the per-hour value is 6.83. However, when I query using the below SQL: SELECT INST_ID,to_char(compl...
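
For comparison, a hedged sketch that counts log switches per hour straight from v$log_history (one row per log switch). Note that the AWR figure covers only the snapshot interval, so the two numbers need not match exactly.

-- Log switches per hour, taken from the controlfile history
SELECT to_char(first_time, 'YYYY-MM-DD HH24') AS hour,
       count(*)                               AS log_switches
FROM   v$log_history
GROUP  BY to_char(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
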
Categories: DBA Blogs

Interval partition

Tom Kyte - Mon, 2017-04-10 10:26
Can you please provide an example of interval partitioning with a 7-day interval?
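
A minimal sketch (table and column names are made up): a range-partitioned table where Oracle automatically creates a new partition for every 7-day interval beyond the starting partition.

CREATE TABLE sales_archive (
    sale_id   NUMBER,
    sale_date DATE
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTODSINTERVAL(7, 'DAY'))
(
    -- one starting partition is mandatory; weekly partitions are then created on demand
    PARTITION p_start VALUES LESS THAN (DATE '2017-01-01')
);
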
Categories: DBA Blogs

connectivity to sql server

Tom Kyte - Mon, 2017-04-10 10:26
How to connect to SQL Server through Oracle using generic connectivity? What changes are required in ORACLE_HOME\hs (folder)? I mean in inithsodbc.ora, listener.ora and tnsnames.ora after I make a System DSN in ODBC.
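
Once the gateway side (inithsodbc.ora, listener.ora, tnsnames.ora) is configured and the listener restarted, the Oracle side needs only a database link. A hedged sketch; the link name, credentials and TNS alias below are hypothetical.

-- Database link pointing at the heterogeneous services entry in tnsnames.ora
CREATE DATABASE LINK sqlserver_link
CONNECT TO "sqluser" IDENTIFIED BY "sqlpassword"  -- hypothetical SQL Server credentials
USING 'HSODBC';                                   -- hypothetical tnsnames.ora alias

-- Test the link (hypothetical remote table)
SELECT count(*) FROM "dbo"."some_table"@sqlserver_link;
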
Categories: DBA Blogs

Parallel First_rows()

Jonathan Lewis - Mon, 2017-04-10 09:53

A recent posting on OTN raised the question of whether or not the “parallel” hint and the “first_rows(n)” hint were mutually incompatible. This reminded me that from time to time other posters on OTN (copying information from various websites, perhaps) have claimed that “parallel doesn’t work with first rows” or, conversely, “first rows doesn’t work with parallel”. This is one of those funny little myths that is so old that the script I’ve got to demonstrate the misconception is dated 2003 with a first test version of 8.1.7.4.

Since I haven’t run the test on any version of Oracle newer than 9.2.0.4 I thought it was time to dust it down, modernise it slightly, and run it again. So here’s the bit that creates a sample data set:


create table t1 (
        id      number,
        v1      varchar2(10),
        padding varchar2(100),
        constraint      t_pk primary key(id) using index local
)
partition by range(id) (
        partition p1000 values less than (1000),
        partition p2000 values less than (2000),
        partition p3000 values less than (3000),
        partition p4000 values less than (4000),
        partition p5000 values less than (5000)
)
;

insert into t1
select
        rownum - 1,
        rpad(rownum-1,10),
        rpad('x',100)
from
        all_objects
where
        rownum <= 5000 -- > hint to avoid WordPress formatting issue
order by 
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1', 
                method_opt       => 'for all columns size 1'
        );
end;
/

Now I’m going to run a simple query, hinted in 4 different ways:

  • no hints
  • parallel hint only: /*+ parallel */
  • first_rows(1) hint only: /*+ first_rows(1) */
  • parallel and first_rows(1): /*+ parallel first_rows(1) */

Here’s the version of the query that has both hints in place:


set serveroutput off
set linesize 156
set pagesize 60
set trimspool on

select
        /*+ parallel first_rows(1) */
        v1
from
        t1
where
        id between 1500 and 2000
;

select * from table(dbms_xplan.display_cursor(null,null,'cost outline'));

I’ve actually run the query and used the display_cursor() option to pull the plan from memory – in the original (8i) script I used autotrace and the old (deprecated, backwards compatibility only) first_rows hint. To do any other tests just clone and edit. Here are the 4 outputs from the call to display_cursor() – with a little cosmetic editing:


SQL_ID  63qnzam9b8m9g, child number 0
=====================================
select  /*+ */  v1 from  t1 where  id between 1500 and 2000

Plan hash value: 277861402

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE ITERATOR|      |   502 |  7530 |    15   (0)| 00:00:01 |     2 |     3 |
|*  2 |   TABLE ACCESS FULL      | T1   |   502 |  7530 |    15   (0)| 00:00:01 |     2 |     3 |
-------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("ID"<=2000 AND "ID">=1500))


SQL_ID  ahary3u8q88mq, child number 1
=====================================
select  /*+ parallel */  v1 from  t1 where  id between 1500 and 2000

Plan hash value: 9959369

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |       |       |     8 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   502 |  7530 |     8   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |   502 |  7530 |     8   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWC |            |
|*  4 |     TABLE ACCESS FULL| T1       |   502 |  7530 |     8   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      SHARED(2)
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access(:Z>=:Z AND :Z<=:Z)
       filter(("ID"<=2000 AND "ID">=1500))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2


SQL_ID  3m6mnk9b337dd, child number 0
=====================================
select  /*+ first_rows(1) */  v1 from  t1 where  id between 1500 and
2000

Plan hash value: 1044541683

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |      |       |       |     6 (100)|          |       |       |
|   1 |  PARTITION RANGE ITERATOR          |      |     4 |    60 |     6   (0)| 00:00:01 |     2 |     3 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| T1   |     4 |    60 |     6   (0)| 00:00:01 |     2 |     3 |
|*  3 |    INDEX RANGE SCAN                | T_PK |       |       |     2   (0)| 00:00:01 |     2 |     3 |
-----------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      FIRST_ROWS(1)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."ID"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1500 AND "ID"<=2000) -- > needs edit to avoid WordPress formatting issue


SQL_ID  9asm7t1zbv4q8, child number 1
=====================================
select  /*+ parallel first_rows(1) */  v1 from  t1 where  id between
1500 and 2000

Plan hash value: 4229065483

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |          |       |       |     3 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR                      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                | :TQ10000 |     4 |    60 |     3   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION RANGE ITERATOR       |          |     4 |    60 |     3   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWC |            |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| T1       |     4 |    60 |     3   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
|*  5 |      INDEX RANGE SCAN                | T_PK     |       |       |     1   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      FIRST_ROWS(1)
      SHARED(2)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."ID"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("ID">=1500 AND "ID"<=2000)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Critically, we get four different execution plans from the four different strategies – so clearly the optimizer is perfectly happy to accept the parallel and first_rows() hints simultaneously. Note, particularly, how the first_rows(1) hint, when combined with the parallel hint, moved us from a parallel full tablescan to a parallel index range scan.

Whether or not it’s sensible to use the hint combination in this way is a matter for careful consideration, of course, but there could be circumstances where the combination really is the best way to get the starting row(s) from a query that otherwise has to return a large amount of data.


Getting Started with Pivotal Cloud Cache on Pivotal Cloud Foundry

Pas Apicella - Sun, 2017-04-09 22:57
Recently we announced the new cache service Pivotal Cloud Cache (PCC) for Pivotal Cloud Foundry (PCF). In short, Pivotal Cloud Cache (PCC) is an opinionated, distributed, highly available, high-speed key/value caching service. PCC can be easily horizontally scaled for capacity and performance.

In this post we will show how you would provision a service, log in to the Pulse UI dashboard, connect using GFSH, etc. I won't create a Spring Boot application to use the service at this stage BUT that will follow in a post soon enough.

Steps

1. First you will need the PCC service; if it has been installed it will look like this


2. Now let's view the current plans we have in place as shown below

pasapicella@pas-macbook:~$ cf marketplace -s p-cloudcache
Getting service plan information for service p-cloudcache as papicella@pivotal.io...
OK

service plan   description          free or paid
extra-small    Plan 1 Description   free
extra-large    Plan 5 Description   free

3. Now let's create a service as shown below

pasapicella@pas-macbook:~$ cf create-service p-cloudcache extra-small pas-pcc
Creating service instance pas-pcc in org pivot-papicella / space development as papicella@pivotal.io...
OK

Create in progress. Use 'cf services' or 'cf service pas-pcc' to check operation status.

4. At this point it will asynchronously create the GemFire cluster, which is essentially what PCC is. For more information on GemFire see the docs link here.

You can check the progress one of two ways.

1. Using Pivotal Apps manager as shown below


2. Using a command as follows

pasapicella@pas-macbook:~$ cf service pas-pcc

Service instance: pas-pcc
Service: p-cloudcache
Bound apps:
Tags:
Plan: extra-small
Description: Pivotal CloudCache offers the ability to deploy a GemFire cluster as a service in Pivotal Cloud Foundry.
Documentation url: http://docs.pivotal.io/gemfire/index.html
Dashboard: http://gemfire-yyyyy.run.pez.pivotal.io/pulse

Last Operation
Status: create in progress
Message: Instance provisioning in progress
Started: 2017-04-10T01:34:58Z
Updated: 2017-04-10T01:36:59Z

5. Once complete it will look as follows


6. Now in order to log into both GFSH and Pulse we are going to need to create a service key for the service we just created, which we do as shown below.

pasapicella@pas-macbook:~/pivotal/PCF/services/PCC$ cf create-service-key pas-pcc pas-pcc-key
Creating service key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...
OK

7. Retrieve service keys as shown below

pasapicella@pas-macbook:~$ cf service-key pas-pcc pas-pcc-key
Getting key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...

{
 "locators": [
  "0.0.0.0[55221]",
  "0.0.0.0[55221]",
  "0.0.0.0[55221]"
 ],
 "urls": {
  "gfsh": "http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1",
  "pulse": "http://gemfire-yyyy.run.pez.pivotal.io/pulse"
 },
 "users": [
  {
   "password": "password",
   "username": "developer"
  },
  {
   "password": "password",
   "username": "operator"
  }
 ]
}

8. Now let's log into Pulse. The URL is available as part of the output above

Login Page


Pulse Dashboard : From the dashboard page you can see how many locators and cache server members we have as part of this default cluster



9. Now let's log into GFSH. Once again the URL is as per the output above

- First we will need to download Pivotal GemFire so we have the GFSH client. Download the zip at the link below and extract it to your file system

  https://network.pivotal.io/products/pivotal-gemfire

- Invoke as follows using the path to the extracted ZIP file

$GEMFIRE_HOME/bin/gfsh

pasapicella@pas-macbook:~/pivotal/software/gemfire/pivotal-gemfire-9.0.3/bin$ ./gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    9.0.3

Monitor and Manage Pivotal GemFire
gfsh>connect --use-http --url=http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1 --user=operator --password=password
Successfully connected to: GemFire Manager HTTP service @ http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1

gfsh>

10. Now let's create a region which we will use to store some cache data

$ create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
  
gfsh>create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
Member                              | Status
----------------------------------- | ---------------------------------------------------------------------
cacheserver-PCF-PEZ-Heritage-RP04-1 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-1"
cacheserver-PCF-PEZ-Heritage-RP04-0 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-0"
cacheserver-PCF-PEZ-Heritage-RP04-2 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-2"
cacheserver-PCF-PEZ-Heritage-RP04-3 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-3"

Note: The region types you can create are documented in the Pivotal GemFire docs, but basically in the example above we create a partitioned region where primary and backup data are distributed among the cache servers. As you can see, we asked for a single backup copy of each region entry to be placed on a separate cache server for redundancy

http://gemfire.docs.pivotal.io/geode/developing/region_options/region_types.html#region_types

11. If we return to the Pulse Dashboard UI we will see from the "Data Browser" tab we have a region


12. Now let's just add some data, a few entries which are simple String key/value pairs only
  
gfsh>put --region=/demoregion --key=1 --value="value 1"
Result : true
Key Class : java.lang.String
Key : 1
Value Class : java.lang.String
Old Value : <NULL>


gfsh>put --region=/demoregion --key=2 --value="value 2"
Result : true
Key Class : java.lang.String
Key : 2
Value Class : java.lang.String
Old Value : <NULL>


gfsh>put --region=/demoregion --key=3 --value="value 3"
Result : true
Key Class : java.lang.String
Key : 3
Value Class : java.lang.String
Old Value : <NULL>

13. Finally let's query the data we have in the cache
  
gfsh>query --query="select * from /demoregion"

Result : true
startCount : 0
endCount : 20
Rows : 3

Result
-------
value 3
value 1
value 2

NEXT_STEP_NAME : END

14. We can return to Pulse and invoke the same query from the "Data Browser" tab as shown below.



Of course storing data in a cache isn't useful unless we actually have an application on PCF that can use the Cache BUT that will come in a separate post. Basically we will BIND to this service, connect as a GemFire Client using the locators we are given as part of the service key and then extract the cache data we have just created above by invoking a query.

More Information

Download PCC for PCF
https://network.pivotal.io/products/cloud-cache

Data Sheet for PCC
https://content.pivotal.io/datasheets/pivotal-cloud-cache
Categories: Fusion Middleware

ADF Multi Task Flow Binding and Tab Order

Andrejus Baranovski - Sun, 2017-04-09 19:47
I had a post a while ago about ADF multi task flow binding and loading dynamic regions - Building Custom UI Shell with ADF 11g R2. In that sample, a new region was opened in the first tab position, meaning the tab order ran from right to left. It is more natural to have a left-to-right tab opening order. This can be done too; check the updated sample app - we need to control the disclosed property and add the new region to the end of the array.

Sample app - MultiTaskFlowApp_TabOrder.zip. The sample app contains four regions that can be opened dynamically. Let's say the user opens the region Locations:


With the improved tab order, the next region will be opened in the tab on the right (before, it was opened on the left, in the first position):


Tab closing works in the same way. Let's say the user wants to close the second tab:


When the tab with Departments is closed, the next tab on the right is opened - the Employees tab:


The key thing in the implementation of this requirement is the disclosed property in the dynamic tab:


It calls a bean method, which evaluates whether the current tab should be disclosed. If a given tab matches the value, it will be disclosed; the disclosed property for all other tabs will be reset:


Each time a new tab is loaded, it is added to the end of the array. The new tab is set to be disclosed:


When the user selects a tab, the currently disclosed tab property is updated too, to make sure info about the newly disclosed tab is stored:


One more case is tab closure. When a tab is closed, the next tab is selected, unless the current tab was the last one:


When a tab is selected programmatically, we update information about the currently selected tab too:

how to decrease the database table size

Tom Kyte - Sun, 2017-04-09 16:06
To reduce the overall size of the database, we are deleting unused rows/data from the tables that occupy the most space. But even after deleting around half of the existing rows, the size did not decrease but increased. Before delete: Rows...
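
This is expected: DELETE frees space inside the segment but leaves the high water mark where it was, so the segment does not shrink on its own. A hedged sketch of reclaiming the space (the table name is hypothetical; SHRINK SPACE assumes an ASSM tablespace):

-- Compact the segment and lower the high water mark in place
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE;

-- Alternative: rebuild the segment (its indexes must be rebuilt afterwards)
-- ALTER TABLE big_table MOVE;
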
Categories: DBA Blogs

Rebuild Index of partition after updating partition of record

Tom Kyte - Sun, 2017-04-09 16:06
Hi, I have a huge table, consider it as table1, and I want to delete old records from it, around 2 billion. This table is partitioned on date using range partitioning. So I have updated its date field and pulled the identified records to one...
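
When rows move between partitions like this, the affected local index partitions can be rebuilt individually rather than rebuilding the whole index. A hedged sketch; the index and partition names are hypothetical.

-- Rebuild just the local index partition touched by the update
ALTER INDEX table1_date_idx REBUILD PARTITION p_old_data;

-- Or mark it unusable before the bulk operation and rebuild once at the end
ALTER INDEX table1_date_idx MODIFY PARTITION p_old_data UNUSABLE;
ALTER INDEX table1_date_idx REBUILD PARTITION p_old_data;
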
Categories: DBA Blogs

GRANTING PRIVILEGES TO OTHER USER WITHOUT USING SCHEMANAME.....

Tom Kyte - Sun, 2017-04-09 16:06
QUESTION 1 ------------ I have two users, i.e. user1 and user2. user1 has 1000 objects. All of these objects need to be accessed by user2; for that I have to grant access privileges to user2. After that I want user2 to access user1's objects without usin...
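
Two common approaches, sketched with hypothetical object names: per-object synonyms in user2's schema, or switching the session's name-resolution schema (which changes name resolution only - the grants are still required).

-- Option 1: a synonym in user2's schema for each user1 object
CREATE SYNONYM user2.emp FOR user1.emp;

-- Option 2: resolve unqualified names against user1 for the current session
ALTER SESSION SET CURRENT_SCHEMA = user1;
SELECT count(*) FROM emp;   -- now resolves to user1.emp
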
Categories: DBA Blogs

database duplicate using rman

Tom Kyte - Sun, 2017-04-09 16:06
How can I duplicate a database from a production database? I have a little confusion, as below. A) On the standby database, do I install the Oracle database or the software only? B) If software only, then how can I connect to RMAN, because the standby database does no...
Categories: DBA Blogs

can SQL loader read from zipped file .gz as input/data ?

Tom Kyte - Sun, 2017-04-09 16:06
I'm using SQL*Loader on Unix Solaris. I have a file.txt like the one below which I can use normally as the input/data file. My problem is that I need to use the file in .gz state for disk space issues. I've tried the below example as a test and it fails; I need to ...
Categories: DBA Blogs

Documentum – Deactivation of a docbase without uninstallation

Yann Neuhaus - Sun, 2017-04-09 03:19

At some of our customers, we often install new docbases for development purposes which are used only for a short time, to avoid cross-team interactions/interferences and that kind of thing. Creating new docbases is quite easy with Documentum but it still takes some time (unless you use silent installations or Docker components). Therefore installing/removing docbases over and over can be a pain. For this purpose, we often install new docbases but then we don't uninstall them; we simply "deactivate" them. By deactivate I mean updating configuration files and scripts to act just as if this docbase had never been created in the first place. As said above, some docbases are there only temporarily but we might need them again in the near future and therefore we don't want to remove them completely.

In this blog, I will show you which files should be updated and how to simulate a “deactivation” so that the Documentum components will just act as if the docbase wasn’t there. I will describe the steps for the different applications of the Content Server including the JMS and Thumbnail Server, Web Application Server (D2/DA for example), Full Text Server and ADTS.

In this blog, I will use a Documentum 7.2 environment on Linux of course (except for the ADTS…), which therefore uses JBoss 7.1.1 (for the JMS and xPlore 1.5). In all our environments we also have a custom script that can be used to stop or start all components installed on the host. Therefore in this blog, I will assume that you have a similar script (let's say that this script is named "startstop") which includes a variable named "DOCBASES=" that contains the list of docbases/repositories installed on the local Content Server (DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"). For the Full Text Server, this variable will be "INDEXAGENTS=" and it will contain the names of the Index Agents installed on the local FT (INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"). If you don't have such a script or if it is set up differently, then just adapt the needed steps below. I will put this custom startstop script at the following locations: $DOCUMENTUM/scripts/startstop on the Content Server and $XPLORE_HOME/scripts/startstop on the Full Text Server.

In the steps below, I will also assume that the docbase that needs to be deactivated is "DOCBASE1" and that we have two additional docbases installed in our environment ("DOCBASE2" and "DOCBASE3") that need to stay up & running. If you have some High Availability environments, then the steps below will apply to the Primary Content Server; for Remote Content Servers, you will need to adapt the names of the Docbase start and shutdown scripts which are placed under $DOCUMENTUM/dba: the correct name for Remote CSs should be $DOCUMENTUM/dba/dm_shutdown_DOCBASE1_<ServiceName@RemoteCSs>.

 

1. Content Server

Ok so let’s start with the deactivation of the docbase on the Content Server. Obviously the first thing to do is to stop the docbase if it is running:

ps -ef | grep "docbase_name DOCBASE1 " | grep -v grep
$DOCUMENTUM/dba/dm_shutdown_DOCBASE1

 

Once done, and since we don't want the docbase to be inadvertently restarted, we need to update the custom script mentioned above. In addition to that, we should also rename the Docbase start script so an installer won't start the docbase either.

mv $DOCUMENTUM/dba/dm_start_DOCBASE1 $DOCUMENTUM/dba/dm_start_DOCBASE1_deactivated
vi $DOCUMENTUM/scripts/startstop
    ==> Duplicate the line starting with "DOCBASES=..."
    ==> Comment one of the two lines and remove the docbase DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        DOCBASES="DOCBASE2 DOCBASE3"
        #DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"

 

Ok so now the docbase has been stopped and can't be started anymore, so let's check all the clients that were able to connect to this docbase. If you have monitoring running on the Content Server (using the crontab for example), don't forget to disable the monitoring too since the docbase isn't running anymore. In the crontab, you can just comment the lines, for example (using "crontab -e"). On the Java MethodServer (JMS) side, there are at least two applications you should take a look at (ServerApps and the ACS). To deactivate the docbase DOCBASE1 for these two applications, you should apply the following steps:

cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
vi ServerApps.ear/DmMethods.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

vi acs.ear/lib/configs.jar/config/acs.properties
    ==> Reorder the “repository.name.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Reorder the “repository.acsconfig.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Comment the “repository.name.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.acsconfig.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.login.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.password.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)

 

So what has been done above? In the file web.xml, there is a reference to all docbases that are configured for the applications. Commenting these lines in the file therefore prevents the JMS from trying to contact the docbase DOCBASE1, because it's not running anymore. For the ACS, the update of the file acs.properties is a little bit more complex. What I usually do in this file is reorder the properties so that the docbases that aren't running have the biggest index. Since we have DOCBASE1, DOCBASE2 and DOCBASE3, DOCBASE1 being the first docbase installed, it will by default have the index N°1 inside the acs.properties (e.g.: repository.name.1=DOCBASE1.DOCBASE1 // repository.name.2=DOCBASE2.DOCBASE2 // …). Reordering the properties will simply allow you to comment out just the highest number (3 in this case) for all properties while keeping the numbers 1 and 2 enabled.

In addition to the above, you might also have a BPM (xCP) installed, in which case you also need to apply the following step:

vi bpm.ear/bpm.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

 

Once the steps have been applied, you can restart the JMS using your preferred method. This is an example:

$DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
ps -ef | grep "MethodServer" | grep -v grep
nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &

 

After the restart of the JMS, its log won't contain any more errors related to connection problems to DOCBASE1. For example, if you don't update the ACS file (acs.properties), it will still try to project itself to all docbases and it will therefore fail for DOCBASE1.

The next component I wanted to describe isn't installed by default on all Content Servers but you might have it if you need document previews: the Thumbnail Server. To deactivate the docbase DOCBASE1 in the Thumbnail Server, it's pretty easy too:

vi $DM_HOME/thumbsrv/conf/user.dat
    ==> Comment the 5 lines related to DOCBASE1:
        #[DOCBASE1]
        #user=dmadmin
        #local_folder=thumbnails
        #repository_folder=/System/ThumbnailServer
        #pfile.txt=/app/dctm/server/product/7.2/thumbsrv/conf/DOCBASE1/pfile.txt

sh -c "$DM_HOME/thumbsrv/container/bin/shutdown.sh"
ps -ef | grep "thumbsrv" | grep -v grep
sh -c "$DM_HOME/thumbsrv/container/bin/startup.sh"

 

If you don’t do that, the Thumbnail Server will try to contact all docbases configured in the “user.dat” file and because of a bug with certain versions of the Thumbnail (see this blog for more information), your Thumbnail Server might even fail to start. Therefore commenting the lines related to DOCBASE1 inside this file is quite important.

 

2. Web Application Server

For the Web Application Server hosting your Documentum Administrator and D2/D2-Config clients, the steps are pretty simple: usually nothing or almost nothing has to be done. If you really want to be clean, then there might be a few things to do, it all depends on what you configured… On this part, I will consider that you are using non-exploded applications (which means: war files). I will put these WAR files under $WS_HOME/applications/. In case your applications are exploded (meaning your D2 is a folder and not a war file), then you don’t have to extract the files (no need to execute the jar commands). If you are using a Tomcat Application Server, then the applications will usually be exploded (folder) and will be placed under $TOMCAT_HOME/webapps/.

 – D2:

If you defined the LoadOnStartup property for DOCBASE1, then you might need to execute the following commands to extract the file, comment the lines for DOCBASE1 inside it and update the file back into the war file:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties
sed -i 's,^LoadOnStartup.DOCBASE1.\(username\|domain\)=.*,#&,' WEB-INF/classes/D2FS.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties

 

Also, if you defined which docbase should be the default one in D2 and that docbase is DOCBASE1, then you need to change the default docbase to DOCBASE2 or DOCBASE3. In my case, I will use DOCBASE2 as the new default docbase:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties
sed -i 's,^defaultRepository=.*,defaultRepository=DOCBASE2,' WEB-INF/classes/config.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties

 

Finally, if you are using Single Sign-On, you will have an SSO user. This is defined inside the d2fs-trust.properties file with recent versions of D2, while it was defined in the shiro.ini file before. Since I'm using a D2 4.5, the commands would be:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties
sed -i 's,^DOCBASE1.user=.*,#&,' WEB-INF/classes/d2fs-trust.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties

 

 – D2-Config:

Usually nothing is needed. Only running docbases will be available through D2-Config.

 

 – DA:

Usually nothing is needed, unless you have specific customization for DA, in which case you probably need to take a look at the files under the “custom” folder.

 

3. Full Text Server

For the Full Text Server, the steps are also relatively easy. The only thing that needs to be done is to stop the Index Agent related to the docbase DOCBASE1 and prevent it from starting again. In our environments, since we sometimes have several docbases installed on the same Content Server and several Index Agents installed on the same Full Text Server, we need to differentiate the names of the Index Agents. We usually just add the name of the docbase at the end: Indexagent_DOCBASE1. So let's start with stopping the Index Agent:

ps -ef | grep "Indexagent_DOCBASE1" | grep -v grep
$XPLORE_HOME/jboss7.1.1/server/stopIndexagent_DOCBASE1.sh

 

Once done, and if I use the global startstop script I mentioned earlier in this blog, the only remaining step is preventing the Index Agent from starting again, which can be done in the following way:

mv $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh_deactivated
vi $XPLORE_HOME/scripts/startstop
    ==> Duplicate the line starting with "INDEXAGENTS=..."
    ==> Comment one of the two lines and remove the Index Agent related to DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        INDEXAGENTS="Indexagent_DOCBASE2 Indexagent_DOCBASE3"
        #INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"

 

If you have a monitoring running on the Full Text Server for this Index Agent, don’t forget to disable it.

 

4. ADTS

The last section of this blog will talk about the ADTS (Advanced Document Transformation Services), also called the Rendition Server. The ADTS is fairly similar to all other Documentum components: first you install the different binaries and then you can configure a docbase to use/be supported by the ADTS. By doing that, the ADTS will update some configuration files that therefore need to be updated again if you want to deactivate a docbase. As you know, the ADTS runs on a Windows Server, so I won't show you commands to be executed in this section; I will just point you to the configuration files that need to be edited and what to update inside them. In this section, I will use %ADTS_HOME% as the folder under which the ADTS has been installed. It's usually a good idea to install the ADTS under a specific/separated drive (not the OS drive) like D:\CTS\.

So the first thing to do is to prevent the different profiles for a docbase from being loaded:

Open the file "%ADTS_HOME%\CTS\config\CTSProfileService.xml"
    ==> Comment the whole "ProfileManagerContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--ProfileManagerContext DocbaseName="DOCBASE1" ProcessExternally="false">
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightProfiles" CTSProfileName="lightWeightProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightSystemProfiles" CTSProfileName="lightWeightSystemProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\heavyWeightProfiles" CTSProfileName="heavyWeightProfile"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Profiles" CTSProfileName="lightWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/System Profiles" CTSProfileName="lightWeightSystemProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Command Line Files" CTSProfileName="heavyWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\temp_profiles" CTSProfileName="tempFileDir"/>
            <CTSServerProfile CTSProfileValue="ProfileSchema.dtd" CTSProfileName="lwProfileDTD"/>
            <CTSServerProfile CTSProfileValue="MP_PROPERTIES.dtd" CTSProfileName="hwProfileDTD"/>
            <ForClients>XCP</ForClients>
        </ProfileManagerContext-->

 

Once that is done, the queue processors need to be disabled too:

Open the file "%ADTS_HOME%\CTS\config\CTSServerService.xml"
    ==> Comment the two "QueueProcessorContext" XML tags related to DOCBASE1
    ==> In the end, you should have something like (I'm not displaying the whole XML tags since they are quite long...):
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_mediaserver"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->

 

After that, there is only one last configuration file to be updated and that's the session manager, which is the one responsible for the errors printed during startup of the ADTS, because it defines which docbases the ADTS should try to contact, using which user/password, and how many retries should be performed:

Open the file "%ADTS_HOME%\CTS\config\SessionService.xml"
    ==> Comment the whole "LoginContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--LoginContext DocbaseName="DOCBASE1" Domain="" isPerformanceLogRepository="false">
            <CTSServer AttributeName="userName" AttributeValue="adtsuser"/>
            <CTSServer AttributeName="passwordFile" AttributeValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\pfile\mspassword.txt"/>
            <CTSServer AttributeName="maxConnectionRetries" AttributeValue="10"/>
        </LoginContext-->

 

Once the configuration files have been updated, simply restart the ADTS services for the changes to be applied.

 

And here we go, you should have a clean environment with one less docbase configured, without having had to remove it on all servers. As a final note, if you ever want to reactivate the docbase, simply uncomment everything that was commented above, restore the default line in the custom "startstop" scripts and rename the Documentum start scripts back to their original names (without the "_deactivated") on the Content Server and Full Text Server.

 

 

This article, Documentum – Deactivation of a docbase without uninstallation, appeared first on Blog dbi services.
