Feed aggregator

Oracle Speeds Clinical Trials with Quorum Integration

Oracle Press Releases - 3 hours 50 min ago
Press Release
Oracle Speeds Clinical Trials with Quorum Integration Collaboration between Quorum Review IRB and Oracle goBalto Activate eases IRB submissions and approvals for accelerated clinical trials

SCOPE Summit – Orlando, FL.—Feb 19, 2019

Delays in institutional review board (IRB) approvals often complicate clinical research and development and delay the introduction of new therapies to market. Such bottlenecks pose additional obstacles in an industry plagued by rising development costs and increasing complexities. Out-of-the-box integration between Quorum and Oracle Health Sciences goBalto Activate cloud service addresses the inefficiencies associated with submission errors and lengthy IRB submission review cycles.

Research indicates that principal investigators spend nearly half (42 percent) of their time dealing with “administrative burdens.”  This is a category in which IRB-related issues weigh heavily, with one third of that lost time emanating from researcher omissions and errors. IRB review takes up to 2.9 percent of the total time devoted to a study and represents up to 4.7 percent of study costs.

With the integration, country-specific workflows and a management-based approach to site activation in goBalto Activate create seamless communication with Quorum. This eliminates the need to send IRB submission documents via traditional, error-prone, manual forms of communication. The integrated connection enables Quorum to push approval documents directly into Activate workflows, automatically triggering alerts and thereby saving significant time. Study teams benefit from point-and-click submission of packages, seamless data transfers and the confidence that their submission was completed.

“Quorum is proud to collaborate with Oracle Health Sciences on this critical industry need,” said Cami Gearhart, CEO of Quorum. “This collaboration aligns with our customer promise of providing exceptional service through One-Touch Collaboration™, by continuing to be a partner of choice for agile and innovative ethics review services while maintaining the highest quality of human subject protections.”

“Quorum joins other leading organizations that are committed to modernizing and rethinking how clinical trials are initiated,” said Steve Rosenberg, general manager, Oracle Health Sciences. “Automation has become critical for reducing the costs and complexities of clinical trials and this new level of integration eliminates time consuming processes and improves operational efficiencies by addressing an entrenched bottleneck in the initiation process.”

By enabling global, anytime, anywhere access to purpose-built study startup technology, Oracle Health Sciences brings measurable change to the inception of a trial. The adoption of this solution will help shorten cycle times, reduce study costs and, most importantly, speed the delivery of new therapies to patients.

Contact Info
Valerie Beaudett
Oracle
+1 650.400.7833
valerie.beaudett@oracle.com
Meghan Roman
Blanc & Otus for Oracle
+1 202.347.7113
meghan.roman@blancandotus.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As the number one vendor in Life Sciences (IDC, 2017), the number one provider of eClinical solutions (Everest Group, 2017) and powered by the number one data management technology in the world (Gartner, 2018), Oracle Health Sciences technology is trusted by 29 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for clinical trial and safety management around the globe.

About Quorum

Quorum Review IRB is the most preferred central IRB. We help clients accelerate research through faster study start-up, reduced fulfillment time, and the largest offering of complimentary study support services. The Quorum difference is One-Touch Collaboration. Your research benefits from outstanding service experiences, increased efficiency, one study contact, one start-up timeline, and one stream of coordinated communications. We are the only IRB to offer harmonized IRB and IBC review, API integrations, and Kinetiq consulting services that move your research forward.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


REGEXP_LIKE - Pattern match with complex logic

Tom Kyte - 5 hours 4 min ago
I want to do the regexpr pattern matching and I couldn't. Please help. I want to evaluate the value that is going to be inserted in DB. I want to perform a check as below. Only four special characters and alphabets are allowed. other special ch...
Categories: DBA Blogs

The relationship between null and 0

Tom Kyte - 5 hours 4 min ago
I asked you one last time and I will ask you one more question. If !=0, we know we should include null, but the result is not null. If !=0, it is different from zero and null, should not it also contain null if it is not 0?
Categories: DBA Blogs

Select from a table where a key value has matching link for all key values of another table

Tom Kyte - 5 hours 4 min ago
I have a set of three tables (T1, T2, T3) that represent dictionary data from external sources. I need to match the data from table 1 with the data in table 3 where <b><i>all</i></b> the rows in table 1 for a given OR_ID are reflected in table 3. T...
Categories: DBA Blogs

Update statement to flag rows

Tom Kyte - 5 hours 4 min ago
Hello, Ask Tom team. I'm using the query below to load rows to a destination database based on some conditions. After this is done I want to flag those rows in order to exclude them in the next SSIS ETL run. <code>select t1.invoice_sender,t1.ei...
Categories: DBA Blogs

latch undo global data

Tom Kyte - 5 hours 4 min ago
Hi team, I see spikes in oem for wait event latch undo global data . This is on insert statement , having concurrency of 50 Inserts in one second Due to heavily loaded db ash report takes high time It would be helpful if you share some s...
Categories: DBA Blogs

bound variables

Tom Kyte - 5 hours 4 min ago
I would like to know more about sql injection. Why is it so hard to tell to the Oracle that a certain string is a parameter and not a part of a Sql command? For example, can a person call himself Delete and his name can not be used in a search? And i...
Categories: DBA Blogs

unable to connect using database link

Tom Kyte - 5 hours 4 min ago
DEAR TOM, I CREATED A DATABASE LINK ON MY LOCAL DATABASE USING THE FOLLOWING COMMANDS. SQL> CREATE DATABASE LINK RP 2 CONNECT TO PRINCE 3 IDENTIFIED BY PRINCE 4 USING 'ORB'; Database link created. SQL> SELECT COUNT(*) FROM DUAL@...
Categories: DBA Blogs

Whats new in 19c - Part I (Grid Infrastructure)

Syed Jaffar - 7 hours 9 min ago
Every new Oracle release comes with a bundle of new features and enhancements. Not every new feature is relevant to everyone, but a few are worth considering. As part of a 19c new features article series, this post covers the new features introduced in Grid Infrastructure, focusing on some genuinely useful GI features along with the features deprecated and de-supported in 19.2.

Dry-run to validate Cluster upgrade readiness

Whether it's a new installation or an upgrade from a previous version, system readiness is the key factor for success. With 19c, a cluster upgrade can be dry-run to verify system readiness without actually performing the upgrade. To determine whether the system is ready, run the upgrade in dry-run mode; during the dry-run you can click the Help button on any installer page to understand what is being done or asked.

Use the command below from the 19c binaries home to run the cluster upgrade in Dry-run mode:

$ gridSetup.sh -dryRunForUpgrade

Once you have worked through all the interactive dry-run screens, check the gridSetupActions<timestamp>.log file for errors and fix them before the real upgrade run.
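As a quick sanity check, the log can be scanned for failure markers before the real run. The sketch below creates a small sample log inline so it is self-contained; the log contents and the exact set of error strings are illustrative (the real file lives under the oraInventory logs directory with a timestamp in its name).

```shell
# Simulated scan of a dry-run log for failures; the real file is
# gridSetupActions<timestamp>.log under the oraInventory logs directory.
cat > gridSetupActions_sample.log <<'EOF'
INFO:  Validating target environment
SEVERE: [INS-13013] Target environment does not meet some mandatory requirements.
INFO:  Dry run complete
EOF

# Lines worth fixing before the real upgrade run:
grep -E 'SEVERE|FATAL|INS-[0-9]+' gridSetupActions_sample.log
```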

Multiple ASMBn

It is common practice to have multiple disk groups in a RAC environment, and it is possible for some disk groups to be in MOUNT state and others in DISMOUNT state on a given database node. However, when a database instance on a node tries to communicate with (start up against) a dismounted disk group, it throws errors.

The Multiple ASMB project allows the database to use disk groups across multiple ASM instances simultaneously. This enhancement adds high availability to the RAC stack by allowing the database to keep using multiple disk groups even if a given ASM instance happens to have some dismounted disk groups.

AddNode and Cloning with Installer Wizard

Adding a new node and installing from a gold image (cloning) are simplified in 19c. Both are now available directly from the Installer Wizard, so you no longer need the addnode.sh and clone.pl scripts; these commands will be deprecated in upcoming releases.

In an upcoming post, I will discuss the ASM 19c features.




Using awk to remove attributes without values

Michael Dinh - Mon, 2019-02-18 20:41

Attributes without values are displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
CV_DESTLOC=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
HOSTING_MEMBERS=
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NEXT_CHECK_TIME=
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESOURCE_GROUP=
RESTART_ATTEMPTS=5
RESTART_DELAY=0
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
WORKLOAD_CPU=0
WORKLOAD_CPU_CAP=0
WORKLOAD_MEMORY_MAX=0
WORKLOAD_MEMORY_TARGET=0

[oracle@racnode-dc1-1 ~]$

Attributes without values are NOT displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|awk -F'=' '$2'
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESTART_ATTEMPTS=5
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
[oracle@racnode-dc1-1 ~]$

You might ask why I am doing this.

I am reviewing the configuration before implementation and will compare it with the configuration after implementation.

The fewer items I need to look at before implementation, the better.
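The behaviour of awk -F'=' '$2' can be reproduced on a tiny sample. One subtlety worth knowing: a numeric-looking second field is evaluated numerically, so attributes whose value is 0 (e.g. START_TIMEOUT=0) are dropped along with the empty ones, which is visible in the filtered output above.

```shell
# Reproduce the filter on a small sample resource dump.
cat > sample.txt <<'EOF'
NAME=ora.cvu
ACTIONS=
ACTION_TIMEOUT=60
START_TIMEOUT=0
NLS_LANG=
EOF

# awk prints a line when the pattern '$2' is true: the second '='-separated
# field must be non-empty and non-zero, so START_TIMEOUT=0 is dropped too.
awk -F'=' '$2' sample.txt
# -> NAME=ora.cvu
# -> ACTION_TIMEOUT=60
```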

Setting up OCI Compute and Storage for Builds on Oracle Developer Cloud

OTN TechBlog - Mon, 2019-02-18 17:59

With the 19.1.3 release of Oracle Developer Cloud, we have started supporting OCI-based build slaves for continuous integration and continuous deployment. You can now use OCI Compute for the build VMs and OCI Storage for artifact storage. This blog will help you understand how to configure the OCI account for Compute and Storage in Oracle Developer Cloud.

How do you get to the OCI Account configuration screen in Developer Cloud?

If your user has Organization Administrator privileges, you will land on the Organization tab by default after you log in to your Developer Cloud instance. On the Organization screen, click the OCI Account tab.

Note: You will not be able to access this tab if you do not have Organization Administrator privileges.

 

Existing users of Developer Cloud will see their OCI Classic account configuration and will notice that, unlike in the previous version, the Compute and Storage configurations have now been consolidated into a single screen. Click the Edit button to configure the OCI account.

Click the OCI radio button to open the form for configuring an OCI account. The wizard helps you configure both compute and storage for OCI use in Developer Cloud.

 

 

Before we walk through what each field in the wizard means and where to retrieve its value from the OCI console, let us understand what the message displayed at the top of the Configure OCI Account wizard (shown in the screenshot below) means:

 

It means that if you switch from OCI Classic to an OCI account, the build VMs that were created using Compute on OCI Classic will be migrated to OCI-based build VMs. It also shows the count of the existing build VMs, created using OCI Classic compute, that will be migrated. The change will also automatically migrate the build and Maven artifacts from Storage Classic to OCI storage.

Prerequisite for the OCI Account configuration:

You should have access to the OCI account, and a native OCI user with Admin privileges must exist in the OCI instance.

Note: You will not be able to use an IDCS user, or the user with which you log in to the Oracle Cloud My Services console, unless that user also exists as a native OCI user.

By native user, we mean that you should be able to see the user (e.g. ociuser) on the Governance & Administration > Identity > Users tab in the OCI console, as shown in the screenshot below. If not, you will have to create a user by following this link.

OCI Account Configuration:

Below is a list of the values you will need, an explanation of what each is, and a screenshot of the OCI console showing where it can be found. You will need these values to configure the OCI account in Developer Cloud.

Tenancy OCID: This is the cloud tenancy identifier in OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Tenancy Information, click the Copy link for the Tenancy OCID.

 

User OCID: ID for the native OCI user. Go to Governance and Administration > Identity > Users in the OCI console. For the user of your choice click on the Copy link for the User OCID.

 

Home Region: On the OCI console look at the right-hand top corner and you should find the region for your tenancy, as highlighted in the screenshot below.

 

Private Key: The user has to generate a public/private key pair in PEM format. The public key, in PEM format, has to be configured in the OCI console; use this link to understand how to create the key pair. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking the username link, click the Add Public Key button, and configure the public key there. The private key goes in the Private Key field of the Configure OCI Account wizard in Developer Cloud.
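The key pair can be generated with OpenSSL. This is a sketch following the standard OCI API signing key procedure; the file names are my own choice, and the fingerprint is computed locally so it can be compared against the value the console shows after upload.

```shell
# Generate an API signing key pair in PEM format (file names are illustrative).
openssl genrsa -out oci_api_key.pem 2048
chmod 600 oci_api_key.pem

# Public key in PEM format: this is what gets pasted into Add Public Key.
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem

# Local fingerprint of the public key; it should match the fingerprint the
# OCI console displays once the key is uploaded.
openssl rsa -pubout -outform DER -in oci_api_key.pem | openssl md5 -c
```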

 

Passphrase: If you gave a passphrase while generating the private key, configure the same here; otherwise leave it empty.

Fingerprint: The fingerprint value of the OCI user whose OCID you copied earlier from the OCI console. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking the username link, and for the public key you created, copy the fingerprint value as shown in the screenshot below.

 

Compartment OCID: You can select the root compartment, whose OCID is the same as the Tenancy OCID, but it is recommended that you create a separate compartment for the Developer Cloud build VMs for better management. You can create a new compartment by going to Governance and Administration > Identity > Compartments in the OCI console, clicking the Create Compartment button, giving Compartment Name and Description values of your choice, and selecting the root compartment as the Parent Compartment.

Click the link in the OCID column for the compartment that you have created, and then click the Copy link to copy the compartment's OCID.

 

Storage Namespace: This is the storage namespace where the artifacts will be stored in OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Object Storage Settings, copy the storage namespace name as shown in the screenshot below.

 

After you have entered all the values, select the checkbox to accept the terms and conditions. Click the Validate button; if validation is successful, click the Save button to complete the OCI account configuration.

 

You will get a confirmation dialog for the account switch from OCI Classic to OCI. Select the checkbox and click the Confirm button. By doing this, you consent to migrating the VMs and the build and Maven artifacts to OCI compute and storage, respectively. This action will also remove the artifacts from Storage Classic.

On confirmation, you should see the OCI account configured with the provided details. You can edit it at any time by clicking the Edit button.

 

You can check for the Maven and build artifacts in the projects to confirm the migration.

 

To learn more about Oracle Developer Cloud, please refer to the documentation link.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Performance issue with data densification process

Tom Kyte - Mon, 2019-02-18 11:26
Hi Tom, I am facing an issue while making sparse data dense. Here is the problem statement: We are getting price information for securities from upstream in a file and prices will come only when either there will be new security on boarded or t...
Categories: DBA Blogs

The relationship between null and 0

Tom Kyte - Mon, 2019-02-18 11:26
Coding 1) <code>select comm from emp where comm is not null and comm != 0;</code> Coding 2) <code>select comm from emp where comm != 0;</code> The results of Coding 1 and Coding 2 are the same. I know that the values of null and 0 ar...
Categories: DBA Blogs

Table and Index maintenance

Tom Kyte - Mon, 2019-02-18 11:26
Good Afternoon Tom, I'm going to develop a little SQL Tool for maintenance of compress tables and indexes for our DWH Schema, our clients have Oracle EE (11.2 and 12.2), my "big" doubt is use or not use parallel execution because i see that using ...
Categories: DBA Blogs

writing a stand-alone application to continuously monitor a database queue (AQ)

Tom Kyte - Mon, 2019-02-18 11:26
Hi Tom, A question regarding oracle AQ... I wish to write a small stand-alone application that would *constantly* monitor a queue (only one queue) for the arrival of a message and as soon as a mesage arrives, take some action. I figured I could use...
Categories: DBA Blogs

Best way to enforce cross-row constraints?

Tom Kyte - Mon, 2019-02-18 11:26
I use the database to declare (and enforce) as much application logic as I can. What I'd like to do is to enforce application constraints across related rows, if possible. As a contrived example, suppose we have a table of Agreements and a secon...
Categories: DBA Blogs

[Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue

Online Apps DBA - Mon, 2019-02-18 08:00

[Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue   Want to know how to solve an Online Patching (ADOP) issue? Visit: https://k21academy.com/appsdba23 and learn by following these steps: ✔ Run the Prepare Phase ✔ Look at the ADOP Logs ✔ Errors in ADOP ✔ Root Causes, Fix and Revise to troubleshoot and solve the error. Write […]

The post [Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Understanding grid disks in Exadata

Amardeep Sidhu - Mon, 2019-02-18 07:07

Use of Exadata storage cells seems to be a poorly understood concept. A lot of people are confused about how exactly ASM makes use of the disks from storage cells. Many folks assume there is some sort of RAID configured in the storage layer, whereas there is nothing like that. I will try to explain some of the concepts in this post.

Let’s take the example of an Exadata quarter rack that has 2 db nodes and 3 storage nodes (a node means a server here). A few things to note:

  • The space for binaries installation on db nodes comes from the local disks installed in the db nodes (600 GB * 4, expandable to 8, configured in RAID 5). If you are using OVM, the same disks are used for keeping configuration files, virtual disks for the VMs, etc.
  • All of the ASM space comes from storage cells. The minimum configuration is 3 storage cells.

So let’s try to understand what makes up a storage cell. There are 12 disks in each storage cell (the latest X7 cells come with 10 TB disks). As mentioned above, there are 3 storage cells in a minimum configuration, so we have a total of 36 disks. There is no RAID configured in the storage layer; all the redundancy is handled at the ASM level. So to create a disk group:

  • First of all, cell disks are created on each storage cell. One physical disk makes one cell disk, so a quarter rack has 36 cell disks.
  • To divide the space among the disk groups (by default only two are created, DATA and RECO; you choose how much space to give each), grid disks are created. A grid disk is a partition on a cell disk, in other words a slice of a disk. A slice from each cell disk must be part of both disk groups; we can’t have, say, DATA on 18 of the 36 disks and RECO on the other 18. That is not supported. Say you decide to allocate 5 TB to the DATA grid disks and 4 TB to the RECO grid disks (out of 10 TB on each disk, approximately 9 TB is usable). You then divide each cell disk into two parts, 5 TB and 4 TB, giving 36 slices of 5 TB each and 36 slices of 4 TB each.
  • The DATA disk group is created using the 36 5 TB slices, where the grid disks from each storage cell constitute one failgroup.
  • Similarly, the RECO disk group is created using the 36 4 TB slices.
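The arithmetic above can be sketched quickly. The numbers (3 cells, 12 disks each, 5 TB / 4 TB slices) just restate the example; the halving for redundancy is an assumption about a NORMAL-redundancy (two-way mirrored) disk group.

```shell
cells=3; disks_per_cell=12           # quarter rack with HC disks
data_slice=5; reco_slice=4           # TB carved from each ~9 TB usable cell disk

grid_disks=$((cells * disks_per_cell))
echo "grid disks per disk group: $grid_disks"           # 36
echo "raw DATA space: $((grid_disks * data_slice)) TB"  # 180 TB
echo "raw RECO space: $((grid_disks * reco_slice)) TB"  # 144 TB
# With NORMAL redundancy ASM keeps two mirrored copies across failgroups,
# so usable capacity is roughly half the raw figure.
```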

What we have discussed above is a quarter rack scenario with High Capacity (HC) disks. There can be somewhat different configurations too:

  • Instead of HC disks, you can have the Extreme Flash (EF) configuration, which uses flash cards in place of disks. Everything remains the same except the count: instead of 12 HC disks there are 8 flash cards per cell.
  • With X3, I think, Oracle introduced an eighth rack configuration, in which the db nodes come with half the cores of the quarter rack db nodes and the storage cells come with 6 disks each, so you have only 18 disks in total. Everything else works the same way.

Hope it clarified some of the doubts about grid disks.


Categories: BI & Warehousing

[Video 2 of 5] 3 Ways to Connect to Oracle Cloud

Online Apps DBA - Mon, 2019-02-18 03:17

There are 3 ways to connect to the Oracle Cloud! Leave a comment below and share how many you know. Note: We’ve covered these 3 ways in our 2nd video part of Networking In Oracle Cloud here: https://k21academy.com/1z093214 There are 3 ways to connect to the Oracle Cloud! Leave a comment below and share how […]

The post [Video 2 of 5] 3 Ways to Connect to Oracle Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Batch Architecture - Designing Your Cluster - Part 1

Anthony Shorten - Sun, 2019-02-17 18:42

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes, the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command line utility, named bedit.sh, that provides a wizard-style capability to build and maintain your configuration. By default the capability is disabled; it can be enabled by setting Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a
*************************************************
* Environment Configuration demo                *
*************************************************

50. Advanced Environment Miscellaneous Configuration
...
       Enable Batch Edit Functionality:                    true
...

Once enabled the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. The capability is easy to use with the following recommendations:

  • Flexible Options. When invoking the command you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options vary; use the -h option for a full list.
  • In-Built Help. If you do not know what a parameter, or even the object type, is about, you can use the help <topic> command. For example, when configuring threadpools, help threadpoolworker gives you advice about the approaches you can take. For a list of topics, type help with no topic.
  • Simple Commands. The utility has a simple set of commands to interact with the settings. For example, to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
  • Save the Configuration. There is a save command that makes all changes in the session reflect in the relevant file; conversely, if you make a mistake you can exit without saving the session.
  • Informative. It tells you which file you are editing at the start of the session, so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w

Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw

Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster: the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement architectures from simple to complex. Using Batch Edit, three cluster types are supported (you must choose one type per environment).

The cluster types (with their template codes), use cases and comments:

  • Single Server (ss). The cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing, as it is the simplest to implement.
  • Uni-Cast (wka). The cluster uses the unicast protocol, with the hosts that are part of the cluster explicitly named. This is recommended for sites that want to lock a cluster down to specific hosts and do not want to use multi-cast protocols. Administrators have to name the list of hosts, known as Well Known Addresses, as part of this configuration.
  • Multi-Cast (mc). The cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites that want a dynamic configuration where threadpools and submitters are accepted on demand. It requires the least configuration for product clusters, as threadpools can join the cluster dynamically from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration with the cluster restricted to a single host. The cluster configuration is restricted networking wise within the configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly in host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where the site does not want to use the multi-cast protocol or wants to micro-manage their configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values use the command:

set socket.<socketnumber> <parameter> <value>

where:

  <socketnumber> - the host number to edit
  <parameter> - either wkaaddress (the host or IP address of the server) or wkaport (the port number on that host to use)
  <value> - the value for the parameter

For example: set socket.1 wkaaddress host1

To use this cluster style ensure the following:

  • Use the same port number per host. Try to use the same broadcast port on each host in the cluster. If the ports differ, the port number in the main file for the affected machines in the cluster has to be changed to define that port.
  • Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).

Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast port and IP Address are broadcast across your network per cluster. It requires very little configuration and the threadpools can dynamically connect to that cluster with little configuration. It uses the multi-cast protocol which network administrators either love or hate. The configuration is similar to the Single Server but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administrator Guide for additional configuration advice.
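A multi-cast cluster therefore needs only the two installation settings named above. A hypothetical ENVIRON.INI fragment might look like this; the address and port values are purely illustrative and must be chosen to be unique per environment:

```shell
# Illustrative values only; pick an unused multi-cast address/port per cluster.
COHERENCE_CLUSTER_ADDRESS=239.192.0.10
COHERENCE_CLUSTER_PORT=42424
```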

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

  • Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster to save configuration effort.
  • Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment to prevent cross environment unintentional clustering.
  • Set a Cluster Type and Stick with it. It is possible to migrate from one cluster type to another (without changing other objects) but to save time it is better to lock in one type and stick with it for the environment.
  • Avoid using Prod Mode. There is a mode in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross-cluster issues. The Prod mode is recommended for Production systems only. Note: There are further safeguards built into the Oracle Utilities Application Framework to prevent cross-cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster.

Next Steps

Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).
