Feed aggregator

Using JMeter to run load tests on an ADF application protected by Oracle Access Manager Single Sign On

Yann Neuhaus - Tue, 2016-10-18 10:50

Introduction

In one of my missions, I was asked to run performance and load tests on an ADF application running in an Oracle Fusion Middleware environment protected by Oracle Access Manager. For this task we decided to use Apache JMeter because it provides the control needed over the tests and uses multiple threads to emulate multiple users. It can also be used for distributed testing, where multiple systems take part in a stress test. Additionally, the GUI provides an easy way to manage the load test scenarios, which can be recorded using the HTTP(s) Test Script Recorder.

Prepare a JMeter test plan

A good starting point is the following blog: My Shot on Using JMeter to Load Test Oracle ADF Applications

The blog above explains how to record and use a test plan in JMeter.
It provides a SimplifiedADFJMeterPlan.jmx JMeter test plan that can be used as a base for creating your own.
This ADF starter test plan has to be reviewed for the jsessionId and afrLoop extractors, as their regular expressions may need to be adapted depending on the version of the ADF software.

In this environment, Oracle Fusion Middleware ADF 11.1.2.4, WebLogic Server 10.3.6 and Oracle Access Manager 11.2.3 were used.
The regular expressions for afrLoop and jsessionid needed to be updated as shown below:

reference name   regular expression
afrLoop          _afrLoop', '([0-9]{13,16})
jsessionId       ;jsessionid=([-_0-9A-Za-z!]{62,63})
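
For reference, here is how the afrLoop row maps onto the fields of JMeter's Regular Expression Extractor (a sketch only; the Template, Match No. and Default Value entries are assumptions, not values taken from the original plan):

Reference Name: afrLoop
Regular Expression: _afrLoop', '([0-9]{13,16})
Template: $1$
Match No.: 1
Default Value: (empty)

The requestId extractor described below is configured the same way.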

Coming to the Single Sign On layer, it appears that the Oracle Access Manager compatible login screen requires three parameters:

  • username
  • password
  • request_id

The username and password values are initially captured when recording the test scenario. To run the same scenario with multiple users, a CSV file is used to store the test users and passwords; this is detailed later in this blog.
The request_id is provided by the Oracle Access Manager Single Sign On layer and needs to be fetched and re-injected into the authentication URL.
To handle this, a new variable is created (again with a Regular Expression Extractor) using the regular expression below.

reference name   regular expression
requestId        name='request_id' value='([&#;0-9]{18,25})'

Once the test plan scenario is recorded, look for the OAM standard “/oam/server/auth_cred_submit” URL and change the request_id parameter to use the defined requestId variable.

[Screenshot: OAM authentication URL parameters]
name: request_id   value: ${requestId}

After those changes, the new JMeter test plan can be run.

Steps to run the test plan with multiple users

In JMeter:
Right click on the “Thread Group” in the tree.
Select “Add” – “Config Element” – “CSV Data Set Config”.
[Screenshot: adding a CSV Data Set Config in JMeter]

Create a CSV file containing USERNAME,PASSWORD pairs and save it in a folder on your JMeter server. Make sure the users exist in OAM/OID:

ahunold,welcome1
jcooper,welcome1
monty,welcome1
king,welcome1
scott,welcome1

Adapt the path in the “CSV Data Set Config” and define the variable names (USERNAME and PASSWORD) in “Variable Names (comma-delimited)”, as sketched below.
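
As a sketch, the relevant fields of the CSV Data Set Config could look like this (the file location is an assumption):

Filename: /home/jmeter/users.csv
Variable Names (comma-delimited): USERNAME,PASSWORD
Delimiter: ,
Recycle on EOF?: True
Stop thread on EOF?: False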
Look for the URL that submits the authentication – /oam/server/auth_cred_submit – and click on it. In the right frame, replace the username and password captured during the recording with ${USERNAME} and ${PASSWORD} respectively.
Finally, you can adapt the thread group of your test plan to the number of users (Number of Threads) and iterations (Loop Count) you want to run, and execute it. The Ramp-Up Period (in seconds) is the time JMeter takes to start all the threads: with 10 threads and a ramp-up of 100 seconds, for example, a new thread starts every 10 seconds.
The test plan can be executed now and results visualised in tree, graph or table views.
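
For longer load runs, the plan is typically executed in non-GUI mode. A minimal sketch, assuming the jmeter launcher is on the PATH and using illustrative file names:

jmeter -n -t ADF_OAM_TestPlan.jmx -l results.jtl

Here -n selects non-GUI mode, -t points to the recorded test plan and -l writes the results to a file that can be loaded into the listeners afterwards.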


The post Using JMeter to run load tests on an ADF application protected by Oracle Access Manager Single Sign On appeared first on Blog dbi services.

Massive Update

Tom Kyte - Tue, 2016-10-18 09:06
Dear Team, Kindly help me for below problems 1) Needs to update all the records of 1 billion record table. Only one column values to be updated to reverse the order (abcde -> edcba) 2) Needs to update records based on a column condition (column ...
Categories: DBA Blogs

Extract Logical Operators (AND/OR) from String

Tom Kyte - Tue, 2016-10-18 09:06
I want to help to extract Logical Operators (AND/OR) from string having n numbers of operators For Example: ((1=1 AND 1=1) OR 1=1) AND (1=1 AND 1=1)
Categories: DBA Blogs

MODEL CLAUSE

Tom Kyte - Tue, 2016-10-18 09:06
Hi? this my query I have a dictionary table create table D_CONFIG_TST ( row_name VARCHAR2(1500), -- text field with name of row row_number VARCHAR2(50), -- text field with row number for order row_array VARCHAR2(15...
Categories: DBA Blogs

Change Data Capture to get modified records of a table

Tom Kyte - Tue, 2016-10-18 09:06
Hello Tom, Hope you are doing good! Request your suggestions for the following scenario. We have a requirement wherein we have to migrate modified/delta data on a table(Modified by DML statements) from Oracle to MongoDb. To do this i have su...
Categories: DBA Blogs

DBMS_OUTPUT to Query output

Tom Kyte - Tue, 2016-10-18 09:06
Hello Tom, First thanks for helping me out. Few days ago you explained how to insert dbms_output into temporary tables or nested table. This was the example: This example works perfectly if I have write permission in the database, Is there a wa...
Categories: DBA Blogs

The output file alignment must be adjusted.

Tom Kyte - Tue, 2016-10-18 09:06
Hi Team, I executed shell script file. The shell script file has the below sql information in it. SET SERVEROUTPUT ON; SET LINESIZE 4600; #SET TRIMSPOOL ON; SET WRAP OFF; SET HEADING OFF; #SET TRIMOUT ON; SET TIMING ON; SET FEEDBA...
Categories: DBA Blogs

"Error occurred : ORA-01555: snapshot too old: rollback segment number 313 with name "_SYSSMU313_2192191193$" too small"

Tom Kyte - Tue, 2016-10-18 09:06
Hi Tom, I'm posting a email conversation between a DBA and a Technical lead in our organization. Is the answer you gave 15 years back on "ORA-01555" is still valid for the new oracle releases. <b>DBA --</b> <i>"This is due to long running qu...
Categories: DBA Blogs

Weblogic 11g to 12c: strictness in listen address

Darwin IT - Tue, 2016-10-18 07:50
Let's say you have a virtual machine with two network adapters, both set on 'HostOnly'.
I used to do that and set the first one of those to a fixed IP address, say 10.0.0.1. To this one I coupled the hostname, for instance darlin-vce-db, using the /etc/hosts file. That way I had a fixed, always existing network address for the database.

Together with the database, you install WebLogic, for instance to serve SOA Suite, OSB, or whatever custom application you want to run. Now, wouldn't it be nice to be able to reach WebLogic from a browser outside the virtual machine? Of course, because this is what you do nowadays: almost everywhere I go, servers are hosted on virtualized computing environments, like VMWare VSE or Oracle VM. So that's where the second adapter comes in, dynamically coupled to an address of the form 192.168.56.101, for instance. Externally, using the /etc/hosts file on your host OS (in my case Windows), you couple that address to darlin-vce-db as well.

So you have two /etc/hosts settings for the hostname, darlin-vce-db:
Internally, in the VM:
10.0.0.1        darlin-vce-db     darlin-vce-db.darwin-it.local
And externally on your host OS:
192.168.56.101  darlin-vce-db     darlin-vce-db.darwin-it.local

Nothing special, right? Well, WebLogic 11g apparently just listens on the hostname 'darlin-vce-db' if that is entered as the listen address. It does not seem to care if a request for 'darlin-vce-db' comes in via 192.168.56.101 instead of the 10.0.0.1 address to which the hostname is actually bound.

In this particular case, however, WebLogic 12c seems to behave differently. If you provide 'darlin-vce-db' as the listen address, and that name is bound to the network adapter with address 10.0.0.1, it expects requests to come in via that adapter. It seems to ignore requests that come in via other adapters (in my case 192.168.56.101).

You can partially solve this using channels: in the WebLogic Console, navigate to the particular managed server and click Protocols, then Channels.
Create a new channel:

Give it a name like 'Extern-Intern', or something else that properly denotes its purpose, and choose 'http' as the protocol:

Then provide the internal address, for instance 'darlin-vce-db', and the external listen address:

Leave the ports at the default listen port in this case, then finish the wizard.
Although this helps in connecting to the WebLogic console and EM, and, with the same method applied on the SOA server, to the SOA Composer (soaserver:port/soa/composer), the BPM Composer (soaserver:port/bpm/composer), etc., it will not work for JDeveloper.

When deploying a SOA composite from JDeveloper, you define/choose an ApplicationServer connection that points to the AdminServer. When a composite is deployed, the AdminServer figures out which SOA servers are running and lets JDeveloper deliver the composite to those servers. But then the SOA server(s) refuse the connections from JDeveloper. Testing the ApplicationServer connection will show success for the HTTP connection to the AdminServer, but failures for all the other components.

The solution is then to pick one particular network adapter/IP address as the listen address and to make sure that, both internally and externally, the particular hostname is coupled to that same IP address.

Oracle Enterprise Manager Cloud Control 13c Release 2 (13cR2) Installation/Upgrade

Tim Hall - Tue, 2016-10-18 02:51

Oracle Enterprise Manager Cloud Control 13c Release 2 (13cR2) was released a couple of weeks ago. In a previous post I mentioned we were going to stop our rollout of 13cR1 agents to production and upgrade from 13cR1 to 13cR2 before we resumed.

I don’t like doing anything at work that I haven’t already tried at home, so the first step in that process was for me to do some clean installs and practice upgrades. After a busy weekend and a late night last night I think I’m happy with the process. That resulted in these articles.

If you’ve done a 13cR1 installation, the clean 13cR2 installation will come as no surprise. They now have multitenant and non-CDB repository templates to choose from. I used the multitenant template in this example. The installation was fine on both OL6 and OL7, so nothing out of the ordinary to report there.

The upgrade process was similar to previous point release upgrades too. We used the non-CDB template, the only one available at the time, to build our 13cR1 installation, so not surprisingly I practised the upgrade using that as a starting point. The upgrade process went fine, but I got a lot of warning messages during the process. It was all working fine at the end though…

I guess we will start rolling this bad-boy out once I get back from the APAC Tour and Bulgaria (BGOUG).

Cheers

Tim…

Oracle Enterprise Manager Cloud Control 13c Release 2 (13cR2) Installation/Upgrade was first posted on October 18, 2016 at 8:51 am.

Documentum story – Change the location of the Xhive Database for the DSearch (xPlore)

Yann Neuhaus - Tue, 2016-10-18 02:00

When using xPlore with Documentum, you need to set up a DSearch, which is used to perform the searches, and this DSearch uses an Xhive Database in the background. This is a native XML database that persists XML DOMs and provides access to them using XPath and XQuery. In this blog, I will share the steps needed to change the location of the Xhive Database used by the DSearch. You usually don't want to move this XML database every day, but it might be useful as a one-time action. In this customer case, one of the DSearches in a Sandbox/Dev environment had been installed using a wrong path for the Xhive Database (not following our installation conventions) and therefore we had to correct that, just to keep the alignment between all environments and to avoid a complete uninstall/reinstall of the IndexAgents + DSearch.


In the steps below, I will assume that xPlore has been installed under “/app/xPlore” and that the Xhive Database has been created under “/app/xPlore/data”. This is the default value; when installing an IndexAgent, it creates, under the data folder, a sub-folder whose name matches the DSearch domain's name (usually the name of the docbase/repository). In this blog I will show you how to move this Xhive Database to “/app/xPlore/test-data” without having to reinstall everything. This means that the Xhive Database will NOT be deleted/recreated from scratch (that is also possible) and therefore you will NOT have to perform a full reindex, which would have taken a looong time.


So let’s start with stopping all components first:

[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopIndexagent.sh"
[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopPrimaryDsearch.sh"


Once this is done, we need to backup the data and config files, just in case…

[xplore@xplore_server_01 ~]$ current_date=$(date "+%Y%m%d")
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/data/ /app/xPlore/data_bck_$current_date
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/config/ /app/xPlore/config_bck_$current_date
[xplore@xplore_server_01 ~]$ mv /app/xPlore/data/ /app/xPlore/test-data/


Ok, now everything in the background is prepared and we can start the actual steps to move the Xhive Database. The first step is to change the data location in the files stored in the config folder. There are actually two files that need to be updated: indexserverconfig.xml and XhiveDatabase.bootstrap. In the first file, you need to update the “storage-location” path that defines where the data are kept, and in the second file you need to update all paths pointing to the database files. Here are some simple commands to replace the old path with the new one and check that it has been done properly:

[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/indexserverconfig.xml
[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/XhiveDatabase.bootstrap
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep -A2 "<storage-locations>" /app/xPlore/config/indexserverconfig.xml
    <storage-locations>
        <storage-location path="/app/xPlore/test-data" quota_in_MB="10" status="not_full" name="default"/>
    </storage-locations>
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep "/app/xPlore/test-data" /app/xPlore/config/XhiveDatabase.bootstrap | grep 'id="[0-4]"'
        <file path="/app/xPlore/test-data/xhivedb-default-0.XhiveDatabase.DB" id="0"/>
        <file path="/app/xPlore/test-data/SystemData/xhivedb-SystemData-0.XhiveDatabase.DB" id="2"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/xhivedb-SystemData#MetricsDB-0.XhiveDatabase.DB" id="3"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/PrimaryDsearch/xhivedb-SystemData#MetricsDB#PrimaryDsearch-0.XhiveDatabase.DB" id="4"/>
        <file path="/app/xPlore/test-data/xhivedb-temporary-0.XhiveDatabase.DB" id="1"/>


The next step is to announce the new location of the data folder to the DSearch, so it can create future Xhive Databases at the right location; this is done in the file indexserver-bootstrap.properties. After the update, this file should look like the following:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties
# (c) 1994-2009, EMC Corporation. All Rights Reserved.
#Wed May 20 10:40:49 PDT 2009
#Note: Do not change the values of the properties in this file except xhive-pagesize and force-restart-xdb.
node-name=PrimaryDsearch
configuration-service-class=com.emc.documentum.core.fulltext.indexserver.core.config.impl.xmlfile.IndexServerConfig
indexserver.config.file=/app/xPlore/config/indexserverconfig.xml
xhive-database-name=xhivedb
superuser-name=superuser
superuser-password=****************************************************
adminuser-name=Administrator
adminuser-password=****************************************************
xhive-bootstrapfile-name=/app/xPlore/config/XhiveDatabase.bootstrap
xhive-connection-string=xhive://xplore_server_01:9330
xhive-pagesize=4096
# xhive-cache-pages=40960
isPrimary = true
licensekey=**************************************************************
xhive-data-directory=/app/xPlore/test-data
xhive-log-directory=


In this file:

  • indexserver.config.file => defines the location of the indexserverconfig.xml file that must be used to recreate the DSearch Xhive Database.
  • xhive-bootstrapfile-name => defines the location and name of the Xhive bootstrap file that will be generated during bootstrap and will be used to create the empty DSearch Xhive Database.
  • xhive-data-directory => defines the path of the data folder that will be used by the Xhive bootstrap file. This will therefore be the future location of the DSearch Xhive Database.


As you probably understood, to change the data folder, you just have to adjust the value of the parameter “xhive-data-directory” to point to the new location: /app/xPlore/test-data.
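
A sketch of how this change could be scripted, in the style of the sed commands used earlier (assuming the property sits on a single line of the file):

[xplore@xplore_server_01 ~]$ sed -i "s,^xhive-data-directory=.*,xhive-data-directory=/app/xPlore/test-data," /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties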


When this is done, the third step is to change the Lucene temp path:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/xdb.properties
xdb.lucene.temp.path=/app/xPlore/test-data/temp


In this file, xdb.lucene.temp.path defines the path for temporary, uncommitted indexes. It is therefore only used for temporary indexes, but it is still good practice to change this location: it is also part of the DSearch data and it keeps everything consistent.


Then the next step is to clean the cache and restart the DSearch. You can use your custom start/stop script if you have one or use something like this:

[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*
[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startPrimaryDsearch.sh & sleep 5;mv nohup.out nohup-PrimaryDsearch.out"


Once done, just verify in the log file generated by the start command (for me: /app/xPlore/jboss7.1.1/server/nohup-PrimaryDsearch.out) that the DSearch has started successfully. If so, you can also start the IndexAgent:

[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startIndexagent.sh & sleep 5;mv nohup.out nohup-Indexagent.out"


And here we are, the Xhive Database is now located under the “test-data” folder!


Additional note: as said at the beginning of this blog, it is also possible to recreate an empty Xhive Database and change its location at the same time. Recreating an empty DB gives the same result as the steps above, BUT you will have to perform a full reindexing, which will take a lot of time if this isn't a new installation (the more documents are indexed, the more time it will take)… To perform this operation, the steps are mostly the same and are summarized below:

  1. Backup the data and config folders
  2. Remove all files inside the config folder except the indexserverconfig.xml
  3. Create a new (empty) data folder with a different name like “test-data” or “new-data” or…
  4. Update the file indexserver-bootstrap.properties with the reference to the new path
  5. Update the file xdb.properties with the reference to the new path
  6. Clean the cache and restart the DSearch+IndexAgents

Basically, the steps are exactly the same except that you don’t need to update the files indexserverconfig.xml and XhiveDatabase.bootstrap. The first one is normally updated by the DSearch automatically and the second file will be recreated from scratch using the right data path thanks to the update of the file indexserver-bootstrap.properties.


Have fun :)


The post Documentum story – Change the location of the Xhive Database for the DSearch (xPlore) appeared first on Blog dbi services.

CDB resource plan: shares and utilization_limit

Yann Neuhaus - Mon, 2016-10-17 16:00

I'm preparing some slides about PDB security (lockdown) and isolation (resources) for DOAG and, as usual, I have more info to share than can fit in 45 minutes. In order to avoid the frustration of removing slides, I usually share them in blog posts. Here are the basic concepts of CDB resource plans in multitenant: shares and resource limits.

The CDB resource plan is mainly about CPU. It also governs the parallel query degree, and I/O when on Exadata, but the main resource is CPU: sessions that are not allowed to use more CPU will wait on ‘resmgr: cpu quantum’. In a cloud environment where you provision a PDB, like in the new Exadata Express Cloud Service, you need to ensure that one PDB does not take all CDB resources, but you also have to ensure that resources are fairly shared.
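
As a side note, a quick way to see which containers are being throttled is to look for that wait event in ASH. A minimal sketch, run from the root container (and assuming Diagnostics Pack licensing for v$active_session_history):

sqlplus / as sysdba <<'SQL'
select con_id, session_state, event, count(*)
from v$active_session_history
where sample_time > sysdate - 5/1440
group by con_id, session_state, event
order by count(*) desc;
SQL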

resource_limit

Let's start with the resource limit. It does not depend on the number of PDBs: it is defined as a percentage of the CDB resources. Here I have a CDB with two PDBs and I'll run a workload on one PDB only: 8 sessions, all CPU-bound, on PDB1.

I’ve defined a CDB resource plan that sets the resource_limit to 50% for PDB1:

CURRENT_TIMESTAMP PLAN PLUGGABLE_DATABASE DIRECTIVE_TYPE SHARES UTIL
------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN ORA$AUTOTASK AUTOTASK 90
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE DEFAULT_DIRECTIVE 1 100
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN PDB1 PDB 1 50
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100
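
For reference, a plan with directives like these could be created along the following lines from the root container. This is a minimal sketch with DBMS_RESOURCE_MANAGER (not necessarily the exact script used here): comments and the AUTOTASK directive are omitted, and only the plan and PDB names from the listing above are reused.

sqlplus / as sysdba <<'SQL'
exec dbms_resource_manager.create_pending_area
exec dbms_resource_manager.create_cdb_plan(plan => 'MY_CDB_PLAN', comment => 'shares and utilization_limit demo')
exec dbms_resource_manager.create_cdb_plan_directive(plan => 'MY_CDB_PLAN', pluggable_database => 'PDB1', shares => 1, utilization_limit => 50)
exec dbms_resource_manager.create_cdb_plan_directive(plan => 'MY_CDB_PLAN', pluggable_database => 'PDB2', shares => 1, utilization_limit => 100)
exec dbms_resource_manager.validate_pending_area
exec dbms_resource_manager.submit_pending_area
alter system set resource_manager_plan = 'MY_CDB_PLAN';
SQL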

This is an upper limit. I have 8 CPUs, so PDB1 will be allowed to run only 4 sessions on CPU at a time. Here is the result:

[Chart: CDB_RESOURCE_PLAN_1_PDB_1_SHARE_50_LIMIT]

What you see here is that, when more than the allowed percentage has been used, the sessions are scheduled out of CPU and wait on ‘resmgr: cpu quantum’. And interestingly, they seem to be stopped all at the same time:

[Chart: CDB_RESOURCE_PLAN_1_PDB_1_SHARE_50_LIMIT-2]

This makes sense because the suspended sessions may hold resources that are used by others. However, this pattern does not reproduce with every workload. More work, and probably future blog posts, are required on that.

Well, the goal here is to explain that resource_limit defines a maximum resource usage. Even if there is no other activity, you will not be able to use all CDB resources if you have a resource limit lower than 100%.

Shares

Shares are there for the opposite reason: to guarantee a minimum of resources to a PDB.
However, the unit is not the same. It cannot be the same: you cannot guarantee a percentage of CDB resources to one PDB because you don't know how many other PDBs you have. Let's say you have 4 PDBs and you want them to be equal; you want to define a minimum of 25% for each. But then, what happens when a new PDB is created? You would need to change all the 25% values to 20%. To avoid that, the minimum resources are allocated by shares: you give shares to each PDB and each gets a percentage of resources calculated as its shares divided by the total number of shares.

The result is that when there are not enough resources in the CDB to run all the sessions, the PDBs that use more than their share will wait. Here is an example where PDB1 has 2 shares and PDB2 has 1 share, which means that PDB1 is guaranteed at least 66% of the resources and PDB2 at least 33%:

CURRENT_TIMESTAMP PLAN PLUGGABLE_DATABASE DIRECTIVE_TYPE SHARES UTIL
------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN ORA$AUTOTASK AUTOTASK 90
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE DEFAULT_DIRECTIVE 1 100
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN PDB1 PDB 2 100
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100
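
The 2 shares for PDB1 could have been set with UPDATE_CDB_PLAN_DIRECTIVE, as sketched below against the same plan (the new_* parameter names are from DBMS_RESOURCE_MANAGER to the best of my knowledge):

sqlplus / as sysdba <<'SQL'
exec dbms_resource_manager.create_pending_area
exec dbms_resource_manager.update_cdb_plan_directive(plan => 'MY_CDB_PLAN', pluggable_database => 'PDB1', new_shares => 2, new_utilization_limit => 100)
exec dbms_resource_manager.validate_pending_area
exec dbms_resource_manager.submit_pending_area
SQL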

Here is the ASH on each PDB when I run 8 CPU-bound sessions on each. The system is saturated because I have only 8 CPUs.

[Chart: CDB_RESOURCE_PLAN_2_PDB_2_SHARE_100_LIMIT-PDB1]

[Chart: CDB_RESOURCE_PLAN_2_PDB_1_SHARE_100_LIMIT-PDB2]

Because of the difference in shares (2 shares for PDB1 and 1 share for PDB2), PDB1 was able to use more CPU than PDB2 while the system was saturated:
PDB1 was 72% on CPU and 22% waiting; PDB2 was 50% on CPU and 50% waiting.

[Chart: CDB_RESOURCE_PLAN_2_PDB_1_SHARE_100_LIMIT-SUM]

In order to illustrate what changes when the system is saturated, I ran 16 sessions on PDB1 and then, after 60 seconds, 4 sessions on PDB2.

Here is the activity of PDB1:

[Chart: CDB_RESOURCE_PLAN_SHARE_PDB1]

and PDB2:

[Chart: CDB_RESOURCE_PLAN_SHARE_PDB2]

At 22:14, PDB1 was able to use all available CPU because there is no utilization_limit and no other PDB has activity. The system is saturated, but by PDB1 only.
At 22:15, PDB2 also has activity, so the resource manager must limit PDB1 in order to give PDB2 resources proportional to its share. PDB1, with 2 shares, is guaranteed to be able to use 2/3 of the CPU; PDB2, with 1 share, is guaranteed 1/3 of it.
At 22:16, PDB1's activity has completed, so PDB2 can use more resources. Its 4 sessions are fewer than the available CPUs, so the system is no longer saturated and there is no wait.

What to remember?

Shares guarantee a minimum of resource utilization when the system is saturated.
Resource_limit sets a maximum of resource utilization, whether the system is saturated or not.


The post CDB resource plan: shares and utilization_limit appeared first on Blog dbi services.

truncate vs delete with constraints on tables

Tom Kyte - Mon, 2016-10-17 14:46
Tom, I haven't used constraints as much as this new project that I am on. Usually I truncate tables to clear them out as it frees up the space etc. Yet when I tried to truncate the tables in RF order, I received constraint errors and it took a ...
Categories: DBA Blogs

Question / Answer table structure design

Tom Kyte - Mon, 2016-10-17 14:46
Hi, We are building a facility that allows our customers to configure questions which there customers in turn can then answer. These questions can be configured in various ways which determines what is / is not a valid answer and importantly (to t...
Categories: DBA Blogs

Find all queries in application that use string literal

Tom Kyte - Mon, 2016-10-17 14:46
Hi , We have an application in Java that uses Oracle as the database. The current performance is not great, and the DBA have identified a few reasons, one of which is SQLs having string literals instead of bind variables. If we want to change t...
Categories: DBA Blogs

how to track modification of records on a table

Tom Kyte - Mon, 2016-10-17 14:46
Hi Tom, My existing functionality is having Triggers on about 15 tables for insert/update/delete. The modified rows are inserted into a target table. However, I have been asked to use a different functionality (good performance) to track th...
Categories: DBA Blogs

Insert performance issue

Tom Kyte - Mon, 2016-10-17 14:46
Hi Tom, We have an application to load data into database, It uses insert statements to insert data into tables. Each table contains nearly 220 colums. insert into table1(col1,col2,col3,col6,col7,col8................col220)values(1,2,3,6,7...
Categories: DBA Blogs

Data insertion strategy in normalized tables

Tom Kyte - Mon, 2016-10-17 14:46
Hi Team, We have many tables (master tables) having primary and foreign key relationships. These tables (normalized) contain static data (master data). Inserting data manually in these tables is a tedious task because if we insert data out of ord...
Categories: DBA Blogs

export each XML Message to separate .xml files from Databases

Tom Kyte - Mon, 2016-10-17 14:46
Hi , We store specific information as XML field in Oracle database I have specific Query which has results in xml's already as each filed stores as xml file. I wanted to export each field as XML file select REQEmployeeXML from User.Employe...
Categories: DBA Blogs

How to acheive the output in the aligned format ?

Tom Kyte - Mon, 2016-10-17 14:46
The following .sql file is called by the .sh shell script. connect username/password@sidname SET SERVEROUTPUT ON; SET LINESIZE 4600; #SET TRIMSPOOL ON; SET WRAP OFF; SET HEADING OFF; #SET TRIMOUT ON; SET TIMING ON; SET FEEDBACK ON; SET SPOO...
Categories: DBA Blogs
