Feed aggregator

Paco on Continuing Investment in PeopleSoft

Duncan Davies - Tue, 2016-11-08 05:00

There’s a great episode of Marc Weintraub’s PeopleSoft Talk interview series, featuring a 30-minute discussion with Paco Aubrejuan. There’ll be some great points for everyone to take away from it; however, here are my highlights:

On the current Support End date of December 2027:

There’s no plan on ending support for PeopleSoft then, it’s actually not that important a date. It happens to be the date right now that our lifetime support goes to … that probably won’t be the case and as we get closer to that date the plan is to move those dates out.

On Continued Investment:

For me investment and support are one and the same. It’s hard to support applications without continuing to enhance them as well. We’re planning to support and enhance them through that date.

On Fluid Adoption:

We have 200-300 customers who are live on it, many of whom aren’t live with just a few applications but with Fluid across the board. We’ve got to that hockey-stick period in terms of adoption where the majority of our customers who are doing projects or upgrades are deploying Fluid.

On replacing SES with Elasticsearch:

“it’s easier, cheaper, faster, better for customers to run Elasticsearch versus SES”

Plus there’s lots more on Cloud, Fluid Approvals and Selective Adoption. It’s well worth a watch.

 

 


Oracle Data Visualization Desktop: Star Schemas and Multiple Fact Sources

Rittman Mead Consulting - Tue, 2016-11-08 04:00

Once added to a project, the columns I specified with my custom SQL query now show up as one source.

Now that I have a custom data source with only the data I want to explore, I can easily create an analysis without having to sift through multiple data sources.

* A note about Data Visualization Desktop and caching: when using the above method of writing SQL to create a virtual table, the result is loaded into the cache. You should only use this method for very compact models. If the virtual table contains too much data, you can still add it as a data source, but it may be too big to cache, causing your columns not to load when creating analyses.

Although having one fact source is common in relational models, using multiple fact sources is sometimes unavoidable in dimensional modeling.

In my sample data, I have another schema called GCBC_SURVEYS, which contains two fact tables holding satisfaction scores for both customers and staff, and one dimension table for the organization that conducted the surveys.

For this example, I’m going to try to add each table as a data source manually first and attempt to join the two fact tables to my dimension table. When using this method, take care to change any key and ID columns from Measure to Attribute so they aren’t aggregated; Data Visualization Desktop sees a numeric datatype and assumes it’s a measure.

Once I've added all of the GCBC_SURVEYS tables as data sources, I’m going to load them into a project and create my joins using the source diagram. When I joined each fact table to the dimension table on SURVORG_ID, notice how DVD automatically created a join between my two fact tables.

This is not desirable because, due to the presence of a circular join, we run the risk of double counting. When I try to break the join between the two fact tables, DVD asks which data source I want to break conformance from.

When I select one of the fact sources, it will not only break the join between the two fact sources but also the join between the fact and the dimension table.

As of this writing, I have not found a way to only break joins between fact tables if they are using the same key to connect to the dimension table.

The only workaround I’ve found is to write a SQL statement that pulls the columns and joins into one virtual table. This way I can specify the joins without DVD creating one between the fact sources.
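For illustration, a virtual-table query along these lines joins each fact table to the dimension on SURVORG_ID without ever joining the two facts to each other (apart from SURVORG_ID, the table and column names here are hypothetical stand-ins for the GCBC_SURVEYS objects):

SELECT d.survorg_id,
       d.org_name,            -- hypothetical dimension attribute
       c.cust_sat_score,      -- hypothetical customer-survey measure
       s.staff_sat_score      -- hypothetical staff-survey measure
FROM   gcbc_surveys.survey_org       d
JOIN   gcbc_surveys.customer_survey  c ON c.survorg_id = d.survorg_id
JOIN   gcbc_surveys.staff_survey     s ON s.survorg_id = d.survorg_id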

Once I created my virtual table, I could use it to create a report and use both fact sources.

Although it can take some time to set up all the data sources you want to use for your model, Data Visualization Desktop packs some powerful features when it comes to dimensional modeling. The ability to use more than one fact source when needed adds another area of flexibility to DVD. Whether you are pulling in each table and then creating your model or writing SQL to create one virtual table, DVD has the flexibility to accommodate a variety of different scenarios.

Categories: BI & Warehousing

Troubleshooting Cloning Issues in EBS 12.1

Steven Chan - Tue, 2016-11-08 02:06

The Rapid Clone tool is used to create a working identical copy of an existing Oracle E-Business Suite 12.1 environment.  There are several ways of using Rapid Clone, including cloning a single node environment to another single node environment, adding additional nodes to an existing environment, and reducing nodes when cloning a multinode environment to a single node clone. 

Cloning EBS 12.1.3

The guide to using Rapid Clone in EBS 12.1 environments is:

When things go wrong

Given the variety of ways that this tool can be used, it is inevitable that some things might not go as expected.  When that happens, it's helpful to have a troubleshooting framework to narrow down the possible root causes and identify a solution. 

If you've encountered a problem with Rapid Clone, your first stop should be:

This excellent Note covers the most common questions and issues associated with Rapid Clone, such as:

  • Location of cloning log files
  • Missing prerequisite patches
  • Preparing the source system's database and application tiers
  • The expected layout of the cloning stage area for both the applications and database tiers
  • Execution of the cloning process
  • Inventory registration issues
  • Common issues when cloning the database tier
  • Known issues

Related Articles


Categories: APPS Blogs

Copy table data From One DB to Another DB

Tom Kyte - Mon, 2016-11-07 23:06
Hi Team, I need to Copy table data From One DB to Another DB. One approach I can recollect from one of the conversation asked in asktom.oracle.com is that create database link and simply execute - insert into local_table select * from table@...
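A minimal sketch of that approach (the link name, credentials, TNS alias and table names are all placeholders):

CREATE DATABASE LINK remote_db
  CONNECT TO app_user IDENTIFIED BY app_password
  USING 'remote_tns_alias';

INSERT INTO local_table
SELECT * FROM remote_table@remote_db;

COMMIT;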
Categories: DBA Blogs

export import

Tom Kyte - Mon, 2016-11-07 23:06
Hi team, I wanted to know that when i export entire schema from one database and import into another database then- Objects like - tables,functions,triggers,procedures,dblinks,synonyms,public synonyms and many more. Which objects export dur...
Categories: DBA Blogs

SQL not using index

Tom Kyte - Mon, 2016-11-07 23:06
Tom, There is an index on a table, but that index is not being used by SQL(even with hint). Can you please tell if something is wrong with the syntax? Below is the definition of the index on the table <b>XLA.XLA_TRANSACTION_ENTITIES</b> (PS:...
Categories: DBA Blogs

How to find the tables of a particular string

Tom Kyte - Mon, 2016-11-07 23:06
Hello there, I'm trying to find the name of the table and column in which it has the particular string. The below code searches for the string in the whole database and prints it out. However I wanted to use wild card because there are instance...
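One possible shape for such a search (a sketch; the schema name and search string are placeholders) loops over the data dictionary and tests each character column with a LIKE wildcard:

SET SERVEROUTPUT ON

DECLARE
  v_count PLS_INTEGER;
BEGIN
  FOR c IN (SELECT owner, table_name, column_name
            FROM   dba_tab_columns
            WHERE  owner = 'APP_OWNER'
            AND    data_type IN ('VARCHAR2', 'CHAR'))
  LOOP
    -- Count rows in this column that match the wildcard pattern
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || c.owner || '"."' || c.table_name ||
      '" WHERE "' || c.column_name || '" LIKE :search_value'
      INTO v_count USING '%some_string%';
    IF v_count > 0 THEN
      dbms_output.put_line(c.owner || '.' || c.table_name || '.' || c.column_name);
    END IF;
  END LOOP;
END;
/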
Categories: DBA Blogs

TEMPORARY TABLESPACE

Tom Kyte - Mon, 2016-11-07 23:06
HOW CAN I DETERMINE THAT WHAT SHOULD BE THE SIZE OF TEMPORARY TABLESPACE FOR ORACLE DATABASE? WHAT ARE THE MEASURES TO BE CONSIDERED FOR SIZING TEMPORARY TABLESPACE? HOW SHOULD I RESIZE THE TEMPORARY TABLESPACE SO THAT WE DO NOT ENCOUNTER ER...
Categories: DBA Blogs

Could not able to drop an empty tablespace. ORA-23515: materialized views and/or their indices exist in the tablespace

Tom Kyte - Mon, 2016-11-07 23:06
Hi Tom, I have been trying to drop a tablespace, but it is showing the below error. I've checked under dba_segments from any Mviews but I couldn't find anything. 1) SQL> drop tablespace GOLFX including contents and datafiles; drop tablespace ...
Categories: DBA Blogs

Drop Schema

Tom Kyte - Mon, 2016-11-07 23:06
Hi, If I drop a schema with DROP SCHEMA <name> RESTRICT, will Oracle also drop the USER associated with that schema?
Categories: DBA Blogs

should i use users tablespace?

Tom Kyte - Mon, 2016-11-07 23:06
hi tom. i am junior dba. my senior dba said to me today that i should not create users in USERS tablespace, if this users create objects, cause this objects will be created in USERS tablespace and that is somehow bad. so my question to him was why...
Categories: DBA Blogs

Oracle Public Cloud: create a database from command line

Yann Neuhaus - Mon, 2016-11-07 15:11

You love the Oracle Public Cloud with its simple Web interface? Great. But what if you want to automate a database creation from the command line?
Easy, with curl and the REST API.

JSON

First, you need to create a JSON file with all the information for your service.
Everything is documented: https://apicatalog.oraclecloud.com/ui/views/apicollection/oracle-public/database/1.1/serviceinstances

$ cat createrequestbody.json
 
{
  "description": "Mon Nov 7 21:03:39 CET 2016",
  "edition": "EE_HP",
  "level": "PAAS",
  "serviceName": "CDB122",
  "shape": "oc3",
  "subscriptionType": "MONTHLY",
  "version": "12.2.0.1",
  "vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
  "parameters": [
    {
      "type": "db",
      "usableStorage": "15",
      "adminPassword": "P4ss#ord",
      "sid": "CDB122",
      "pdbName": "PDB1",
      "failoverDatabase": "no",
      "backupDestination": "NONE"
    }
  ]
}

You can see that you have exactly the same information as from the GUI.

Create Instance

Then you run the following curl command (with the cacert.pem certificate in the current directory):

$ curl --include --request POST --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" --header "Content-Type:application/json" --data @createrequestbody.json https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:03:59 GMT

Here “opcoct” is my identity domain id. You find it in the header X-ID-TENANT-NAME and the URL.
The myuser@oracle.com:P4ss#ord is the user and password in the domain.

From the GUI you can see that the creation has started.


DBaaS instance information

Here is the information for the database service


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:07:52 GMT
content-length: 1244
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"num_nodes": 1,
"level": "PAAS",
"edition": "EE_HP",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"creation_job_id": "2738110",
"num_ip_reservations": 1,
"backup_destination": "NONE",
"failover_database": false,
"rac_database": false,
"sid": "CDB122",
"pdbName": "PDB1",
"demoPdb": "",
"listenerPort": 1521,
"timezone": "UTC",
"is_clone": false,
"clone_supported_version": "16.3.1",
"active_jobs": [
{
"active_job_operation": "start-db-compute-resources",
"active_job_id": 2738113,
"active_job_messages": [] },
{
"active_job_operation": "create-dbaas-service",
"active_job_id": 2738110,
"active_job_messages": [] }
],
"compute_site_name": "EM003_Z19",
"jaas_instances_using_service": ""
}

The status is ‘in progress’. Let’s look at the compute service.

Compute instance information

From the compute service, you can see whether an IP address has already been assigned:


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122/servers
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:08:35 GMT
content-length: 430
 
[{
"status": "Running",
"creation_job_id": "2738110",
"creation_time": "2016-11-07T20:03:59.524+0000",
"created_by": "myuser@oracle.com",
"shape": "oc3",
"sid": "CDB122",
"pdbName": "PDB1",
"listenerPort": 1521,
"connect_descriptor": "CDB122:1521\/PDB1",
"connect_descriptor_with_public_ip": "null:1521\/PDB1",
"initialPrimary": true,
"storageAllocated": 142336,
"reservedIP": "",
"hostname": "CDB122"
}]

No IP address yet. I have the job id (2738110) so that I can check it later.

Job information


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:09:08 GMT
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"message": ["Starting Compute resources..."],
"job_start_date": "Mon Nov 07 20:04:01 GMT 2016",
"job_status": "InProgress",
"job_operation": "create-dbaas-service",
"job_request_params": {
"edition": "EE_HP",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"count": "2",
"provisioningTimeout": "180",
"subscriptionType": "MONTHLY",
"createStorageContainerIfMissing": "false",
"dbConsolePort": "1158",
"listenerPort": "1521",
"serviceName": "CDB122",
"namespace": "dbaas",
"version": "12.2.0.1",
"timezone": "UTC",
"pdbName": "PDB1",
"level": "PAAS",
"tenant": "opcoct",
"serviceInstance": "CDB122",
"description": "Mon Nov 7 21:03:39 CET 2016",
"failoverDatabase": "false",
"emExpressPort": "5500",
"ncharset": "AL16UTF16",
"trial": "false",
"sid": "CDB122",
"noRollback": "false",
"operationName": "create-dbaas-service",
"goldenGate": "false",
"backupDestination": "NONE",
"ibkup": "false",
"charset": "AL32UTF8",
"serviceVersion": "12.2.0.1",
"shape": "oc3",
"identity_domain_id": "opcoct",
"serviceType": "dbaas",
"usableStorage": "15",
"disasterRecovery": "false",
"server_base_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/",
"computeSiteName": "EM003_Z19",
"isRac": "false"
}
}

REST Endpoint

Here my test is on the EMEA datacenter, which is why the URL starts with https://dbcs.emea.oraclecloud.com.
If you don’t know your endpoint, you can check on My Cloud Services, where you have all the information.

 

The post Oracle Public Cloud: create a database from command line appeared first on Blog dbi services.

Enqueue Bytes – Is that a Pun?

Pythian Group - Mon, 2016-11-07 13:53

Sometimes it is necessary to put on your uber-geek hat and start using cryptic bits of code to retrieve information from an Oracle database. Troubleshooting enqueue locking events in Oracle databases is one of the times some advanced SQL may be necessary.

You have likely used SQL similar to the following when troubleshooting Oracle enqueues, probably in connection with row lock contention.

SQL# l
  1  SELECT
  2     s.username username,
  3     s.sid,
  4     e.event event,
  5     e.p1text,
  6     e.p1,
  7     e.state
  8  FROM v$session s, v$session_wait e
  9  WHERE s.username IS NOT NULL
 10     AND s.sid = e.sid
 11     AND e.event LIKE '%enq:%'
 12* ORDER BY s.username, UPPER(e.event)
 
USERNAME    SID EVENT                          P1TEXT                P1 STATE
---------- ---- ------------------------------ ---------- ------------- ----------
JKSTILL      68 enq: TX - ROW LOCK contention  name|mode     1415053318 WAITING
 
1 ROW selected.

The value for P1 is not very useful as is; Oracle has encoded the type of enqueue and the requested mode into the column. When working with current events such as when selecting from v$session, it is simple to determine the type of lock and the mode requested by querying v$lock, such as in the following example:

  1* SELECT sid, TYPE, request, block FROM v$lock WHERE sid=68 AND request > 0
SQL# /
 
 SID TY    REQUEST      BLOCK
---- -- ---------- ----------
  68 TX          6          0
 
1 ROW selected.

Session 68 is waiting on a TX enqueue with requested lock mode of 6. Seasoned Oracle DBAs will recognize this as classic row lock contention.

Why bother to find out just which type of enqueue this is? There are many types of locks in Oracle, and they occur for differing reasons. The TX lock is interesting as it can occur not only in Mode 6 but also in Mode 4; Mode 4 refers to locks that involve unique keys, such as when two or more sessions try to insert the same value for a primary key. The following example shows just that:

SQL# @s
 
USERNAME    SID EVENT                          P1TEXT                P1 STATE
---------- ---- ------------------------------ ---------- ------------- ----------
JKSTILL      68 enq: TX - ROW LOCK contention  name|mode     1415053316 WAITING
 
 
1 ROW selected.
 
SQL# @l
 
 SID TY    REQUEST      BLOCK
---- -- ---------- ----------
  68 TX          4          0
 
1 ROW selected.

Knowing just which lock mode is requested is vital, as the troubleshooting for TX Mode 4 locks will be different from what is used to troubleshoot Mode 6.

Though we can find the lock name and mode information in v$lock, there is still value in being able to decipher that cryptic P1 column.

The ASH and AWR facilities do not include any historical information for the lock name and mode; the P1 column found in v$active_session_history and dba_hist_active_sess_history does not have a corresponding dba_hist_lock view. Any research done after an event has occurred does require decoding this information.

Deciphering v$session.p1

Oracle Support document 34566.1 is the enqueue reference note that provides the information needed to get the lock name and mode from the P1 column. As you will see, this information is a bit puzzling.

The rest of this article will focus on TX Mode 6 locks. The value shown for this lock and mode in the P1 column is always 1415053318. Following is the SQL recommended by Oracle:

 SELECT chr(to_char(bitand(p1,-16777216))/16777215)||
         chr(to_char(bitand(p1, 16711680))/65535) "Lock",
         to_char( bitand(p1, 65535) )    "Mode"
    FROM v$session_wait
   WHERE event = 'enqueue'

As I currently have some planned row lock contention in a test database we can run this query:

  1   SELECT chr(to_char(bitand(p1,-16777216))/16777215)||
  2           chr(to_char(bitand(p1, 16711680))/65535) "Lock",
  3           to_char( bitand(p1, 65535) )    "Mode"
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
Lo Mode
-- ----------------------------------------
TX 4

Probably it is not very clear why this SQL works. Let’s try and understand it.
(Note that a small change had to be made to the WHERE clause.)

Converting the P1 value to hex may be helpful:

1415053318 = 0x54580006

The two lower-order bytes represent the lock mode that has been requested. This can be seen here to be 0x06, which translates simply to decimal 6 (I can do this one in my head).

The next two bytes are also in hex and represent the two letters of the lock name.

0x54 = 84 = ‘T’
0x58 = 88 = ‘X’

Using string functions it is simple to extract the values from the hex string, convert them to numbers and retrieve the lock name and mode.

SQL# define p1 = 1415053318
 
SQL# l
  1  WITH hex AS (
  2     SELECT TRIM(to_char(&p1,'XXXXXXXXXXXXXXXX')) hexnum FROM dual
  3  ),
  4  hexbreak AS (
  5     SELECT hexnum
  6        , to_number(substr(hexnum,1,2),'XXXXXXXX') enq_name_byte_1
  7        , to_number(substr(hexnum,3,2),'XXXXXXXX') enq_name_byte_2
  8        , to_number(substr(hexnum,5),'XXXXXXXX') enq_mode
  9  FROM hex
 10  )
 11  SELECT
 12     hexnum
 13     , chr(enq_name_byte_1)
 14     || chr(enq_name_byte_2) enqueue_type
 15     , enq_mode
 16* FROM hexbreak
SQL# /
 
HEXNUM            EN   ENQ_MODE
----------------- -- ----------
54580006          TX          6

While that does work, my inner geek wants to investigate those bitmasks and find out why they work. Next are the bitmasks in decimal along with the hex equivalent.

-16777216 = 0xFFFFFFFFFF000000
 16777215 = 0xFFFFFF
 16711680 = 0xFF0000
    65535 = 0xFFFF

The bitand function is used to mask all unwanted bits to 0. The number is then divided by the value needed to remove all of the now-zeroed-out lower-order bytes.

The values being used as bitmasks are -16777216 and 16711680. The use of -16777216 does not seem to make sense. As the intent is to mask all but one byte, I would expect to find an FF surrounded by a number of zeroes. The bit mask of 16711680, however, looks fine.

Now let’s run the Oracle support query again, but modified to show just the integer values rather than converting them to ASCII.

 
  1  SELECT bitand(p1,-16777216)/16777215,
  2           bitand(p1, 16711680)/65535,
  3           bitand(p1, 65535)
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
BITAND(P1,-16777216)/16777215 BITAND(P1,16711680)/65535 BITAND(P1,65535)
----------------------------- ------------------------- ----------------
                    84.000005                88.0013428                6

Well, that is interesting. An implicit conversion is taking place with to_char() that is removing the decimal portion of these numbers. Is that being done with trunc(), round(), or something else? I don’t know the answer to that. What seems more important is just doing the math correctly.

There are a couple of things here that can be changed to make this work as expected.

A New BitMask

Let’s modify the first bitmask to something that seems more reasonable than -16777216. Let’s use this instead, as it masks only the single byte we need:

4278190080 = 0xFF000000

Let’s try it out:

SQL# l
  1  SELECT bitand(p1,4278190080)/16777215,
  2           bitand(p1, 16711680)/65535,
  3           bitand(p1, 65535)
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
BITAND(P1,4278190080)/16777215 BITAND(P1,16711680)/65535 BITAND(P1,65535)
------------------------------ ------------------------- ----------------
                     84.000005                88.0013428                6

While the new bitmask didn’t break anything, it does not appear to have helped either.

Off By One Error

The solution is to consider the divisors used to remove the lower order zeroes; each of them is off by one. That is easy enough to verify:

SQL# l
  1  SELECT bitand(p1,4278190080)/16777216,
  2           bitand(p1, 16711680)/65536,
  3           bitand(p1, 65535)
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
BITAND(P1,4278190080)/16777216 BITAND(P1,16711680)/65536 BITAND(P1,65535)
------------------------------ ------------------------- ----------------
                           84                        88                6

Ah, that did it! But what was the problem previously?

Old Divisor Values

The original divisors are off by 1, which does not completely remove the lower order values.

 16777215 = 0xFFFFFF
    65535 = 0xFFFF

Increasing each by one has the desired effect.

New Divisor Values
 16777216 = 0x1000000
    65536 = 0x10000
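Putting the corrected bitmask and divisors together gives a decode that returns clean values. This sketch runs against v$session_wait; the same expressions can be applied to the P1 column of v$active_session_history or dba_hist_active_sess_history when researching an event after the fact:

SELECT chr(bitand(p1, 4278190080)/16777216) ||
       chr(bitand(p1, 16711680)/65536)   lock_name,
       bitand(p1, 65535)                 lock_mode
  FROM v$session_wait
 WHERE event LIKE '%enq:%';
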
Conclusion

Those odd bitmasks have been in the back of my mind for some time, and today it seemed like a good idea to dig in and find out more about them. It isn’t too hard to imagine that in some cases the wrong values might be returned, leading to some long and unproductive troubleshooting sessions.

There is a demo script enqueue-bitand.sql containing much of the SQL found in this article. There is also a script awr-top-sqlid-events.sql that incorporates the enqueue lock decoding. This script could be made better than it is, so please issue a pull request if you have some useful modifications.

Categories: DBA Blogs

Reorg

Jonathan Lewis - Mon, 2016-11-07 11:31

A current question on the OTN database forum asks: “What’s the difference between object and tablespace reorganization?” Here’s an analogy to address the question.

I have three crates of Guinness in the boot (trunk) of my car, one crate has 4 bottles left, one has 7 bottles left and one has 2 bottles. I also have two cases of Louis Roederer Brut NV champagne, one case has 2 bottles left and one has only one. (I have two objects in my tablespace – one of type Beer, one of type Champagne – and my boot requires manual free space management.)

I move all the Guinness bottles into a single crate and all the champagne bottles into a single case. That’s a couple of “shrink space compact” calls – I’ve re-organised the objects to get all the bottles in each object close to each other, but the crates are still taking up space in the boot.

I take the two empty crates and the empty case out of the boot. That’s a couple of “resize” (or “shrink space” without “compact”) calls that free up space in the boot.

I now want to load a port barrel into the car, but it won’t fit until I slide the remaining beer crate and champagne case together at one side of the boot. That’s a couple of “move” commands that have reorganized the boot (tablespace) to make the free space usable. In SQL terms the analogy maps to something like the sketch below.
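Translated into commands (object names are placeholders, and the shrink calls assume the segment actually lives in an ASSM tablespace):

-- Object reorganisation: pack the rows (bottles) together ...
ALTER TABLE beer ENABLE ROW MOVEMENT;
ALTER TABLE beer SHRINK SPACE COMPACT;

-- ... then hand back the empty crates (release the space)
ALTER TABLE beer SHRINK SPACE;

-- Tablespace reorganisation: relocate the segments so the free space coalesces
ALTER TABLE beer MOVE;
ALTER TABLE champagne MOVE;

-- and, if desired, shrink the datafile (the boot) itself
ALTER DATABASE DATAFILE '/u01/oradata/boot01.dbf' RESIZE 100M;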

 


Moving objects from one tablespace to another

DBA Scripts and Articles - Mon, 2016-11-07 10:57

This script will help you to move all objects from one tablespace to another. Note that you will need downtime to move tables to the other tablespace with this script; the operation can be done online using the dbms_redefinition package, though. Moving objects A sketch of this kind of script appears below. You can also modify the script to move objects of a specific … Continue reading Moving objects from one tablespace to another
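A minimal sketch of this kind of script, with placeholder owner and tablespace names, moves the tables and then rebuilds the indexes that the moves leave UNUSABLE:

BEGIN
  -- Move every table owned by APP_OWNER out of OLD_TBS
  FOR t IN (SELECT table_name
            FROM   dba_tables
            WHERE  owner = 'APP_OWNER'
            AND    tablespace_name = 'OLD_TBS')
  LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE app_owner.' || t.table_name ||
                      ' MOVE TABLESPACE new_tbs';
  END LOOP;

  -- Rebuild indexes left UNUSABLE by the moves, relocating them too
  FOR i IN (SELECT owner, index_name
            FROM   dba_indexes
            WHERE  owner = 'APP_OWNER'
            AND    (tablespace_name = 'OLD_TBS' OR status = 'UNUSABLE'))
  LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX ' || i.owner || '.' || i.index_name ||
                      ' REBUILD TABLESPACE new_tbs';
  END LOOP;
END;
/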

The post Moving objects from one tablespace to another appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

New OA Framework 12.2.4 Update 13 Now Available

Steven Chan - Mon, 2016-11-07 10:39

Web-based content in Oracle E-Business Suite 12 runs on the Oracle Application Framework (OAF or "OA Framework") user interface libraries and infrastructure.   Since the release of Oracle E-Business Suite 12.2 in 2013, we have released several updates to Oracle Application Framework to fix performance, security, and stability issues. 

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Pack. "Cumulative" means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for EBS 12.2.4 is now available:


Where is the documentation for this update?

Instructions for installing this OAF Release Update Pack are here:

Who should apply this patch?

All EBS 12.2.4 users should apply this patch.  Future OAF patches for EBS 12.2.4 will require this patch as a prerequisite. 

What's new in this update?

Fixes are included for the following critical issues:

    • When Descriptive Flexfield (DFF) is rendered in an advanced table, some of its segments may not be displayed, and instead other columns of the table get repeated.
    • Unable to attach the Font message choice drop-downs such as Font, Color and Size to Message Rich Text Editor (MRTE).

This is a cumulative patch for EBS 12.2.4.  It includes updates previously released as Bundle 11 and patch 24007747.

This Bundle Patch requires the R12.ATG_PF.C.Delta.4 Release Update Pack as a mandatory prerequisite.

Related Articles


Categories: APPS Blogs

Oracle Named in the Constellation Research ShortList for Digital Experience (DX) Integrated Platforms

WebCenter Team - Mon, 2016-11-07 08:46

The Constellation ShortList™ presents vendors in different categories of the market relevant to early adopters. In addition, products included meet the threshold criteria for this category as determined by Constellation Research.

Constellation evaluates over 50 solutions categorized in this particular market. This Constellation ShortList is determined by client inquiries, partner conversations, customer references, vendor selection projects, market share and internal research.

About the Digital Experience (DX) Integrated Platforms Constellation ShortList™
Digital experience (DX) integrated platforms unite digital and analog customer-facing disciplines. With the rise of the cloud and the consolidation of vendor offerings, these platforms meet a growing demand for offerings that unite customer-facing analog and digital services. Moreover, the growing need to support digital transformation efforts has led to a confluence of ad tech, artificial intelligence, commerce, content management, customer experience solutions, data management platforms, internet of things (IoT), marketing automation, mass personalization at scale, and portals—all of which must be managed alongside analog systems.

While no complete solution exists today, this Constellation ShortList represents Constellation’s attempt to catalogue DX integrated platforms as they emerge and become business model platforms for brands and organizations.

Source: Constellation Research, Inc., “Constellation Research ShortList™ for Digital Experience (DX) Integrated Platforms,” R “Ray” Wang, October 2016.

Oracle was included in the Constellation Research ShortList™ for Digital Experience (DX) Integrated Platforms.

The digital age has unleashed limitless potential. Skyrocketing connections are forever transforming how we work, play and live - offering businesses unprecedented opportunities for innovation, growth and value creation. To realize these opportunities, however, it is vital that today's enterprises not only develop digital tools but also put digital experience at the center of their business - empowering every aspect of process, customer experience and innovation.

Squeezing maximum advantage from the accelerating connections between organizations, people and things is crucial to success. Within today's digital connections hide the solutions to your most urgent business challenges and the potential to build seamless, interconnected digital experiences that empower employees and amaze customers.

To thrive in this emerging world, you need to go beyond bolting on new digital tools. Instead, put digital connections and platforms at the heart of your enterprise to develop new ways of working, drive innovation and maximize value in every interaction. Oracle provides an integrated digital experience platform that connects every aspect of your processes, customer interactions and innovation with Content and Experience Management. Content and Experience Management is about driving the complete digital experience from mobile and social collaboration for content creation, reviews and decision making on content, to publication of the content across multiple channels – web, social and mobile. With Content and Experience Management, Oracle is providing a digital experience platform to enable organizations to meet demands, engage stakeholders and move business forward.

Oracle was pleased to be included in the Constellation Research ShortList™ for Digital Experience (DX) Integrated Platforms. You can read the full report here.

Filter Subquery

Jonathan Lewis - Mon, 2016-11-07 07:04

There’s a current thread on the OTN database forum showing an execution plan with a slightly unusual feature. It looks like this:

    -----------------------------------------------------------------------------------------------------------------------------------  
    | Id  | Operation                                |  Name                          | Rows  | Bytes |TempSpc| Cost  | Pstart| Pstop |  
    -----------------------------------------------------------------------------------------------------------------------------------  
    |   0 | SELECT STATEMENT                         |                                |   137K|    27M|       |   134K|       |       |  
    |*  1 |  HASH JOIN                               |                                |   137K|    27M|    27M|   134K|       |       |  
    |*  2 |   HASH JOIN                              |                                |   140K|    26M|  1293M|   133K|       |       |  
    |   3 |    TABLE ACCESS FULL                     | PDTCOST_CHARGE_MAP             |    30M|   948M|       | 24044 |       |       |  
    |*  4 |    HASH JOIN                             |                                |    11M|  1837M|   810M| 57206 |       |       |  
    |   5 |     INDEX FAST FULL SCAN                 | PDTCOST_BILL_INV_TRACK         |    29M|   475M|       | 16107 |       |       |  
    |*  6 |     TABLE ACCESS BY LOCAL INDEX ROWID    | BILL_INVOICE_DETAIL            |  5840K|   478M|       |     2 |       |       |  
    |   7 |      NESTED LOOPS                        |                                |    11M|  1634M|       |     6 |       |       |  
    |   8 |       NESTED LOOPS                       |                                |     2 |   120 |       |     3 |       |       |  
    |   9 |        TABLE ACCESS FULL                 | JDL_WORK_LIST                  |     2 |    96 |       |     2 |       |       |  
    |  10 |        PARTITION RANGE ITERATOR          |                                |       |       |       |       |   KEY |   KEY |  
    |  11 |         TABLE ACCESS BY LOCAL INDEX ROWID| BILL_INVOICE                   |     1 |    12 |       |     1 |   KEY |   KEY |  
    |* 12 |          INDEX UNIQUE SCAN               | BILL_INVOICE_XSUM_BILL_REF_NO  |     1 |       |       |       |   KEY |   KEY |  
    |  13 |       PARTITION RANGE ITERATOR           |                                |       |       |       |       |   KEY |   KEY |  
    |* 14 |        INDEX RANGE SCAN                  | BILL_INVOICE_DETAIL_PK         |    32 |       |       |     1 |   KEY |   KEY |  
    |  15 |    SORT AGGREGATE                        |                                |     1 |     8 |       |       |       |       |  
    |  16 |     INDEX FAST FULL SCAN                 | PDTCOST_CHARGE_MAP_PK          |    30M|   229M|       | 17498 |       |       |  
    |  17 |   INDEX FAST FULL SCAN                   | SERVICE_EMF_CONF_SUBSCR        |  1660K|    19M|       |   575 |       |       |  
    -----------------------------------------------------------------------------------------------------------------------------------  
    

Spot the oddity? If not, here’s a collapsed version of the plan that makes it easier to see – if you were viewing this plan through OEM or one of the other GUI interfaces to execution plans you’d probably be able to do this by clicking on some sort of “+/-” symbol by operation 4:

    -----------------------------------------------------------------------------------------------------------------------------------  
    | Id  | Operation                                |  Name                          | Rows  | Bytes |TempSpc| Cost  | Pstart| Pstop |  
    -----------------------------------------------------------------------------------------------------------------------------------  
    |   0 | SELECT STATEMENT                         |                                |   137K|    27M|       |   134K|       |       |  
    |*  1 |  HASH JOIN                               |                                |   137K|    27M|    27M|   134K|       |       |  
    |*  2 |   HASH JOIN                              |                                |   140K|    26M|  1293M|   133K|       |       |  
    |   3 |    TABLE ACCESS FULL                     | PDTCOST_CHARGE_MAP             |    30M|   948M|       | 24044 |       |       |  
    |*  4 |    HASH JOIN                             |                                |    11M|  1837M|   810M| 57206 |       |       |  
    |  15 |    SORT AGGREGATE                        |                                |     1 |     8 |       |       |       |       |  
    |  16 |     INDEX FAST FULL SCAN                 | PDTCOST_CHARGE_MAP_PK          |    30M|   229M|       | 17498 |       |       |  
    |  17 |   INDEX FAST FULL SCAN                   | SERVICE_EMF_CONF_SUBSCR        |  1660K|    19M|       |   575 |       |       |  
    -----------------------------------------------------------------------------------------------------------------------------------  
    

How often have you seen a HASH JOIN (operation 2) with three child operations (3, 4, 15)?

It’s not a formatting error – but since I’ve shown neither the Predicate section of the report nor the original query it’s a little difficult to recognise what’s going on, so here’s the critical part of the original WHERE clause:

    AND     P.TRACKING_ID      = PCM.TRACKING_ID  
    AND     P.TRACKING_ID_SERV = PCM.TRACKING_ID_SERV  
    AND     (   (P.BILLING_INACTIVE_DT IS NULL AND PCM.INACTIVE_DT IS NULL)  
             OR (PCM.ACTIVE_DT = (SELECT MAX(ACTIVE_DT) FROM PDTCOST_CHARGE_MAP PCM1 ))
            )
    ;  
    

Operation 4 produces a set of rows derived by joining table P (an alias for pdtcost) to a couple of other tables, and operation 2 joins this to PCM (an alias for pdtcost_charge_map) with a simple two-column equality and then introduces a pair of problems: first an “OR SUBQUERY” construct, secondly a predicate that requires data from both tables to be examined before any more rows can be discarded.

Just to clarify the performance implication of this combination of predicates:

If we start from pdtcost (p):

  • If the billing_inactive_dt is null we don’t discard the row because it satisfies a predicate, and we need to check the matching pcm.inactive_dt.
  • If the billing_inactive_dt is NOT null we still can’t discard the row because the matching pcm.active_dt may satisfy the subquery predicate.
  • Whatever the state of billing_inactive_dt we have to find the matching pcm row(s).

Starting from pdtcost_charge_map (pcm):

  • We can’t unnest the subquery and use it to drive into p (because of the OR), so we have to scan pcm to apply the subquery.
  • If the active_dt satisfies the subquery we have to find the matching p row.
  • If the active_dt doesn’t satisfy the subquery but pcm.inactive_dt is null we still have to find the matching p row to check the billing_inactive_dt.
  • The only time we don’t need to probe p for a match is if the active_dt doesn’t match the subquery and the inactive_dt is not null – which tells us that for a very specific data pattern we have the potential for a (relatively) efficient access path; however this path would require the optimizer to test one part of an OR predicate at one operation in the plan and the second part of the OR predicate at a different operation of the plan, and it’s not programmed to do that, so the entire compound predicate test is always run late.

Returning to the question of interpreting this plan with three child operations for a hash join – what does it mean and how does it work? In effect the plan is the wrong shape – it has concealed a filter operation. As the join between the two tables takes place the rows are tested against the simple filter condition – each row that satisfies the predicate is passed to the next operation of the plan; for any row that doesn’t satisfy the simple filter predicate the subquery is executed to provide a check against active_dt (fortunately, since this is a “constant” subquery, we benefit enormously from scalar subquery caching and the subquery will run at most once in the lifetime of the whole query).

The plan would probably be easier to understand if it looked like this (which may actually be how it would have looked in Oracle 8i):

    -----------------------------------------------------------------------------------------------------------------------------------  
    | Id  | Operation                                |  Name                          | Rows  | Bytes |TempSpc| Cost  | Pstart| Pstop |  
    -----------------------------------------------------------------------------------------------------------------------------------  
    |   0 | SELECT STATEMENT                         |                                |   137K|    27M|       |   134K|       |       |  
    |*  1 |  HASH JOIN                               |                                |   137K|    27M|    27M|   134K|       |       |  
    |*  2a|   FILTER                                 |                                |   140K|    26M|  1293M|   133K|       |       |  
    |*  2b|    HASH JOIN                             |                                |   140K|    26M|  1293M|   133K|       |       |  
    |   3 |     TABLE ACCESS FULL                    | PDTCOST_CHARGE_MAP             |    30M|   948M|       | 24044 |       |       |  
    |*  4 |     HASH JOIN                            |                                |    11M|  1837M|   810M| 57206 |       |       |  
    |  15 |    SORT AGGREGATE                        |                                |     1 |     8 |       |       |       |       |  
    |  16 |     INDEX FAST FULL SCAN                 | PDTCOST_CHARGE_MAP_PK          |    30M|   229M|       | 17498 |       |       |  
    |  17 |   INDEX FAST FULL SCAN                   | SERVICE_EMF_CONF_SUBSCR        |  1660K|    19M|       |   575 |       |       |  
    -----------------------------------------------------------------------------------------------------------------------------------  
     
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2a - filter("P"."BILLING_INACTIVE_DT" IS NULL AND "PCM"."INACTIVE_DT" IS NULL
                   OR "PCM"."ACTIVE_DT"= (SELECT MAX("ACTIVE_DT") FROM "PDTCOST_CHARGE_MAP" "PCM1"))
    
       2b - access("P"."TRACKING_ID"="PCM"."TRACKING_ID" AND
                   "P"."TRACKING_ID_SERV"="PCM"."TRACKING_ID_SERV")
    
    

This modified plan makes it clear that the hash join (2b) is followed by execution of the filter (2a) subquery (though we can safely infer that the subquery runs only for join rows where at least one of p.billing_inactive_dt or pcm.inactive_dt is not null).

You might wonder whether Oracle actually runs the subquery once at a very early point in the query so that it can, effectively, turn the subquery predicate into “active_dt = {derived constant}” – it’s fairly easy to show that this isn’t the case. Perhaps the most obvious way to do this is to run the query with rowsource execution stats enabled after setting billing_inactive_dt and inactive_dt to null for every row in their respective tables – because if you do that the subquery won’t be run at all.

If you want to experiment with this problem, here’s some code to model it:

    
    drop table pdtcost purge;
    drop table pdtcost_charge_map purge;
    
    create table pdtcost
    nologging
    as
    with generator as (
            select
                    rownum id
            from dual
            connect by
                    level <= 1e4
    )
    select
            mod(rownum,100)                 filter_col,
            rownum                          tracking_id,
            rownum                          tracking_id_serv,
            decode(
                    mod(rownum,97),
                    0 , trunc(sysdate),
                        null
            )                               billing_inactive_dt,
    /*
            to_date(null)                   billing_inactive_dt,
    */
            lpad('x',100,'x')               padding
    from
            generator       v2
    where
            rownum <= 1e4
    ;
    
    alter table pdtcost add constraint pdt_pk primary key(tracking_id, tracking_id_serv);
    
    create table pdtcost_charge_map
    nologging
    as
    with generator as (
            select
                    rownum id
            from dual 
            connect by 
                    level <= 1e4
    )
    select
            rownum                          tracking_id,
            rownum                          tracking_id_serv,
            decode(
                    mod(rownum,93), 
                    0 , trunc(sysdate),
                        null
            )                               inactive_dt,
    /*
            to_date(null)                   inactive_dt,    
    */
            trunc(sysdate + dbms_random.value(-100,0))      active_dt,
            lpad('x',100,'x')               padding
    from
            generator       v2
    where
            rownum <= 1e4
    ;
    
    alter table pdtcost_charge_map add constraint pcm_pk primary key(tracking_id, tracking_id_serv, active_dt);
    -- create index pcm_act_dt on pdtcost_charge_map(active_dt);
    
    -- gather basic table stats if your version needs to.
    
    select
            p.billing_inactive_dt,
            pcm.inactive_dt,
            pcm.active_dt
    from
            pdtcost                 p,
            pdtcost_charge_map      pcm
    where
            p.filter_col = 0
    and     p.tracking_id      = pcm.tracking_id
    and     p.tracking_id_serv = pcm.tracking_id_serv
    and     (   (p.billing_inactive_dt is null and pcm.inactive_dt is null)
             or (pcm.active_dt = (select max(active_dt) from pdtcost_charge_map pcm1 ))
            )
    ;
    
    

The original question started with a table of 30 million rows and a result set of only 450 rows – suggesting that there ought to be a lot of scope for finding ways to eliminate data early. One possibility, assuming the appropriate indexes exist (which is why I have defined, but commented out, the pcm_act_dt index above), is to convert this query into a union all (taking care to eliminate duplication in the result set) in the following way:

    select
            /*+ leading(p pcm) use_nl(pcm) */
            p.billing_inactive_dt,
            pcm.inactive_dt,
            pcm.active_dt
    from
            pdtcost                 p,
            pdtcost_charge_map      pcm
    where
            p.filter_col = 0
    and     pcm.tracking_id      = p.tracking_id
    and     pcm.tracking_id_serv = p.tracking_id_serv
    and     (p.billing_inactive_dt is null and pcm.inactive_dt is null)
    union all
    select
            /*+ leading(p pcm) use_nl(pcm) */
            p.billing_inactive_dt,
            pcm.inactive_dt,
            pcm.active_dt
    from
            pdtcost                 p,
            pdtcost_charge_map      pcm
    where   
            p.filter_col = 0
    and     pcm.tracking_id      = p.tracking_id   
    and     pcm.tracking_id_serv = p.tracking_id_serv   
    and     (p.billing_inactive_dt is not null or pcm.inactive_dt is not null)
    and     pcm.active_dt = (select /*+ unnest */ max(active_dt) from pdtcost_charge_map pcm1)
    ;
    
    

Here is the resulting execution plan when the pcm_act_dt index exists. I had to hint the table order and join mechanism because my tables were rather small and the selectivity relatively high – it’s probably safe to assume that selectivities are much better on the original data set and that a path like this is more likely to be chosen unhinted (the full tablescan on pdtcost is irrelevant in the context of the demonstration):

    
    ---------------------------------------------------------------------------------------------------------------
    | Id  | Operation                      | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    ---------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT               |                    |      1 |        |     98 |00:00:00.06 |     657 |
    |   1 |  UNION-ALL                     |                    |      1 |        |     98 |00:00:00.06 |     657 |
    |   2 |   NESTED LOOPS                 |                    |      1 |     99 |     98 |00:00:00.05 |     386 |
    |   3 |    NESTED LOOPS                |                    |      1 |     99 |     99 |00:00:00.05 |     287 |
    |*  4 |     TABLE ACCESS FULL          | PDTCOST            |      1 |     99 |     99 |00:00:00.01 |     173 |
    |*  5 |     INDEX RANGE SCAN           | PCM_PK             |     99 |      1 |     99 |00:00:00.01 |     114 |
    |*  6 |    TABLE ACCESS BY INDEX ROWID | PDTCOST_CHARGE_MAP |     99 |      1 |     98 |00:00:00.01 |      99 |
    |   7 |   NESTED LOOPS                 |                    |      1 |      2 |      0 |00:00:00.01 |     271 |
    |   8 |    NESTED LOOPS                |                    |      1 |    100 |      1 |00:00:00.01 |     270 |
    |*  9 |     TABLE ACCESS FULL          | PDTCOST            |      1 |    100 |    100 |00:00:00.01 |     166 |
    |* 10 |     INDEX UNIQUE SCAN          | PCM_PK             |    100 |      1 |      1 |00:00:00.01 |     104 |
    |  11 |      SORT AGGREGATE            |                    |      1 |      1 |      1 |00:00:00.01 |       2 |
    |  12 |       INDEX FULL SCAN (MIN/MAX)| PCM_ACT_DT         |      1 |      1 |      1 |00:00:00.01 |       2 |
    |* 13 |    TABLE ACCESS BY INDEX ROWID | PDTCOST_CHARGE_MAP |      1 |      1 |      0 |00:00:00.01 |       1 |
    ---------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       4 - filter(("P"."FILTER_COL"=0 AND "P"."BILLING_INACTIVE_DT" IS NULL))
       5 - access("PCM"."TRACKING_ID"="P"."TRACKING_ID" AND
                  "PCM"."TRACKING_ID_SERV"="P"."TRACKING_ID_SERV")
       6 - filter("PCM"."INACTIVE_DT" IS NULL)
       9 - filter("P"."FILTER_COL"=0)
      10 - access("PCM"."TRACKING_ID"="P"."TRACKING_ID" AND
                  "PCM"."TRACKING_ID_SERV"="P"."TRACKING_ID_SERV" AND "PCM"."ACTIVE_DT"=)
      13 - filter(("P"."BILLING_INACTIVE_DT" IS NOT NULL OR "PCM"."INACTIVE_DT" IS NOT NULL))
    
    
    

You’ll notice that this plan also displays an interesting little quirk – at operation 10 we can see the index unique scan of index pcm_pk that occurs once for each row returned from pdtcost; but each unique scan is preceded by a call to run the subquery (except that scalar subquery caching means the subquery runs only once in total) to supply a value for active_dt that can be used in the unique scan. (In the absence of the pcm_act_dt index the full scan min/max would be a fast full scan of the primary key.)

With a little luck the OP will be able to apply the same strategy to his query, though it may be a little harder to get the desired plan since the original query includes 6 tables; but the principle doesn’t change.

Footnote:

Various people on the OTN thread have pointed out that there are some odd details about the optimizer’s cardinality predictions, which may mean that part of the problem is simply an issue of misleading (possibly out of date) object statistics. It’s possible that with better estimates the optimizer may change the plan so much that even the strategy of getting all the rows from pdtcost_charge_map related to the rows acquired from pdtcost and then eliminating based on a late filter may be efficient enough for the OP. By changing the data volume and distribution in my test case one of the plans (which predicted 100 rows from 100,000) was as follows:

    
    -------------------------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT             |                    |      1 |        |     98 |00:00:01.58 |    2307 |
    |   1 |  NESTED LOOPS                |                    |      1 |     98 |     98 |00:00:01.58 |    2307 |
    |   2 |   NESTED LOOPS               |                    |      1 |    100 |    100 |00:00:00.01 |    1814 |
    |*  3 |    TABLE ACCESS FULL         | PDTCOST            |      1 |    100 |    100 |00:00:00.01 |    1699 |
    |*  4 |    INDEX RANGE SCAN          | PCM_PK             |    100 |      1 |    100 |00:00:00.01 |     115 |
    |*  5 |   TABLE ACCESS BY INDEX ROWID| PDTCOST_CHARGE_MAP |    100 |      1 |     98 |00:00:01.57 |     493 |
    |   6 |    SORT AGGREGATE            |                    |      1 |      1 |      1 |00:00:01.57 |     393 |
    |   7 |     INDEX FAST FULL SCAN     | PCM_PK             |      1 |    100K|    100K|00:00:01.02 |     393 |
    -------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("P"."FILTER_COL"=0)
       4 - access("P"."TRACKING_ID"="PCM"."TRACKING_ID" AND
                  "P"."TRACKING_ID_SERV"="PCM"."TRACKING_ID_SERV")
       5 - filter((("P"."BILLING_INACTIVE_DT" IS NULL AND "PCM"."INACTIVE_DT" IS NULL) OR
                  "PCM"."ACTIVE_DT"=))
    

    The Complete Guide to the Oracle INSERT INTO Statement

    Complete IT Professional - Mon, 2016-11-07 05:00
    The Oracle INSERT INTO statement is one of the most popular commands in Oracle, and it’s one of the first commands you learn to use. Read how to insert data and how to use the full functionality of the INSERT statement in this guide. What Is the INSERT INTO Oracle Statement? The Oracle INSERT INTO […]
    Categories: Development

    Flashback Database -- 1 : Introduction to Operations

    Hemant K Chitale - Mon, 2016-11-07 04:24
    Continuing from my previous post ....

    In 11gR2, ALTER DATABASE FLASHBACK ON and ALTER DATABASE FLASHBACK OFF can be executed while the database is OPEN. Setting FLASHBACK OFF results in the deletion of all existing Flashback Files.
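    As a quick reference (not part of the test output below), the current state of the feature is visible in V$DATABASE; the possible values are YES, NO and RESTORE POINT ONLY :

    select flashback_on from v$database;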

    Here is some information that I have pulled from my test database environment :

    SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

    Session altered.

    SQL>
    SQL> select oldest_flashback_scn, oldest_flashback_time,
    2 retention_target, flashback_size
    3 from v$flashback_database_log;

    OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET FLASHBACK_SIZE
    -------------------- ------------------ ---------------- --------------
                 7140652 07-NOV-16 10:53:30              180      314572800

    SQL> select sysdate from dual;

    SYSDATE
    ------------------
    07-NOV-16 17:46:54

    SQL>
    SQL> select begin_time, end_time, flashback_data, estimated_flashback_size
    2 from v$flashback_database_stat
    3 order by begin_time;

    BEGIN_TIME         END_TIME           FLASHBACK_DATA ESTIMATED_FLASHBACK_SIZE
    ------------------ ------------------ -------------- ------------------------
    06-NOV-16 18:56:28 06-NOV-16 21:20:55      202129408                251873280
    06-NOV-16 21:20:55 07-NOV-16 09:53:26      107102208                 62054400
    07-NOV-16 09:53:26 07-NOV-16 10:53:30       51609600                 67866624
    07-NOV-16 10:53:30 07-NOV-16 13:14:45       10682368                 60887040
    07-NOV-16 13:14:45 07-NOV-16 14:14:51       66002944                 67986432
    07-NOV-16 14:14:51 07-NOV-16 15:14:57       10018816                 66112512
    07-NOV-16 15:14:57 07-NOV-16 16:15:01       10190848                 64441344
    07-NOV-16 16:15:01 07-NOV-16 17:15:05       53559296                 68751360
    07-NOV-16 17:15:05 07-NOV-16 17:47:57       52862976                        0

    9 rows selected.

    SQL>
    SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
    2 from v$flashback_database_logfile
    3 order by sequence#;

          LOG#  SEQUENCE#    SIZE_MB FIRST_TIME
    ---------- ---------- ---------- ------------------
             6          6         50 07-NOV-16 09:00:46
             1          7         50 07-NOV-16 10:36:01
             2          8         50 07-NOV-16 13:13:22
             3          9         50 07-NOV-16 13:43:28
             4         10         50 07-NOV-16 16:43:49
             5         11         50 07-NOV-16 17:44:42

    6 rows selected.

    SQL>


    Firstly, we note (as in my previous blog post) that the available flashback window, from 10:53am to 5:46pm (almost 7 hours), exceeds the Flashback Retention Target of 3 hours (180 minutes). Apparently, the Flashback Logfiles for sequences 1 to 5 have already been purged (but I find no entries for the deletions in the alert log).
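    The window-versus-target comparison can be computed directly from V$FLASHBACK_DATABASE_LOG. A small sketch (RETENTION_TARGET is expressed in minutes) :

    select  oldest_flashback_time,
            sysdate                                            as now,
            round((sysdate - oldest_flashback_time) * 24 * 60) as window_minutes,
            retention_target                                   as target_minutes
    from    v$flashback_database_log;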

    Note how the "earliest time" does not match across the three views. OLDEST_FLASHBACK_TIME is 10:53am, although V$FLASHBACK_DATABASE_STAT reports statistics from the previous day (I had enabled Flashback in the database at 18:56:27 on 06-Nov), while V$FLASHBACK_DATABASE_LOGFILE shows an existing logfile covering 09:00am to 10:36am.

    Let me do a Flashback. I must rely on the V$FLASHBACK_DATABASE_LOG view to know that I cannot flashback to earlier than 10:53am. The target I use below, TRUNC(SYSDATE)+11/24, is 11:00am today, which falls within that window.

    SQL> select open_mode from v$database;

    OPEN_MODE
    --------------------
    READ WRITE

    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.

    Total System Global Area 1068937216 bytes
    Fixed Size 2260088 bytes
    Variable Size 750781320 bytes
    Database Buffers 310378496 bytes
    Redo Buffers 5517312 bytes
    Database mounted.
    SQL>
    SQL> flashback database to timestamp trunc(sysdate)+11/24;

    Flashback complete.

    SQL>
    SQL> alter database open read only; --- to verify data if necessary

    Database altered.

    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup;
    ORACLE instance started.

    Total System Global Area 1068937216 bytes
    Fixed Size 2260088 bytes
    Variable Size 750781320 bytes
    Database Buffers 310378496 bytes
    Redo Buffers 5517312 bytes
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


    SQL> alter database open resetlogs;

    Database altered.

    SQL>


    A FLASHBACK DATABASE requires an OPEN RESETLOGS before the database can be opened READ WRITE again.
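    Condensed from the session above, the complete command sequence is :

    shutdown immediate
    startup mount
    flashback database to timestamp trunc(sysdate)+11/24;
    alter database open read only;    -- optional : verify the data first
    shutdown immediate
    startup mount
    alter database open resetlogs;    -- mandatory to resume READ WRITE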

    Let's look at the alert log for messages about the Flashback operation itself :

    Mon Nov 07 17:56:36 2016
    flashback database to timestamp trunc(sysdate)+11/24
    Flashback Restore Start
    Flashback Restore Complete
    Flashback Media Recovery Start
    started logmerger process
    Parallel Media Recovery started with 2 slaves
    Flashback Media Recovery Log /u02/FRA/ORCL/archivelog/2016_11_07/o1_mf_1_81_d2052ofj_.arc
    Mon Nov 07 17:56:43 2016
    Incomplete Recovery applied until change 7141255 time 11/07/2016 11:00:01
    Flashback Media Recovery Complete
    Completed: flashback database to timestamp trunc(sysdate)+11/24
    Mon Nov 07 17:57:08 2016
    alter database open read only


    What happens if I disable and re-enable Flashback ?

    SQL> select open_mode from v$database;

    OPEN_MODE
    --------------------
    READ WRITE

    SQL> alter database flashback off;

    Database altered.

    SQL>

    From the alert log :
    Mon Nov 07 18:03:02 2016
    alter database flashback off
    Stopping background process RVWR
    Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32vjv_.flb
    Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32xq0_.flb
    Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3bhkx_.flb
    Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3dd8r_.flb
    Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y6r6bf_.flb
    Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1ycky3v_.flb
    Flashback Database Disabled
    Completed: alter database flashback off

    SQL> select open_mode from v$database;

    OPEN_MODE
    --------------------
    READ WRITE

    SQL> alter database flashback on;

    Database altered.

    SQL>

    From the alert log :
    Mon Nov 07 18:04:21 2016
    alter database flashback on
    Starting background process RVWR
    Mon Nov 07 18:04:21 2016
    RVWR started with pid=30, OS id=12621
    Flashback Database Enabled at SCN 7142426
    Completed: alter database flashback on

    From the FRA :
    [oracle@ora11204 flashback]$ pwd
    /u02/FRA/ORCL/flashback
    [oracle@ora11204 flashback]$ ls -ltr
    total 102416
    -rw-rw----. 1 oracle oracle 52436992 Nov 7 18:04 o1_mf_d20nf7wc_.flb
    -rw-rw----. 1 oracle oracle 52436992 Nov 7 18:05 o1_mf_d20nf5nz_.flb
    [oracle@ora11204 flashback]$

    SQL> alter session set nls_date_Format='DD-MON-RR HH24:MI:SS';

    Session altered.

    SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
    2 from v$flashback_database_logfile
    3 order by sequence#;

          LOG#  SEQUENCE#    SIZE_MB FIRST_TIME
    ---------- ---------- ---------- ------------------
             2          1         50
             1          1         50 07-NOV-16 18:04:22

    SQL>



    So, I can set FLASHBACK OFF and ON while the database is OPEN. (But I cannot execute a FLASHBACK DATABASE TO ... with the database OPEN; the database must be in MOUNT state, as demonstrated above.)
    .
    .
    .

    Categories: DBA Blogs
