Feed aggregator

Oracle 12cR2 multitenant: Local UNDO

Yann Neuhaus - Tue, 2016-11-08 12:58

Pluggable Databases are supposed to be isolated, containing all of their user data and metadata. This is the definition of the dictionary separation that comes with the multitenant architecture: only system data and metadata are at CDB level; user data and metadata are in separate tablespaces belonging to the PDB. And this is what makes unplug/plug possible: because PDB tablespaces contain everything, you can transport their datafiles from one CDB to another.
However, if PDBs are so isolated, can you explain why:

  • You cannot flashback a PDB?
  • You need an auxiliary instance for PDB Point-In-Time recovery?
  • You need to put the PDB read-only before cloning it?


There is something that is not contained in your PDB but sits at CDB level, and it contains user data. The UNDO tablespace is shared:

[Image: CaptureLocalUndo001]
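
If you want to check it yourself, a quick query against the CDB views shows which containers own an UNDO tablespace; in shared undo mode you will typically see it only in CDB$ROOT (a minimal sketch, to be run in the root as a privileged common user):

-- list UNDO tablespaces per container (run in CDB$ROOT)
SELECT c.name AS container, t.tablespace_name
FROM cdb_tablespaces t
JOIN v$containers c ON c.con_id = t.con_id
WHERE t.contents = 'UNDO'
ORDER BY c.con_id;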

You cannot flashback a PDB because doing so requires rolling back the transactions that were ongoing at the point in time you flash back to. That information was in the UNDO tablespace at that time, but it is not there anymore.

It’s the same idea with Point-In-Time Recovery of a PDB. You need to restore the UNDO tablespace to get the UNDO records as of that point in time, but you cannot restore it in place because it is shared with other PDBs that need the current information. This is why you need an auxiliary instance for PDB PITR in 12.1.

Cloning a PDB cannot be done while it has ongoing transactions, because their UNDO is not in the PDB. This is why it can be done only when the PDB is read-only.

12.2 Local UNDO

In 12.2 you can choose to have one UNDO tablespace per PDB, in local undo mode, which is the default in DBCA:

[Image: CaptureLocalUndo000]

With local undo PDBs are truly isolated even when opened with ongoing transactions:

[Image: CaptureLocalUndo002]

Look at the ‘RB segs’ column from RMAN report schema:

[oracle@OPC122 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Tue Nov 8 18:53:46 2016
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=901060295)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 880 SYSTEM YES /u02/app/oracle/oradata/CDB1/system01.dbf
3 710 SYSAUX NO /u02/app/oracle/oradata/CDB1/sysaux01.dbf
4 215 UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB1/pdbseed/system01.dbf
6 560 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB1/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/app/oracle/oradata/CDB1/users01.dbf
8 180 PDB$SEED:UNDOTBS1 NO /u02/app/oracle/oradata/CDB1/pdbseed/undotbs01.dbf
9 270 PDB1:SYSTEM YES /u02/app/oracle/oradata/CDB1/PDB1/system01.dbf
10 590 PDB1:SYSAUX NO /u02/app/oracle/oradata/CDB1/PDB1/sysaux01.dbf
11 180 PDB1:UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/CDB1/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /u04/app/oracle/oradata/temp/temp01.dbf
2 64 PDB$SEED:TEMP 32767 /u04/app/oracle/oradata/temp/temp012016-10-04_11-34-07-330-AM.dbf
3 100 PDB1:TEMP 100 /u04/app/oracle/oradata/CDB1/PDB1/temp012016-10-04_11-34-07-330-AM.dbf

You have an UNDO tablespace in ROOT, in PDB$SEED and in each user PDB.

If you have a database in shared undo mode, you can move to local undo mode while the database is started in ‘migrate’ mode. PDBs will then get a local UNDO tablespace created automatically the next time they are opened. You can also create an UNDO tablespace in PDB$SEED.
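
A minimal sketch of that migration, based on the documented 12.2 syntax (adapt to your environment):

shutdown immediate
startup upgrade                -- the 'migrate' mode mentioned above
alter database local undo on;
shutdown immediate
startup
-- verify the mode:
select property_value from database_properties where property_name = 'LOCAL_UNDO_ENABLED';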

Yes, in 12.2, you can open the PDB$SEED read/write for this purpose:


18:55:59 SQL> alter pluggable database PDB$SEED open read write force;
 
Pluggable database altered.
 
18:56:18 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ WRITE NO
3 PDB1 READ WRITE NO
18:56:23 SQL> alter pluggable database PDB$SEED open read only force;
 
Pluggable database altered.

But remember this is only allowed for local undo migration.

The recommendation is to run in local undo mode, even in Single-Tenant.

More about it in the 12cR2 Multitenant book:

 

This article Oracle 12cR2 multitenant: Local UNDO appeared first on Blog dbi services.

Oracle 12cR2 PL/SQL new feature: TNSPING from the database

Yann Neuhaus - Tue, 2016-11-08 11:55

Database links are resolved with the server TNS_ADMIN configuration (sqlnet.ora and tnsnames.ora). You can use tnsping to check the resolution, but that supposes you are on the server and have set the same environment as the one that started the database.
In 12.2 you have a new package to check that: DBMS_TNS. It’s the kind of little new feature that makes our life easier.

The easy way to verify a connection string is to use tnsping. Here is an example with an EZCONNECT resolution:

[oracle@SE122 ~]$ tnsping //10.196.234.38/CDB1.opcoct.oraclecloud.internal
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 08-NOV-2016 17:45:34
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))
OK (0 msec)

The full connection description is displayed here before contacting the listener.

This resolution is valid only with a specific TNS configuration (which is here /u01/app/oracle/product/12.2.0/dbhome_1/network/admin). However, you may have different configurations (using the TNS_ADMIN environment variable) and if it’s not set consistently, you may have different results.
Basically:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database
DBMS_TNS

So here is this new package:

SQL> desc dbms_tns
FUNCTION RESOLVE_TNSNAME RETURNS VARCHAR2
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
TNS_NAME VARCHAR2 IN

And you can run it when connected to the database to see how the name is resolved:

SQL> select dbms_tns.resolve_tnsname('&_connect_identifier') from dual;
old 1: select dbms_tns.resolve_tnsname('&_connect_identifier') from dual
new 1: select dbms_tns.resolve_tnsname('//10.196.234.38/CDB1.opcoct.oraclecloud.internal') from dual
 
DBMS_TNS.RESOLVE_TNSNAME('//10.196.234.38/CDB1.OPCOCT.ORACLECLOUD.INTERNAL')
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))

The resolution is done without attempting to contact the listener. This IP address does not exist on my network:

select dbms_tns.resolve_tnsname('//10.1.1.1/XX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//10.1.1.1/XX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XX)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.1.1)(PORT=1521)))

As you can see, the client identification is sent here (PROGRAM and HOST).

Demo

I’ll use this new feature to prove my assumption above about which environment is used when connecting locally or through dynamic or static service.

I create 3 directories, each resolving the alias to a different SERVICE_NAME, in order to see which one is used:


mkdir -p /tmp/tns_lsnr ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_lsnr/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_lsnr/tnsnames.ora
mkdir -p /tmp/tns_sess ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_sess/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_sess/tnsnames.ora
mkdir -p /tmp/tns_inst; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_inst/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_inst/tnsnames.ora

In addition, I’ll need a listener configuration with a static service, let’s call it STATIC:


cat > /tmp/tns_lsnr/listener.ora <<END
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOSTNAME)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=$ORACLE_HOME)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
END

Here’s a summary of the different configurations:


$ tail /tmp/tns*/*
 
==> /tmp/tns_inst/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_inst/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
==> /tmp/tns_lsnr/listener.ora <==
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SE122.compute-opcoct.oraclecloud.internal)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=/u01/app/oracle/product/122EE)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
==> /tmp/tns_lsnr/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_lsnr/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
==> /tmp/tns_sess/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_sess/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

I start the listener and the instance each with their own environment, and set the session environment to a third one:


export TNS_ADMIN=/tmp/tns_lsnr ; lsnrctl start
export TNS_ADMIN=/tmp/tns_inst ; sqlplus / as sysdba <<< startup
export TNS_ADMIN=/tmp/tns_sess

Now it’s time to use this new DBMS_TNS when connecting locally, through the dynamic service (CDB1) and through the static service (STATIC):


SQL> connect system/oracle
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected locally, the TNS_ADMIN from my shell environment running sqlplus is used.


SQL> connect system/oracle@//localhost/CDB1
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected through the dynamic service, the TNS_ADMIN used to start the instance is used.


SQL> connect system/oracle@//localhost/STATIC
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected through the static service, the TNS_ADMIN used to start the listener is used.

So what?

You should use a consistent environment setting in order to be sure that all sessions will use the same name resolution. But if you have a doubt about it, DBMS_TNS can help you troubleshoot. It’s better than DBMS_SYSTEM.GET_ENV because it does the name resolution rather than just showing the environment variables.
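
For comparison, here is the kind of check you would do with DBMS_SYSTEM.GET_ENV; it only shows the raw environment variable and does no name resolution (a quick sketch from SQL*Plus):

SQL> variable tns_admin varchar2(256)
SQL> exec sys.dbms_system.get_env('TNS_ADMIN', :tns_admin)
SQL> print tns_admin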

Want to know quickly where all database links are going? Here it is:

SQL> select username,dbms_tns.resolve_tnsname(host) from cdb_db_links;

 

This article Oracle 12cR2 PL/SQL new feature: TNSPING from the database appeared first on Blog dbi services.

Oracle 12.2.0.1 Manuals available now!

Marco Gralike - Tue, 2016-11-08 10:00
Have a look at what’s new and cool in the full Oracle 12.2.0.1 manuals / documentation…

Oracle 12cR2 : Partitioning improvements – auto-list partitioning

Yann Neuhaus - Tue, 2016-11-08 09:16

The second release of Oracle Database 12c (12cR2) comes with new improvements regarding partitioning.
In this blog post I’ll show one of them: auto-list partitioning.
Auto-list partitioning is an extension of list partitioning. It enables the automatic creation of partitions for new values inserted into the partitioned table.
[Image: auto-list-part]
Image: "Oracle Partitioning in Oracle Database 12c Release 2" – Hermann Bär

Let’s have a look at the syntax. As you can see below, you only have to specify the new “AUTOMATIC” keyword after the “PARTITION BY LIST” clause:
SQL> CREATE TABLE city(
city_zip NUMBER(5),
city_name VARCHAR2(30)
)
PARTITION BY LIST (city_name) AUTOMATIC
(PARTITION p_delemont VALUES ('Delemont'));


Table created.
SQL>

The auto-list partitioned table is created with only one partition, explicitly created at table creation:
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- ------------------------------ ------------------
P_DELEMONT 'Delemont' 1

Let’s insert new cities into the table:
SQL> INSERT INTO city VALUES (2800, 'Delemont');
1 row created.

SQL> INSERT INTO city VALUES (4001, 'Basel');
1 row created.

SQL> INSERT INTO city VALUES (8001, 'Zurich');
1 row created.

SQL> INSERT INTO city VALUES (1000, null);
1 row created.

Great! Every insert succeeds without the famous partitioning error message: “ORA-14400: inserted partition key does not map to any partition”.
And if we check the partitions…
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- -------------------- ------------------
P_DELEMONT 'Delemont' 1
SYS_P5004 'Basel' 2
SYS_P5005 'Zurich' 3
SYS_P5006 '' 4

…one partition has been automatically created for each value. Note that a partition has also been created to store the “null” values.
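
If you prefer meaningful names over the system-generated SYS_P ones, the auto-created partitions can be renamed afterwards, for example (using a partition name from the listing above):

SQL> ALTER TABLE city RENAME PARTITION SYS_P5004 TO p_basel;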

Now, if you want to evolve an existing list-partitioned table into an auto-list partitioned table, you can do it easily and in one shot (here on another table, CARS, which still has a DEFAULT partition):
SQL> ALTER TABLE cars SET AUTOMATIC;
ALTER TABLE cars SET AUTOMATIC
*
ERROR at line 1:
ORA-14852: SET [SUB]PARTITIONING AUTOMATIC is not legal on this table.

Oops, this error occurs because the table contains a DEFAULT partition. As you can imagine, it is no longer necessary to have one, so you MUST drop it before moving to auto-list partitioning (take care of the data stored inside it! ;-) ):
SQL> ALTER TABLE cars DROP PARTITION (P_DEFAULT);

Table altered.

SQL> ALTER TABLE cars SET AUTOMATIC;

Table altered.

The table is now auto-list partitioned:
SQL> SELECT partitioning_type, autolist, partition_count FROM dba_part_tables WHERE table_name = 'CITY';

PARTITIONING_TYPE AUT PARTITION_COUNT
------------------------------ --- ---------------
LIST YES 4

Conclusion:
From my point of view, this partitioning improvement is a really good one. Indeed, if your list-partitioned table has a large number of distinct values, creating and managing partitions for each of them can become onerous work. But now with 12cR2 you can forget this task: partition creation for new values is automated!

 

This article Oracle 12cR2 : Partitioning improvements – auto-list partitioning appeared first on Blog dbi services.

Creating A New Theme in Oracle Sites Cloud Service

WebCenter Team - Tue, 2016-11-08 08:32

Authored by: Igor Polyakov, Senior Principal Product Manager, Oracle

Creating a new site template in Oracle Sites Cloud Service (SCS) 16.3.5 is easy – you can simply select an existing site in the Manager UI and then save it as a new template. However, what if you want to create a template based on a site built with a brand new theme that you want to develop at the same time? This 9-step process provides step-by-step instructions on how to achieve that.

Step 1: Get a Bootstrap Theme
You can develop a new theme using an HTML cut that you received from a web design agency or a Bootstrap theme that you found on the Web. To follow the instructions in this document, you will use the Bootstrap theme Modern Business as an example to create a new SCS theme.

Download the ModernBusiness.zip archive attached to this document to your desktop and extract it.

Step 2: Create a New Site using Starter Template

Log in to SCS, navigate to Sites > Templates and then create a new site using the StarterTemplate. When prompted, enter ‘ModernBusiness’ as the name for the new site.

Note: If the StarterTemplate is not available in the catalog, upload the StarterTemplate.zip template package to a folder in DOCS and then, in the Template catalog, select the Create > Import a Template Package option. When prompted, select the StarterTemplate.zip file that you uploaded and click OK to import the template. After site creation is completed, you will see the new site ModernBusiness in the list of sites. This site is created using the StarterTheme that was installed with the StarterTemplate. Under Themes in the Manager UI, select the StarterTheme and copy it to a new theme, ModernBusinessTheme.

Open the ModernBusiness site in the Site Builder, select Edit mode and then select the Design option on the left bar to open the Design palette. On the palette, select Themes. When prompted, select ModernBusinessTheme as the theme for the ModernBusiness site.

Note: In the October release of Sites Cloud Service, every time you use StarterTemplate to create a new site both <New_Site> and <New_Site>Theme will be created for you. This is different from using a standard template – all sites created from a non-starter template share the same theme.

To continue reading and see steps 3-9, please click here

EDB Postgres Advanced Server 9.6 BETA released

Yann Neuhaus - Tue, 2016-11-08 05:25

Yesterday EnterpriseDB released the beta of the next version of Postgres Advanced Server. You can sign up for the beta program here and download the software here.

As usual this release includes all the features of the community release of PostgreSQL (version 9.6) plus the EDB additions such as Oracle compatibility. The release notes are here.

 

Cet article EDB Postgres Advanced Server 9.6 BETA released est apparu en premier sur Blog dbi services.

Paco on Continuing Investment in PeopleSoft

Duncan Davies - Tue, 2016-11-08 05:00

There’s a great episode of Marc Weintraub’s PeopleSoft Talk interview series, featuring a 30-minute discussion with Paco Aubrejuan. There’ll be some great points for everyone to take away from it; however, here are my highlights:

On the current Support End date of December 2027:

There’s no plan on ending support for PeopleSoft then, it’s actually not that important a date. It happens to be the date right now that our lifetime support goes to … that probably won’t be the case and as we get closer to that date the plan is to move those dates out.

On Continued Investment:

For me investment and support are one and the same. It’s hard to support applications without continuing to enhance them as well. We’re planning to support and enhance them through that date.

On Fluid Adoption:

We have 2-300 customers who are live on it, and many of whom aren’t live with just a few applications but with Fluid across the board. We’ve got to that hockey stick period in terms of adoption where the majority of our customers who are doing projects or upgrades are deploying Fluid.

On replacing SES with Elasticsearch:

“it’s easier, cheaper, faster, better for customers to run Elasticsearch versus SES”

plus lots more on Cloud, Fluid Approvals and Selective Adoption. It’s well worth a watch:

 

 


Oracle Data Visualization Desktop: Star Schemas and Multiple Fact Sources

Rittman Mead Consulting - Tue, 2016-11-08 04:00

Once added to a project, the columns I specified with my custom SQL query now show up as one source.

Now that I have a custom data source with only the data I want to explore, I can easily create an analysis without having to sift through multiple data sources.

* A note about Data Visualization Desktop and caching: When using the above method of writing SQL to create a virtual table, it is loaded into the cache. You should only use this method for very compact models. If the virtual table contains too much data, you can still add it as a data source but it may be too big to cache, causing your columns not to load when creating analyses.

Although having one fact source is common in relational models, using multiple fact sources is sometimes unavoidable in dimensional modeling.

In my sample data, I have another schema called GCBC_SURVEYS, which contains two fact tables holding satisfaction scores for customers and staff, and one dimension table containing the organization that conducted the surveys.

For this example, I’m going to try to add each table as a data source manually first and attempt to join the two fact tables to my dimension table. When using this method, pay close attention and change any key and ID columns from Measure to Attribute so they aren’t aggregated. Data Visualization Desktop sees a numeric datatype and assumes it’s a measure.

Once I've added all of the GCBC_SURVEYS tables as data sources, I’m going to load them into a project and create my joins using the source diagram. When I join each fact table to the dimension table on SURVORG_ID, notice how DVD automatically creates a join between my two fact tables.

This is not desirable because, due to the presence of a circular join, we run the risk of double counting. When I try to break the join between the two fact tables, DVD asks which data source I want to break conformance from.

When I select one of the fact sources, it will not only break the join between the two fact sources but also the join between the fact and the dimension table.

As of this writing, I have not found a way to only break joins between fact tables if they are using the same key to connect to the dimension table.

The only workaround I’ve found is to write a SQL statement that pulls in the columns and creates the joins in one virtual table. This way I can specify the joins without DVD creating one between the fact sources.
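
A rough sketch of what such a virtual table could look like; apart from the GCBC_SURVEYS schema and the SURVORG_ID key, the table and column names below are made up for illustration:

-- hypothetical fact and dimension names, joined only through the dimension key
SELECT o.survorg_id,
       o.organization_name,
       c.customer_satisfaction_score,
       s.staff_satisfaction_score
FROM   gcbc_surveys.survey_orgs o
JOIN   gcbc_surveys.customer_satisfaction c ON c.survorg_id = o.survorg_id
JOIN   gcbc_surveys.staff_satisfaction s ON s.survorg_id = o.survorg_id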

Once I created my virtual table, I could use it to create a report and use both fact sources.

Although it can take some time to set up all the data sources you want to use for your model, Data Visualization Desktop packs some powerful features when it comes to dimensional modeling. The ability to use more than one fact source when needed adds another area of flexibility to DVD. Whether you are pulling in each table and then creating your model or writing SQL to create one virtual table, DVD has the flexibility to accommodate a variety of different scenarios.

Categories: BI & Warehousing

Troubleshooting Cloning Issues in EBS 12.1

Steven Chan - Tue, 2016-11-08 02:06

The Rapid Clone tool is used to create a working identical copy of an existing Oracle E-Business Suite 12.1 environment.  There are several ways of using Rapid Clone, including cloning a single node environment to another single node environment, adding additional nodes to an existing environment, and reducing nodes when cloning a multinode environment to a single node clone. 

Cloning EBS 12.1.3

The guide to using Rapid Clone in EBS 12.1 environments is:

When things go wrong

Given the variety of ways that this tool can be used, it is inevitable that some things might not go as expected.  When that happens, it's helpful to have a troubleshooting framework to narrow down the possible root causes and identify a solution. 

If you've encountered a problem with Rapid Clone, your first stop should be:

This excellent Note covers the most-common questions and issues associated with Rapid Clone, such as:

  • Location of cloning log files
  • Missing prerequisite patches
  • Preparing the source system's database and application tiers
  • The expected layout of the cloning stage area for both the applications and database tiers
  • Execution of the cloning process
  • Inventory registration issues
  • Common issues when cloning the database tier
  • Known issues

Related Articles


Categories: APPS Blogs

Copy table data From One DB to Another DB

Tom Kyte - Mon, 2016-11-07 23:06
Hi Team, I need to Copy table data From One DB to Another DB. One approach I can recollect from one of the conversation asked in asktom.oracle.com is that create database link and simply execute - insert into local_table select * from table@...
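
A minimal sketch of the approach mentioned in the question; the link name, connect string and table names are placeholders:

create database link remote_db
  connect to scott identified by tiger
  using '//remotehost:1521/REMOTESVC';

insert into local_table
  select * from remote_table@remote_db;
commit;
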
Categories: DBA Blogs

export import

Tom Kyte - Mon, 2016-11-07 23:06
Hi team, I wanted to know that when i export entire schema from one database and import into anothere database then- Objects like - tables,functions,triggers,procedures,dblinks,synonyms,public synonyms and many more. Which objects export dur...
Categories: DBA Blogs

SQL not using index

Tom Kyte - Mon, 2016-11-07 23:06
Tom, There is an index on a table, but that index is not being used by SQL(even with hint). Can you please tell if something is wrong with the syntax? Below is the definition of the index on the table <b>XLA.XLA_TRANSACTION_ENTITIES</b> (PS:...
Categories: DBA Blogs

How to find the tables of a particular string

Tom Kyte - Mon, 2016-11-07 23:06
Hello there, I'm trying to find the name of the table and column in which it has the particular string. The below code searches for the string in the whole database and prints it out. However I wanted to use wild card because there are instance...
Categories: DBA Blogs

TEMPORARY TABLESPACE

Tom Kyte - Mon, 2016-11-07 23:06
HOW CAN I DETERMINE THAT WHAT SHOULD BE THE SIZE OF TEMPORARY TABLESPACE FOR ORACLE DATABASE? WHAT ARE THE MEASURES TO BE CONSIDERED FOR SIZING TEMPORARY TABLESPACE? HOW SHOULD I RESIZE THE TEMPORARY TABLESPACE SO THAT WE DO NOT ENCOUNTER ER...
Categories: DBA Blogs

Could not able to drop an empty tablespace. ORA-23515: materialized views and/or their indices exist in the tablespace

Tom Kyte - Mon, 2016-11-07 23:06
Hi Tom, I have been trying to drop a tablespace, but it is showing the below error. I've checked under dba_segments from any Mviews but I couldn't find anything. 1) SQL> drop tablespace GOLFX including contents and datafiles; drop tablespace ...
Categories: DBA Blogs

Drop Schema

Tom Kyte - Mon, 2016-11-07 23:06
Hi, if I drop a schema with DROP SCHEMA <name> RESTRICT, will Oracle also drop the USER associated with that schema?
Categories: DBA Blogs

should i use users tablespace?

Tom Kyte - Mon, 2016-11-07 23:06
hi tom. i am junior dba. my senior dba said to me today that i should not create users in USERS tablespace, if this users create objects, cause this objects will be created in USERS tablespace and that is somehow bad. so my question to him was why...
Categories: DBA Blogs

Oracle Public Cloud: create a database from command line

Yann Neuhaus - Mon, 2016-11-07 15:11

You love the Oracle Public Cloud with its simple web interface? Great. But what if you want to automate database creation from the command line?
Easy, with curl and the REST API.

JSON

First, you need to create a JSON file with all the information for your service.
Everything is documented: https://apicatalog.oraclecloud.com/ui/views/apicollection/oracle-public/database/1.1/serviceinstances

$ cat createrequestbody.json
 
{
"description": "Mon Nov 7 21:03:39 CET 2016",
"edition": "EE_HP",
"level": "PAAS",
"serviceName": "CDB122",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"version": "12.2.0.1",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"parameters": [
{
"type": "db",
"usableStorage": "15",
"adminPassword": "P4ss#ord",
"sid": "CDB122",
"pdbName": "PDB1",
"failoverDatabase": "no",
"backupDestination": "NONE"
}
] }

You can see that you have exactly the same information as from the GUI.

Create Instance

Then you run the following curl command (with the cacert.pem certificate in the current directory):

$ curl --include --request POST --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" --header "Content-Type:application/json" --data @createrequestbody.json https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:03:59 GMT

Here “opcoct” is my identity domain id; you find it in the X-ID-TENANT-NAME header and in the URL.
myuser@oracle.com:P4ss#ord is the user and password in the domain.

From the GUI you can see that the creation has started:

[Image: CaptureDBAASREST01]

DBaaS instance information

Here is the information for the database service


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:07:52 GMT
content-length: 1244
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"num_nodes": 1,
"level": "PAAS",
"edition": "EE_HP",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"creation_job_id": "2738110",
"num_ip_reservations": 1,
"backup_destination": "NONE",
"failover_database": false,
"rac_database": false,
"sid": "CDB122",
"pdbName": "PDB1",
"demoPdb": "",
"listenerPort": 1521,
"timezone": "UTC",
"is_clone": false,
"clone_supported_version": "16.3.1",
"active_jobs": [
{
"active_job_operation": "start-db-compute-resources",
"active_job_id": 2738113,
"active_job_messages": [] },
{
"active_job_operation": "create-dbaas-service",
"active_job_id": 2738110,
"active_job_messages": [] }
],
"compute_site_name": "EM003_Z19",
"jaas_instances_using_service": ""
}

The status is ‘in progress’. Let’s look at the compute service.

Compute instance information

From the compute service, you can see whether an IP address has already been assigned:


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122/servers
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:08:35 GMT
content-length: 430
 
[{
"status": "Running",
"creation_job_id": "2738110",
"creation_time": "2016-11-07T20:03:59.524+0000",
"created_by": "myuser@oracle.com",
"shape": "oc3",
"sid": "CDB122",
"pdbName": "PDB1",
"listenerPort": 1521,
"connect_descriptor": "CDB122:1521\/PDB1",
"connect_descriptor_with_public_ip": "null:1521\/PDB1",
"initialPrimary": true,
"storageAllocated": 142336,
"reservedIP": "",
"hostname": "CDB122"
}]

No IP address yet. I have the job id (2738110) so that I can check it later.

Job information


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:09:08 GMT
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"message": ["Starting Compute resources..."],
"job_start_date": "Mon Nov 07 20:04:01 GMT 2016",
"job_status": "InProgress",
"job_operation": "create-dbaas-service",
"job_request_params": {
"edition": "EE_HP",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"count": "2",
"provisioningTimeout": "180",
"subscriptionType": "MONTHLY",
"createStorageContainerIfMissing": "false",
"dbConsolePort": "1158",
"listenerPort": "1521",
"serviceName": "CDB122",
"namespace": "dbaas",
"version": "12.2.0.1",
"timezone": "UTC",
"pdbName": "PDB1",
"level": "PAAS",
"tenant": "opcoct",
"serviceInstance": "CDB122",
"description": "Mon Nov 7 21:03:39 CET 2016",
"failoverDatabase": "false",
"emExpressPort": "5500",
"ncharset": "AL16UTF16",
"trial": "false",
"sid": "CDB122",
"noRollback": "false",
"operationName": "create-dbaas-service",
"goldenGate": "false",
"backupDestination": "NONE",
"ibkup": "false",
"charset": "AL32UTF8",
"serviceVersion": "12.2.0.1",
"shape": "oc3",
"identity_domain_id": "opcoct",
"serviceType": "dbaas",
"usableStorage": "15",
"disasterRecovery": "false",
"server_base_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/",
"computeSiteName": "EM003_Z19",
"isRac": "false"
}
}

REST Endpoint

Here my test is on the EMEA datacenter, which is why the URL starts with https://dbcs.emea.oraclecloud.com.
If you don’t know your endpoint, you can check on My Cloud Services, where you have all the information:
[Image: CaptureDBAASREST02]

 

This article Oracle Public Cloud: create a database from command line appeared first on Blog dbi services.

Enqueue Bytes – Is that a Pun?

Pythian Group - Mon, 2016-11-07 13:53

Sometimes it is necessary to put on your uber-geek hat and start using cryptic bits of code to retrieve information from an Oracle database. Troubleshooting enqueue locking events in Oracle databases is one of the times some advanced SQL may be necessary.

Likely you have used SQL similar to the following when troubleshooting Oracle enqueues, probably in connection with row lock contention.

SQL# l
  1  SELECT
  2     s.username username,
  3     s.sid,
  4     e.event event,
  5     e.p1text,
  6     e.p1,
  7     e.state
  8  FROM v$session s, v$session_wait e
  9  WHERE s.username IS NOT NULL
 10     AND s.sid = e.sid
 11     AND e.event LIKE '%enq:%'
 12* ORDER BY s.username, UPPER(e.event)
 
USERNAME    SID EVENT                          P1TEXT                P1 STATE
---------- ---- ------------------------------ ---------- ------------- ----------
JKSTILL      68 enq: TX - ROW LOCK contention  name|mode     1415053318 WAITING
 
1 ROW selected.

The value for P1 is not very useful as is; Oracle has encoded the type of enqueue and the requested mode into the column. When working with current events such as when selecting from v$session, it is simple to determine the type of lock and the mode requested by querying v$lock, such as in the following example:

  1* SELECT sid, TYPE, request, block FROM v$lock WHERE sid=68 AND request > 0
SQL# /
 
 SID TY    REQUEST      BLOCK
---- -- ---------- ----------
  68 TX          6          0
 
1 ROW selected.

Session 68 is waiting on a TX enqueue with a requested lock mode of 6. Seasoned Oracle DBAs will recognize this as classic row lock contention.
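
As an aside, when the wait is happening live you can also see the blocker directly from v$session (a quick sketch):

-- find blockers for current TX enqueue waits
SELECT sid, blocking_session, event, seconds_in_wait
FROM v$session
WHERE event LIKE 'enq: TX%';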

Why bother to find out just which type of enqueue this is? There are many types of locks in Oracle, and they occur for differing reasons. The TX lock is interesting as it can occur not only in Mode 6 but Mode 4 as well; Mode 4 refers to locks that involve unique keys, such as when 2 or more sessions try to insert the same value for a primary key. The following example shows just that:

SQL# @s
 
USERNAME    SID EVENT                          P1TEXT                P1 STATE
---------- ---- ------------------------------ ---------- ------------- ----------
JKSTILL      68 enq: TX - ROW LOCK contention  name|mode     1415053316 WAITING
 
 
1 ROW selected.
 
SQL# @l
 
 SID TY    REQUEST      BLOCK
---- -- ---------- ----------
  68 TX          4          0
 
1 ROW selected.

Knowing just which lock mode is requested is vital, as the troubleshooting for TX Mode 4 locks will be different from what is used to troubleshoot Mode 6.

Though we can find the lock name and mode information in v$lock, there is still value in being able to decipher that cryptic P1 column.

The ASH and AWR facilities do not include any historical information for the lock name and mode; the P1 column found in v$active_session_history and dba_hist_active_sess_history does not have a corresponding dba_hist_lock view. Any research done after an event has occurred does require decoding this information.

Deciphering v$session.p1

Oracle Support document 34566.1 is the enqueue reference note that provides information needed to get the lock name and mode from the p1 column. As you will see this information is a bit puzzling.

The rest of this article will focus on TX Mode 6 locks. The value shown for this lock and mode in the P1 column is always 1415053318. Following is the SQL recommended by Oracle:

 SELECT chr(to_char(bitand(p1,-16777216))/16777215)||
         chr(to_char(bitand(p1, 16711680))/65535) "Lock",
         to_char( bitand(p1, 65535) )    "Mode"
    FROM v$session_wait
   WHERE event = 'enqueue'

As I currently have some planned row lock contention in a test database we can run this query:

  1   SELECT chr(to_char(bitand(p1,-16777216))/16777215)||
  2           chr(to_char(bitand(p1, 16711680))/65535) "Lock",
  3           to_char( bitand(p1, 65535) )    "Mode"
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
Lo Mode
-- ----------------------------------------
TX 4

Probably it is not very clear why this SQL works. Let’s try and understand it.
(Note that a small change had to be made to the WHERE clause.)

Converting the P1 value to hex may be helpful

1415053318 = 0x54580006

The two lower order bytes represent the lock mode that has been requested. This can be seen here to be 0x06, which simply translates to decimal 6 (I can do this one in my head).

The next two bytes are also in hex and represent the two letters of the lock name.

0x54 = 84 = ‘T’
0x58 = 88 = ‘X’

Using string functions it is simple to extract the values from the hex string, convert them to numbers and retrieve the lock name and mode.

SQL# define p1 = 1415053318
 
SQL# l
  1  WITH hex AS (
  2     SELECT TRIM(to_char(&p1,'XXXXXXXXXXXXXXXX')) hexnum FROM dual
  3  ),
  4  hexbreak AS (
  5     SELECT hexnum
  6        , to_number(substr(hexnum,1,2),'XXXXXXXX') enq_name_byte_1
  7        , to_number(substr(hexnum,3,2),'XXXXXXXX') enq_name_byte_2
  8        , to_number(substr(hexnum,5),'XXXXXXXX') enq_mode
  9  FROM hex
 10  )
 11  SELECT
 12     hexnum
 13     , chr(enq_name_byte_1)
 14     || chr(enq_name_byte_2) enqueue_type
 15     , enq_mode
 16* FROM hexbreak
SQL# /
 
HEXNUM            EN   ENQ_MODE
----------------- -- ----------
54580006          TX          6

While that does work, my inner geek wants to investigate those bitmasks and find out why they work. Next are the bitmasks in decimal along with the hex equivalent.

-16777216 = 0xFFFFFFFFFF000000
 16777215 = 0xFFFFFF
 16711680 = 0xFF0000
    65535 = 0xFFFF

The bitand function is used to mask all unwanted bits to 0. The number is then divided by the value needed to remove all of the now-zeroed-out lower order bytes.

The values being used as bitmasks are -16777216 and 16711680. The use of -16777216 does not seem to make sense. As the intent is to mask all but one byte, I would expect to find an FF surrounded by a number of zeroes. The bit mask of 16711680, however, looks fine.

Now let’s run the Oracle support query again, but modified to show just the integer values rather than converting them to ASCII.

 
  1  SELECT bitand(p1,-16777216)/16777215,
  2           bitand(p1, 16711680)/65535,
  3           bitand(p1, 65535)
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
BITAND(P1,-16777216)/16777215 BITAND(P1,16711680)/65535 BITAND(P1,65535)
----------------------------- ------------------------- ----------------
                    84.000005                88.0013428                6

Well, that is interesting. An implicit conversion is taking place with to_char() that is removing the decimal portion of these numbers. Is that being done with trunc(), round(), or something else? I don’t know the answer to that. What seems more important is just doing the math correctly.

There are a couple of things here that can be changed to make this work as expected.

A New BitMask

Let’s modify the first bitmask to something that seems more reasonable than -16777216. Let’s use this instead, as it masks only the single byte we need:

4278190080 = 0xFF000000

Lets’ try it out:

SQL# l
  1  SELECT bitand(p1,4278190080)/16777215,
  2           bitand(p1, 16711680)/65535,
  3           bitand(p1, 65535)
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
BITAND(P1,4278190080)/16777215 BITAND(P1,16711680)/65535 BITAND(P1,65535)
------------------------------ ------------------------- ----------------
                     84.000005                88.0013428                6

While the new bitmask didn’t break anything, it does not appear to have helped either.

Off By One Error

The solution is to consider the divisors used to remove the lower order zeroes; each of them is off by one. That is easy enough to verify:

SQL# l
  1  SELECT bitand(p1,4278190080)/16777216,
  2           bitand(p1, 16711680)/65536,
  3           bitand(p1, 65535)
  4      FROM v$session_wait
  5*    WHERE event LIKE '%enq:%'
SQL# /
 
BITAND(P1,4278190080)/16777216 BITAND(P1,16711680)/65536 BITAND(P1,65535)
------------------------------ ------------------------- ----------------
                           84                        88                6

Ah, that did it! But what was the problem previously?

Old Divisor Values

The original divisors are off by 1, which does not completely remove the lower order values.

 16777215 = 0xFFFFFF
    65535 = 0xFFFF

Increasing each by one has the desired effect.

New Divisor Values
 16777216 = 0x1000000
    65536 = 0x10000
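
With the corrected mask and divisors, the same decoding can be applied to historical data, where v$lock can no longer help. A sketch against dba_hist_active_sess_history (which requires the Diagnostics Pack license):

-- decode lock name and mode from historical ASH samples
SELECT sample_time, session_id,
   chr(bitand(p1, 4278190080)/16777216) ||
   chr(bitand(p1, 16711680)/65536) lock_name,
   bitand(p1, 65535) lock_mode
FROM dba_hist_active_sess_history
WHERE event LIKE 'enq:%';
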
Conclusion

Those odd bitmasks have been in the back of my mind for some time, and today it seemed like a good idea to dig in and find out more about them. It isn’t too hard to imagine that in some cases the wrong values might be returned, leading to some long and unproductive troubleshooting sessions.

There is a demo script enqueue-bitand.sql containing much of the SQL found in this article. There is also a script awr-top-sqlid-events.sql that incorporates the enqueue lock decoding. This script could be made better than it is, so please issue a pull request if you have some useful modifications.

Categories: DBA Blogs

Reorg

Jonathan Lewis - Mon, 2016-11-07 11:31

A current question on the OTN database forum asks: “What’s the difference between object and tablespace reorganization?” Here’s an analogy to address the question.

I have three crates of Guinness in the boot (trunk) of my car, one crate has 4 bottles left, one has 7 bottles left and one has 2 bottles. I also have two cases of Louis Roederer Brut NV champagne, one case has 2 bottles left and one has only one. (I have two objects in my tablespace – one of type Beer, one of type Champagne – and my boot requires manual free space management.)

I move all the Guinness bottles into a single crate and all the champagne bottles into a single case. That’s a couple of “shrink space compact” calls – I’ve re-organised the objects to get all the bottles in each object close to each other, but the crates are still taking up space in the boot.

I take the two empty crates and the empty case out of the boot. That’s a couple of “resize” (or “shrink space” without “compact”) calls that free up space in the boot.

I now want to load a port barrel into the car, but it won’t fit until I slide the remaining beer crate and champagne case together at one side of the boot. That’s a couple of “move” commands that have reorganized the boot (tablespace) to make the free space usable.
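
For reference, a rough sketch of the commands the analogy maps to (the object, tablespace and datafile names are made up):

alter table beer enable row movement;
alter table beer shrink space compact;   -- consolidate the rows ("bottles into one crate")
alter table beer shrink space;           -- release the freed space ("take the empty crates out")
alter table beer move;                   -- relocate the segment ("slide the crate to one side")
alter database datafile '/u01/oradata/boot01.dbf' resize 100m;  -- shrink the boot itself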

 

