Feed aggregator

drop pluggable database does not remove all files

Tom Kyte - Wed, 2016-11-23 05:46
Hello, I use the multitenant architecture (CDB) in my daily deployments. On my Oracle server I have two "model" pluggable databases, PDB10 and PDB11. These databases are used in the cloning process. Using the following statement I make duplicates of these databases, e.g ...
Categories: DBA Blogs

Oracle Apps Database

Tom Kyte - Wed, 2016-11-23 05:46
Hi, I would like to know: what is the difference between the Oracle Apps (R12) database and the Oracle Database (11g)?
Categories: DBA Blogs

Sql file and batch file

Tom Kyte - Wed, 2016-11-23 05:46
Hi, I have a sql file which takes a schema name prefix as a parameter. This file in turn calls various sql files, and all these files use this variable to do their job. I execute this file like this: sql> @execute_scripts.sql &&1 Enter ...
Categories: DBA Blogs

12C - what does the C stand for

Tom Kyte - Wed, 2016-11-23 05:46
Hi, in the Oracle 12c database, does the C mean Cloud?
Categories: DBA Blogs

Logic to get the first row with one column value of the last duplicate row among all the duplicates

Tom Kyte - Wed, 2016-11-23 05:46
I have a table A with the following data: ID Acc_no start_date verify_date completed_date ===========================================================================...
Categories: DBA Blogs

Full Export to testing with different S.O schema

Tom Kyte - Wed, 2016-11-23 05:46
Hi, I want to know if it is possible to import on a testing server with different file locations. For example: -- A prod server with: /o1/oradata : datafiles /ssd1/oradata : datafiles -- A testing server with: /o1/oradata : da...
Categories: DBA Blogs

12cR2: Upgrade by remote clone with TDE in DBaaS

Yann Neuhaus - Wed, 2016-11-23 00:20

Upgrading from 12.1 to 12.2 is easy in Oracle Public Cloud DBaaS because you are in multitenant. Here is how to clone a 12.1 PDB to a 12.2 service.

I have a service HP121 in 12.1.0.2 with one pluggable database, PDB1, and a service HP122 in 12.2.0.1 with an empty CDB (only the CDB$ROOT and PDB$SEED containers).

Export TDE key

The Oracle Public Cloud uses Transparent Data Encryption to secure the datafiles. When you move the pluggable databases you need to export/import the encryption keys.

Here is the key:

18:42:58 HP121 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- ---------------------------------------- ---------
FILE /u01/app/oracle/admin/HP121/tde_wallet/ AUTOLOGIN
 
18:42:58 HP121 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AQqCc8XWV09uvxkaw0Bm5XUAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

The instance uses an auto-login wallet, and you cannot export the keys from it:

18:42:58 HP121 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

You need to open it with the password:

18:42:58 HP121 SQL>administer key management set keystore close;
keystore altered.
 
18:42:58 HP121 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
18:42:58 HP121 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- ---------------------------------------- ---------
FILE /u01/app/oracle/admin/HP121/tde_wallet/ PASSWORD

And then you can export it:

18:42:58 HP121 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

Import TDE key

I copy the file /tmp/cdb2pdb1.p12 to the destination (scp) and then I can import it, providing the same ‘secret’. Here again I have to open the wallet with the password, because the keys cannot be imported while the wallet is open in auto-login mode.

18:43:04 HP122 SQL>administer key management set keystore close;
keystore altered.
18:43:04 HP122 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
18:43:04 HP122 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

Database link

We need to create a database link to the source (don’t forget to open the port for the listener):

18:43:04 HP122 SQL>select dbms_tns.resolve_tnsname('//HP121/HP121.demnov.oraclecloud.internal') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//HP121/HP121.DEMNOV.ORACLECLOUD.INTERNAL')
--------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=HP121.demnov.oraclecloud.internal)(CID=
(PROGRAM=oracle)(HOST=HP122.compute-demnov.oraclecloud.internal)(USER=oracle)))(
ADDRESS=(PROTOCOL=TCP)(HOST=10.196.202.47)(PORT=1521)))
 
18:43:04 HP122 SQL>create database link HP121@HP121 connect to system identified by "Ach1z0#d" using '//HP121/HP121.demnov.oraclecloud.internal';
Database link created.
 
18:43:04 HP122 SQL>select host_name from v$instance@HP121@HP121;
 
HOST_NAME
----------------------------------------------------------------
HP121.compute-demnov.oraclecloud.internal

Remote clone

You need to have the source PDB1 opened read-only. A minimal sketch of that step on the source (assuming PDB1 is currently open read-write; this command sequence is not shown in the original transcript) is:
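
-- on the source CDB (HP121); hypothetical step, not part of the original transcript
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 open read only;

The cloning itself is then only one command: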

18:43:09 HP122 SQL>create pluggable database PDB1 from PDB1@HP121@HP121 keystore identified by "Ach1z0#d";
Pluggable database created.

Upgrade

Now that you have the PDB you can open it (because you have imported the TDE key), but the dictionary is still in 12.1, so you have to run:

[oracle@HP122 ~]$ dbupgrade -c PDB1
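
After the upgrade completes, a quick sanity check on HP122 (a hedged sketch, not part of the original post) could be:

-- hypothetical verification on HP122
alter pluggable database PDB1 open;
select con_id, name, open_mode from v$pdbs;
select version from v$instance;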

This is described in a previous post: http://blog.dbi-services.com/12cr2-how-long-to-upgrade-a-pdb/

 

The post 12cR2: Upgrade by remote clone with TDE in DBaaS appeared first on the dbi services Blog.

Webinar: Quality, Safety, Knowledge Management with Oracle WebCenter Content and ControlCenter

DATE: THURSDAY, DECEMBER 8, 2016
TIME: 10:00 A.M. PST / 1:00 P.M. EST

Join Ryan Companies Vice President of Construction Operations, Mike Ernst, and Fishbowl Solutions Product Manager, Kim Negaard, to learn how Ryan Companies, a leading national construction firm, found knowledge management success with ControlCenter for Oracle WebCenter Content.

In this webinar, you’ll hear first-hand how ControlCenter has been implemented as part of Ryan’s Integrated Project Delivery process, helping them create a robust knowledge management system that promotes consistent and effective operations across multiple regional offices. You’ll also learn how ControlCenter’s intuitive, modern user experience enabled Ryan to easily find documents across devices, implement recurring review cycles, and control both company-wide and project-specific documents throughout their lifecycle.

Register today.


The post Webinar: Quality, Safety, Knowledge Management with Oracle WebCenter Content and ControlCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

How to skip header and footer lines in a file while accessing records from an external table?

Tom Kyte - Tue, 2016-11-22 11:26
Hi Tom, you guys are doing a great job. How do I skip the header and footer (the first and last lines) in a .dat file while loading records with the external table concept? I am able to remove those lines in Unix, but they contain information about re...
Categories: DBA Blogs

Multiple Instances for One Database

Tom Kyte - Tue, 2016-11-22 11:26
Hi, can we create multiple instances on a single database? If we can, will we face any issues with that multiple-instance architecture? Thank you.
Categories: DBA Blogs

is the search_columns not part of building the plan_hash_value?

Tom Kyte - Tue, 2016-11-22 11:26
Hello there, I have a table with a composite primary key over 3 columns (a,b,c). Now there are 2 typical queries running: <code>select * from my_table t where t.a = ? and t.b = ?</code> <code>select * from my_table t where t.a = ? and t.b betw...
Categories: DBA Blogs

Complex sql

Tom Kyte - Tue, 2016-11-22 11:26
Hi Tom, I am a newbie to Oracle and I have a requirement. I even got stuck at creating the table, as I want to store only the time in one of the columns 'login' input 1 userid login 1 9:00 2 9:15 3 9:20 1 9:25 4 10:30 ...
Categories: DBA Blogs

FULL TABLE SCAN

Tom Kyte - Tue, 2016-11-22 11:26
Hi, I have a table test_tab with billions of rows. One of the columns used in the query below is TEST_KEY with datatype VARCHAR2(76). I have a requirement to fetch data from a particular start and end position of the string, which is in the WHERE clause below...
Categories: DBA Blogs

Store performance history of SQL

Tom Kyte - Tue, 2016-11-22 11:26
Hello Tom, please let me know if we can store the history of a SQL statement that we wish to; for example, I would like to store statistics like the hash plan, executions, elapsed time etc. for a given sqlid, which may not be a top 10 or top 100 SQL.
Categories: DBA Blogs

Upgrade from Oracle 11g to 12c

Tom Kyte - Tue, 2016-11-22 11:26
Hi, we have a stable application on an 11.2.0 database. The business is planning to upgrade the database from 11g to 12c. Kindly advise if there is anything in particular we need to take care of during this upgrade. Is it advisable to lock the existing explain plans befor...
Categories: DBA Blogs

Slowness on a new database

Tom Kyte - Tue, 2016-11-22 11:26
I have a primary database and a standby database. On a daily basis, during the night, we stop the replication between primary and standby and, using a storage technology, I create a clone of that standby database which I open as a standalone database in rea...
Categories: DBA Blogs

Delete/Insert

Jonathan Lewis - Tue, 2016-11-22 06:59

Many of the questions that appear on OTN are deceptively simple until you start thinking carefully about the implications; one such showed up a little while ago:

What i want to do is to delete rows from table where it matches condition upper(CATEGORY_DESCRIPTION) like ‘%BOOK%’.

At the same time i want these rows to be inserted into other table.

The first problem is this: how carefully does the requirement need to be stated before you can decide how to address it? Trying to imagine awkward scenarios, or boundary conditions, can help to clarify the issue.

If you delete before you insert, how do you find the data to insert?

If you insert before you delete, what happens if someone updates a row you’ve copied so that it no longer matches the condition? Would it matter if the update changes the row in a way that leaves it matching the condition (so that what you’ve inserted is not totally consistent with what you’ve deleted)?

If you insert before you delete, and someone executes some DML that makes another row match the requirement, should you delete it (and how would you avoid deleting it) or leave it in place?

Once you start kicking the problem about you’ll probably come to the conclusion that the requirement is for the delete and insert to be self-consistent – in other words what you delete has to be an exact match for what you insert as at the time you inserted it. You’ll ignore rows that come into scope in mid-process due to other activity, and you’ll have to stop people changing rows that are being transferred (in case there’s an audit trail that subsequently says that there was, at some point in time, a row that matched the condition but never arrived – and a row that has arrived that didn’t match the final condition of the rows that disappeared).

Somehow your code needs to lock the set of rows to be transferred and then transfer those rows and eliminate them. There are two “obvious” and simple strategies – readers are invited to propose others (or criticise the two I – or any of the comments – suggest). I’ll start with a simple data setup for testing:


create table t1
as
select  object_id, object_name, owner
from    all_objects
;

alter table t1 add constraint t1_pk primary key(object_id);

create table t2
as
select  * from t1
where   rownum = 0
;

execute dbms_stats.gather_table_stats(user,'t1')
execute dbms_stats.gather_table_stats(user,'t2')

Option 1:

The simplest approach is often the best – until, perhaps, you spot the drawbacks – do a basic delete of the data to be transferred (which handles the locking) but wrap the statement in a PL/SQL block that captures the data (using the returning clause) and then inserts it into the target table as efficiently as possible. With thanks to Andrew Sayer who prompted this blog post:

declare
        type t1_rows is table of t1%rowtype;
        t1_deleted t1_rows;

begin
        delete from t1 where owner = 'SYSTEM'
        returning object_id, object_name, owner bulk collect into t1_deleted;

        forall i in 1..t1_deleted.count
                insert into t2 values t1_deleted(i);

        commit;
end;
/

The drawback to this, of course, is that if the volume to be transferred is large (where “large” is probably a fairly subjective measure) then you might not want to risk the volume of memory (PGA) it takes to gather all the data with the bulk collect.

Option 2:

For large volumes of data we could reduce the threat to the PGA by gathering only the rowids of the rows to be transferred (locking the rows as we do so) then do the insert and delete based on the rowids:

declare
        type rid_type is table of rowid;
        r rid_type;

        cursor c1 is select rowid from t1 where owner = 'SYSTEM' for update;

begin
        open c1;
        fetch c1 bulk collect into r;
        close c1;

        forall i in 1..r.count
                insert into t2 select * from t1 where rowid = r(i);

        forall i in 1..r.count
                delete from t1 where rowid = r(i);

        commit;
end;
/

Note, particularly, the “for update” in the driving select.

Inevitably there is a drawback to this strategy as well (on top of the threat that the requirement for memory might still be very large even when the return set is restricted to just rowids). We visit the source data (possibly through a convenient index and avoid visiting the table, of course) to collect rowids; then we visit the data again by rowid (which is usually quite efficient) to copy it, then we visit it again (by rowid) to delete it. That’s potentially a significant increase in buffer cache activity (especially latching) over the simple “delete returning” strategy; moreover the first strategy gives Oracle the option to use the index-driven optimisation for maintaining indexes and this method doesn’t. You might note, by the way, that you could include an “order by rowid” clause on the select; depending on your data distribution and indexes this might reduce the volume of random I/O you have to do as Oracle re-visits the table for the inserts and deletes.
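For illustration, the driving cursor with that optional ordering might be declared like this (a minor variation on the cursor in the code below, not taken from the original post):

        cursor c1 is select rowid from t1 where owner = 'SYSTEM' order by rowid for update;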

We can address the PGA threat, of course, by fetching the rowids with a limit:


declare
        type rid_type is table of rowid;
        r rid_type;

        cursor c1 is select rowid from t1 where owner = 'SYSTEM' for update;

begin
        open c1;

--      dbms_lock.sleep(60);

        loop
                fetch c1 bulk collect into r limit 5;

                forall i in 1..r.count
                        insert into t2 select * from t1 where rowid = r(i);

                forall i in 1..r.count
                        delete from t1 where rowid = r(i);

                exit when r.count != 5;
        end loop;

        close c1; 

        commit;
end;
/

One thing to be aware of is that even though we fetch the rowids in small batches we lock all the relevant rows when we open the cursor, so we don’t run into the problem of inserting thousands of rows into t2 and then finding that the next batch we select from t1 has been changed or deleted by another session. (The commented-out call to dbms_lock.sleep() was something I included as a way of checking that this claim was true.) This doesn’t stop us running into a locking (or deadlocking) problem, of course; if it takes us 10 seconds to lock 1M rows in our select for update another user might manage to lock our millionth row before we get there; if, a few seconds later, that session gets stuck in a TX/6 wait trying to lock one of our locked rows while we are waiting in a TX/6 wait for our millionth row, our session will time out after 3 further seconds with an ORA-00060 deadlock error.

The limit of 5 is just for demonstration purposes, of course – there were 9 rows in all_objects that matched the select predicate; in a production system I’d probably raise the limit as high as 255 (which seems to be the limit of Oracle’s internal array-processing).

You’ll notice, of course, that we can’t use this limited fetch approach with the delete command – the entire delete would take place as we opened the equivalent cursor and, though we can use the bulk collect with the returning clause, there is no syntax that allows something like the fetch with limit to take place.

Discarded option

My first thought was to play around with the AS OF SCN clause.  Select the current SCN from v$database and then do things like delete “as of scn”, or “select for update as of scn” – there were ways of getting close, but invariably I ended up running into Oracle error: “ORA-08187: snapshot expression not allowed here”. But maybe someone else can come up with a way of doing this that doesn’t add significant overheads and doesn’t allow for inconsistent results.


Oracle CORR Function with Examples

Complete IT Professional - Tue, 2016-11-22 05:00
In this article, I’ll explain what the Oracle CORR function does and show you some examples. Purpose of the Oracle CORR Function The Oracle CORR function returns the “coefficient of correlation” of a set of number pairs. What is a coefficient of correlation? Here’s a definition of what it means. This function can be used […]
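As a quick, hedged illustration (using the standard HR sample schema rather than anything from the excerpt above), CORR can be used like this:

-- correlation between salary and commission for commissioned employees (HR sample schema)
select corr(salary, commission_pct) as corr_salary_comm
from   hr.employees
where  commission_pct is not null;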
Categories: Development

Catalog Validation: Why, What, When, Where and How?

Rittman Mead Consulting - Tue, 2016-11-22 02:04

One of the features everybody "loved" about OBIEE 11g was the Global Unique Identifier (GUID), used to recognize users and groups based on an identifier that could be different from the username. The original aim of GUIDs was to distinguish different users sharing the same username but coming from multiple Authentication Providers.

GUID management could be tricky, especially when the GUIDs were not in sync between different environments, and could cause a wide range of errors such as the inability to log in or to see parts of the catalog.

[2016-10-20T09:19:04.000+02:00] [OBIPS] [ERROR:1] [] [saw.security.validate.indexes] [ecid: 0058cGJgGOkBh4sawh3j6G0001QC00000B,0] [tid: 2002437888] XXXX's guid 0A8AC9E0811D11E4AF4FE155B36CBFFD in catalog doesn't match guid 49BB3BB0629311E5BFFE71BB91F31C2B in backend! Aborting! Please UpdateGuids!

After checking the Presentation Services logs (sawlog.log), the solution for most of those errors was simply regenerating the GUIDs. The GUID regeneration method, however, isn't easily done in a production system since it requires downtime (a restart of both the Oracle BI Server and Presentation Services).
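For reference, the 11g GUID refresh procedure involved roughly the following steps (parameter names as documented for 11g; treat this as a sketch rather than a complete runbook):

# 1. In NQSConfig.INI set:
FMW_UPDATE_ROLE_AND_USER_REF_GUIDS = YES;
# 2. In instanceconfig.xml add, under <ServerInstance><Catalog>:
<UpdateAccountGUIDs>UpdateAndExit</UpdateAccountGUIDs>
# 3. Restart the BI Server and Presentation Services, then revert both settings and restart again.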

Why Would You Run Catalog Validation?

You may ask yourself:

Why is he talking about GUIDs when they have been removed in OBIEE 12c?

And you would be perfectly correct. GUID misalignment is no longer a problem in OBIEE 12c, but historically it was only one of the issues causing catalog corruption that would afterwards require a catalog validation.

Even without GUIDs, catalog corruption is still something that can happen in OBIEE 12c: objects (e.g. analyses, dashboards, agents) owned by deleted users, broken links, and corrupted files on the server are only some of the issues that can be present in any OBIEE implementation, no matter which version is installed.
Most of the time corrupted catalogs generate errors that are difficult to diagnose, and manual fixing is not always possible and never easy.

The Catalog Validation process, available since OBIEE 11g, is very powerful since it provides a detailed analysis - and an automated fix, if configured - of all the catalog corruptions.

What is Catalog Validation?

As per Oracle's documentation, the Catalog Validation (CV) procedure does the following checks:

  • Ensures that each object in the catalog is larger than zero bytes: any object of zero bytes is probably the result of corruption and should be removed.
  • Ensures that each item in the catalog has a valid corresponding .atr file: the .atr file contains the properties (permissions, ownership, creation date, modification date etc.) of any object in the catalog. An object without a related .atr file is not visible in OBIEE's front-end.
  • Ensures that each link in the catalog is valid: links to deleted or renamed dashboards and analysis will cause an error when clicked.
  • Ensures that the files in the account cache are valid: this step checks that all the accounts are valid and the cache entries (storing user related information) are up to date.
  • Ensures that all XML objects in the catalog pass schema validation: every object (dashboard, analysis, prompt etc.) in the catalog is stored as XML file. This step checks that the XML is valid.
  • Attempts to repair object names that were damaged by ftp programs: moving catalog objects using ftp programs could corrupt the object name.

When Should You Run Catalog Validation?

I've seen Catalog Validation being used only when problems were raised; however, it is good practice to validate the catalog every time a major change is made that impacts it, or on a schedule in environments where end users can directly create content.

The following is a list of cases when running a Catalog Validation could be useful:

  • Before an upgrade: running CV before an upgrade and ensuring consistency helps avoid problems related to possible corruptions
  • After an upgrade: running CV after an upgrade ensures that the content and security migration worked
  • After a major change: when a major change happens in the catalog, CV ensures no missing links or ownership problems are present
  • After a deployment: executing CV after a production deployment checks the content migration and verifies the security.
  • On a schedule: executing CV regularly on instances where end users can create content, and to verify accounts.

Please note that a catalog can have corruption even if no front-end enhancements have been made; the following are just some examples:

  • Developer account deletions: all objects owned by that account will be flagged as corrupted
  • Security changes: changing/deleting security roles impacts catalog privileges
  • File System corruption: data can be written badly to the file system
  • Content deletions: deleting content makes referring objects corrupted

Sometimes the OBIEE environment continues working as expected even if some of the above corruptions are present. Nevertheless, over a long period they may be the cause of errors, especially if upgrades or security changes are planned.

Where Do You Run Catalog Validation?

Catalog Validation can be run on any available OBIEE instance; however, the following use cases could be particularly interesting:

  • Validating the development catalog: once the consistency of the development catalog is ensured, it can then be migrated forward to production
  • Validating the production (or smoke test) catalog: validating the production catalog ensures that code promotions happened consistently, that user homes are valid and that no objects (user created or promoted) are broken.

One particularity to note is that if you run CV against a production catalog in a different environment (e.g. development) with a different security store, then many accounts and their related content could be flagged as non-existent and deleted. As a general rule CV should be run on environments sharing the same security store as the one where the catalog is sourced from, allowing a genuine check of the security settings.

Performing a Catalog Validation in a production environment is not always possible due to the process restarts required; a smoke test environment sharing the same security settings would be the perfect target for the test. When running Catalog Validation on a live catalog, or when taking a catalog backup, ensure that "Maintenance Mode" is activated: this flag (found under the Administration page in OBIEE's front-end) ensures that no changes can be made to the catalog during the check or upgrade.

Maintenance Mode

How Do You Run Catalog Validation?

In order to run Catalog Validation you need to:

  • Stop Presentation Service[s] (obips): stopping the component can be achieved either in Enterprise Manager or via the command line. The command line syntax has changed between OBIEE 11g and 12c; you can find the two statements in the code below (and a combined sketch of the whole procedure after this list).
# 11g Syntax
$INSTANCE_HOME/bin/opmnctl stopproc ias-component=obips1
# 12c Syntax
$INSTANCE_HOME/bitools/bin/stop.sh -i OBIPS
  • Create a backup of the catalog: when performing a catalog backup, 7-Zip should be the chosen tool. WinZip has known problems with catalog files (see Oracle's doc, chapter "17.10 Archiving and Unarchiving Using Catalog Manager").
  • Create a backup of the instanceconfig.xml file (under $INSTANCE_HOME/config/fmwconfig/biconfig/OBIPS)
  • Change the instanceconfig.xml file to include the validation tags explained in the following section
  • Start Presentation Service[s]: like the stop operation, this can be performed either via EM or the command line. Below is the code for 11g and 12c
# 11g Syntax
$INSTANCE_HOME/bin/opmnctl startproc ias-component=obips1
# 12c Syntax
$INSTANCE_HOME/bitools/bin/start.sh -i OBIPS
  • Repeat the steps above until the catalog is fully validated: as explained in the section below, several different assessments and automated fixes can be performed. The sawlog.log files will contain entries when corrupted objects are present in the catalog. A catalog is fully validated when no corrupted objects are found during CV.
  • Stop Presentation Service[s]
  • Restore original instanceconfig.xml file
  • Start Presentation Service[s]
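Putting the stop/backup/edit/start steps together on a 12c instance, a rough sketch could look like the following (paths, the service instance name and backup locations are assumptions to adapt to your installation; tar is used here instead of 7-Zip since the commands assume a Linux host):

# stop Presentation Services
$INSTANCE_HOME/bitools/bin/stop.sh -i OBIPS
# back up the catalog and the current instanceconfig.xml (example paths)
tar -czf /backup/catalog_$(date +%Y%m%d).tar.gz $INSTANCE_HOME/bidata/service_instances/ssi/metadata/content/catalog
cp $INSTANCE_HOME/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml /backup/instanceconfig.xml.orig
# add the validation tags described below, then restart
vi $INSTANCE_HOME/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml
$INSTANCE_HOME/bitools/bin/start.sh -i OBIPS
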
Catalog Validation configuration

The following tags must be inserted inside the <Catalog> tag under <ServerInstance>.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>  
    <!-- Oracle Business Intelligence Presentation Services Configuration File -->
    <WebConfig xmlns="oracle.bi.presentation.services/config/v1.1">
    <ServerInstance>
        [...]
        <Catalog>
              <Validate>OnStartupAndExit</Validate>
              <ValidateAccounts>Clean</ValidateAccounts>
              <ValidateHomes>Report</ValidateHomes>
              <ValidateItems>Clean</ValidateItems>
              <ValidateLinks>Clean</ValidateLinks>
        </Catalog>
        [...]
    </ServerInstance>
</WebConfig>  

The tags do the following. See below for an explanation of the values that can be specified:

  • Validate: Main configuration tag. Possible values are
    • None: no Catalog Validation is going to happen; however, all the privileges and each object's ACLs are cleaned of non-existing accounts
    • OnStartupAndExit: Presentation Services starts, performs the validation based on the following tags, and stops. This process can be reiterated multiple times with different options for each element.
  • ValidateAccounts: Verifies the consistency of users, roles and groups.
  • ValidateHomes: Verifies all users' homes; it is executed only if ValidateAccounts is set to Report or Clean
  • ValidateItems: Verifies that catalog items are consistent - size greater than zero and valid XML.
  • ValidateLinks: Verifies the consistency of all links in the catalog (e.g. all analyses contained in a dashboard).

The accepted values for all settings except Validate are the following:

  • None: no validation will be performed
  • Report: a log entry is written for every inconsistent item in the sawlog.log file under $INSTANCE_HOME/servers/obips1/logs
  • Clean: does the same as Report, plus removes the inconsistent object from the catalog.

As you understand the "Clean" option isn't suggested for all tags, you don't want a dashboard to be deleted only because the owner doesn't exist anymore, but it is the desired choice when you need to remove all the old or corrupted user homes. The "Report" option on the other side provides a way of logging all the corrupted items and fixing them manually.

Catalog Validation is an extremely useful tool, allowing an automated check (and fix) of all the corrupted items in the catalog. Using Catalog Validation together with the Baseline Validation Tool provides a way of ensuring the correctness of migrations and developments:

  • Running Catalog Validation before the migration to ensure all objects to promote are consistent
  • Running Catalog Validation after the migration to ensure the consistency of all promoted objects and security
  • Running the Baseline Validation Tool between the source and target environments to ensure the expected outputs match.

In summary, Catalog Validation and the Baseline Validation Tool can be considered complementary: the first checks the consistency of catalog objects and security, the second analyses and compares the expected results. Running both alongside any code promotion process should be considered good practice.

Categories: BI & Warehousing

Difference between SQL and PL/SQL

Tom Kyte - Mon, 2016-11-21 17:06
Hi, what is the exact difference between SQL and PL/SQL? How are the two integrated to retrieve data from the database? Thank you.
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator