Feed aggregator

Rows to Columns Concatenating Values

Tom Kyte - Mon, 2017-06-12 18:46
Hi, Seq_num ITEM_CODE ATTRIBUTE_NAME ATTRIBUTE_VALUE LAST_MODIFIED ==================================================================================================================== 25521604 "HTC-UA-BUNDLE" "Alkaline_Battery_Count" 4...
Categories: DBA Blogs

Your actual view on IT Architecture for where to put the logics: inside or outside the database

Tom Kyte - Mon, 2017-06-12 18:46
Hi Tom and team This is the first question ever I am posting here, until now I was using Asktom as a silent reader only - and I want to say thank You a thousand times to keep this site up for so long, helping many people to find explanations and s...
Categories: DBA Blogs

Reverse Key

Tom Kyte - Mon, 2017-06-12 18:46
Tom, I have many primary keys that are sequences. Thinking of using reverse keys. How do I change the index created by the primary key? Rebuild as a reverse key? Thanks. Keith Cutler More info: Reading through the books I received wi...
Categories: DBA Blogs

data retention from 68 table and show it

Tom Kyte - Mon, 2017-06-12 18:46
Hi Tom I have one requirement please let me know the solution Requirement: ============ I have 68 table names that I get from user_tab_columns using some conditions and I achieve it easily But the challenge is I want to show all the co...
Categories: DBA Blogs

MView fast refresh - snapshot too old

Tom Kyte - Mon, 2017-06-12 18:46
Hi, When executing a fast refresh over a DbLink on an MView, a "snapshot too old" error is returned. Refresh is executed at database DB_MV: dbms_mview.refresh( list => 'MV_TABLE_NAME'); Master table is located on DB_MASTER At execut...
Categories: DBA Blogs

Table creation script on a Select statement takes longer than Select Statement time

Tom Kyte - Mon, 2017-06-12 18:46
Hi, I have a Select Statement completing in 20 minutes, but when I try to create a Table with the Select statement (more than 10M records) it takes more than 2 hours to complete. I have to use the Table in a PL/SQL program for Data manipulation. I ...
Categories: DBA Blogs

Oracle Releases PeopleSoft Cloud Manager Image 04

PeopleSoft Technology Blog - Mon, 2017-06-12 15:35

PeopleSoft Cloud Manager Image 04 is now available in Oracle Cloud Marketplace.  This new version brings in a number of new enhancements and fixes that customers were asking for.  We’re listening to feedback and working hard to incorporate your ideas into the product as quickly as possible. Here are some of the new features to try out in image 04.  Many thanks to Nagendra Krishnappa for putting this together.

Multiple Middle-Tier Provisioning

The prior images of Cloud Manager had a limitation on the number of middle-tiers that could be provisioned.  With this new image, you can create multiple middle-tiers and connect them all to the same database. In the topology definition, you can now add multiple middle-tier nodes for a topology.  When you add/use this topology in a template, you can configure attributes for each middle-tier individually.

Elastic Search Deployment and Configuration

In image 03, Cloud Manager was deploying the Elastic Search server, but users had to manually configure and integrate it with the environment.  In image 04, Cloud Manager can now automatically integrate the Elastic Search server - one less administrative task for you. 

Support Lift of Campus Solutions On-Premise Environments

If you are using Campus Solutions, we now have the capability to migrate (“Lift and Shift”) your on-premise environment to Oracle Cloud.  Cloud Manager’s “Lift” utility will pack up your on-premise environment and “Shift” it to Oracle Cloud automatically. 

Configurable Persistent Boot Volume for Deployed Instances 

Boot volumes of PeopleSoft VMs were not configurable, thereby limiting the size of boot volumes to a standard image size.  You can now modify this setting to specify the larger sizes that you may need.

Security Updates

A couple of security fixes have been addressed in this release, related to masking of sensitive data and hardening rules that control network access between provisioned VMs.

Bug Fixes

This image also addresses improvements and fixes in the areas of DBCS provisioning and environment management.

New Oracle Linux image for Cloud Manager

Along with the latest Cloud Manager image, we now have a reference Oracle Linux image to get you started quickly.  You can find it in Marketplace: PeopleSoft Linux Image for Cloud Manager.  Configure this image as the Linux image in Cloud Manager Settings and it will be used for all instance provisioning.  

You may also create your own custom image starting with Cloud Manager image 04.  To do that, first deploy an instance of the reference Oracle Linux image.  Next, connect to it and install/upgrade required packages, modify kernel parameters and customize configurations to suit your needs.  When you are done with customizations, create a new image from it and use it with Cloud Manager.  You can find more details about creating a custom Linux image for PeopleSoft Cloud Manager in the Creating a Custom Linux Image for PeopleSoft Cloud Manager tutorial.

Go ahead and enjoy the latest image, and continue to let us know what additional capabilities you would like to have in Cloud Manager!

 

12cR2 PDB refresh as a poor-man standby?

Yann Neuhaus - Mon, 2017-06-12 15:04
Disclaimer

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and can therefore keep a copy of the datafiles synchronized. I do not recommend using it for disaster recovery in any production environment yet. Even though I'm using only supported features, those features were not designed for this usage, and they are quite new and not yet stable. Disaster Recovery must use safe and proven technologies, and this is why I'll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet:
(tweet screenshot)

Primary PRDPDB

On my primary server, I have a CDB1 container database in Standard Edition with one Pluggable Database: PRDPDB:

21:36:45 SQL> connect sys/oracle@//192.168.78.105/CDB1 as sysdba
Connected.
 
21:36:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO

I need a user there to be able to remote clone from it:

21:36:46 SQL> grant create session, sysoper, dba to C##DBA identified by oracle container=all;
Grant succeeded.

Standby server

On my standby server, I have a CDB1 container database in Standard Edition, where I create a database link to the production CDB using the user created above to connect to it:

21:36:46 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:36:46 SQL> create database link CDB1A connect to C##DBA identified by oracle using '//192.168.78.105/CDB1A';
Database link created.

My standby server runs Grid Infrastructure and has the database created on /acfs, which is an ACFS filesystem. We will see the reason later, when we need to create a PDB snapshot copy. Any filesystem where you can use PDB snapshot copy would be fine.

Standby SBYPDB

The creation of the ‘standby’ pluggable database is done with a simple remote clone command and can be run in 12cR2 with the source PRDPDB still opened read write:


21:36:46 SQL> create pluggable database SBYPDB from PRDPDB@CDB1A
21:36:46 2 file_name_convert=('/u01/oradata/CDB1A/PRDPDB','/acfs/oradata/CDB1/SBYPDB')
21:36:46 3 refresh mode every 1 minutes;
 
Pluggable database created.

The REFRESH MODE is a 12cR2 feature whose primary goal is to maintain and refresh a master clone for further provisioning of thin clones. This clone is refreshed every 1 minute, which means that I expect to have at most a one minute gap between PRDPDB and SBYPDB data, plus the additional time to apply that one minute of redo, of course.
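As a quick sanity check, the refresh settings can be confirmed from the standby CDB root. This is only a sketch, relying on the REFRESH_MODE and REFRESH_INTERVAL columns that 12.2 adds to DBA_PDBS:

-- run from CDB$ROOT on the standby CDB
select pdb_name, status, refresh_mode, refresh_interval
from   dba_pdbs
where  pdb_name = 'SBYPDB';

REFRESH_INTERVAL is expressed in minutes, so a clone created with "refresh mode every 1 minutes" should show an automatic refresh with an interval of 1.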

Activity on the source

I will simulate a crash of the primary server and a failover to the standby, when transactions are running. I’ll run this activity on the SCOTT.EMP table:

21:39:03 SQL> connect scott/tiger@//192.168.78.105/PRDPDB;
Connected.
 
21:39:04 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 17-nov-1981 00:00:00 5000

I’m now updating the date and incrementing the number each second.

21:39:09 SQL> exec for i in 1..150 loop update emp set hiredate=sysdate, sal=sal+1; dbms_lock.sleep(1); commit; end loop
 
PL/SQL procedure successfully completed.

Here is the latest data on the primary server:

21:41:39 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

Crash the primary

The primary server is not supposed to be accessible in case of Disaster Recovery, so I’m crashing it:

21:41:39 SQL> disconnect
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
21:41:39 SQL> connect / as sysdba
Connected.
21:41:39 SQL> shutdown abort
ORACLE instance shut down.

Activate the standby

The datafiles are up to date, with a maximum one minute gap, and all I want is to open the clone and have the application re-connect to it. However, a refreshable clone can be opened only read-only. This makes sense: you cannot apply more redo from the source once it is opened read-write. So my first idea was to stop the refresh mode:

21:41:45 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:41:45 SQL> alter session set container=SBYPDB;
Session altered.
 
21:41:45 SQL> alter pluggable database SBYPDB refresh mode none;
alter pluggable database SBYPDB refresh mode none
*
ERROR at line 1:
ORA-17627: ORA-12514: TNS:listener does not currently know of service requested
in connect descriptor
ORA-17629: Cannot connect to the remote database server

It seems that Oracle tries to do one last refresh when you stop the refresh mode, but this fails here because the source is not accessible. I think that it should be possible to open read-write without applying more redo. However, these refreshable clones were not designed for failover.

I hope that one day we will be able to end refresh mode without connecting to the source, accepting the loss of the latest transactions.

Open Read Only

Without access to the source, I stay in refresh mode and I can only open read only:
21:41:45 SQL> alter pluggable database SBYPDB open read only;
Pluggable database altered.
 
21:41:47 SQL> alter session set container=SBYPDB;
Session altered.
21:41:47 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:41:47 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

My data is there, with my less than one minute gap, but that’s not sufficient for me. I want to run my application on it.

Snapshot Clone

My first idea to get the PDB read write on the standby server is to clone it. Of course, the failover time should not depend on the size of the database, so my idea is to do a snapshot copy, and this is why I've set up my standby CDB on ACFS. Here I'm cloning SBYPDB to the same name as the primary, PRDPDB:

21:41:47 SQL> alter session set container=CDB$ROOT;
Session altered.
 
21:41:47 SQL> create pluggable database PRDPDB from SBYPDB file_name_convert=('SBYPDB','PRDPDB') snapshot copy;
Pluggable database created.
 
21:42:03 SQL> alter pluggable database PRDPDB open;
Pluggable database altered.

I have now my new PRDPDB opened read write with the latest data that was refreshed:

21:42:26 SQL> alter session set container=PRDPDB;
Session altered.
 
21:42:26 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:42:26 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

I’m running on a snapshot here. I can stay like that, or plan to move out of the snapshot in the future. There is no online datafile move in Standard Edition, but there is the online pluggable database relocate. Anyway, running the database in a snapshot is sufficient to run production after a Disaster Recovery, and I can remove SBYPDB so that there is no need to copy the ACFS extents on future writes.
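As a side note, the relocate just mentioned would be executed from the destination CDB, over a database link pointing back to the CDB that currently hosts PRDPDB. Roughly, as a sketch only (the link name and destination file paths are invented for illustration, not part of this setup):

create pluggable database PRDPDB from PRDPDB@CDB1_LINK
  file_name_convert=('/acfs/oradata/CDB1/PRDPDB','/u01/oradata/CDB2/PRDPDB')
  relocate;
 
alter pluggable database PRDPDB open;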

Keep the snapshot

At that point, you could tell me that I cannot snapshot copy a PDB within the same CDB here because I’m in Standard Edition. And that’s right: you can create only one PDB there and you are supposed to get a ‘feature not enabled’ error. But I was able to do it here in my lab, with a small trick to inverse the CON_ID sequence:

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO
4 SBYPDB MOUNTED

Remote snapshot clone should be possible as well. But there’s another licensing issue here. Using ACFS snapshots for the database is not allowed in Standard Edition. This means that this solution probably requires another snapshot solution than the one I’m using here in my lab.

If you don’t fear violating the single-tenant rules, you may prefer to keep SBYPDB for a while. Imagine that you are able to restart the crashed server for a few minutes: then you can do a last refresh of SBYPDB to have a look at the transactions that were lost in the one minute window.

I re-start the crashed CDB:

21:42:26 SQL> connect / as sysdba
Connected to an idle instance.
21:42:27 SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 356519592 bytes
Database Buffers 486539264 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.

and now, on my standby server, I can finally stop the refresh mode:

21:42:51 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:42:52 SQL> alter pluggable database SBYPDB close;
Pluggable database altered.
 
21:42:52 SQL> alter session set container=SBYPDB;
Session altered.
 
21:42:52 SQL> alter pluggable database SBYPDB refresh mode none;
Pluggable database altered.

Be careful not to have jobs or services starting here because your production is now on the snapshot clone PRDPDB running on the same server. Let’s open it:

21:43:02 SQL> alter pluggable database SBYPDB open restricted;
Pluggable database altered.
 
21:43:24 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

And here we are with the data at the moment of the crash. Then, the application owner can manually check what was missed between the last refresh (which made its way to PRDPDB) and the crash (visible in SBYPDB).

Unplug/Plug

I was not very satisfied by the snapshot clone because of the limitations in Standard Edition, which is where this solution may be interesting. I have the datafiles but cannot open SBYPDB read write. I tried to unplug it but cannot, because of the refresh mode:

SQL> alter pluggable database SBYPDB unplug into '/tmp/tmp.xml';
 
Error starting at line : 1 in command -
alter pluggable database SBYPDB unplug into '/tmp/tmp.xml'
Error report -
ORA-01113: file 23 needs media recovery
ORA-01110: data file 23: '/acfs/oradata/CDB1/SBYPDB/undotbs01.dbf'
01113. 00000 - "file %s needs media recovery"
*Cause: An attempt was made to online or open a database with a file that
is in need of media recovery.
*Action: First apply media recovery to the file.

I know that I don’t need more recovery. So let’s unplug it in another way:

SQL> alter pluggable database SBYPDB open read only;
Pluggable database SBYPDB altered.
 
SQL> exec dbms_pdb.describe('/tmp/tmp.xml','SBYPDB');
PL/SQL procedure successfully completed.

Then drop it but keep the datafiles:

SQL> alter pluggable database SBYPDB close;
Pluggable database SBYPDB altered.
 
SQL> drop pluggable database SBYPDB;
Pluggable database SBYPDB dropped.

And plug it back:

SQL> create pluggable database SBYPDB using '/tmp/tmp.xml';
Pluggable database SBYPDB created.
 
SQL> alter pluggable database SBYPDB open;
Pluggable database SBYPDB altered.

Here it is. This takes a bit longer than the snapshot solution, but it is still a way to activate the ‘standby’ PDB without copying datafiles.

So what?

All the new 12cR2 multitenant features are available in all Editions, which is very good. Here with ALTER PLUGGABLE DATABASE … REFRESH we have log shipping and apply, for free in Standard Edition, at PDB level. And I’ve tested two ways to open this standby PDB in case of disaster recovery. I’m using only supported features here, but be careful that those features were not designed for this goal. The normal operations on refreshable clone require that the remote CDB is accessible. But there are workarounds here because you can describe/drop/plug or snapshot clone from a PDB that you can open read only.

 

The post 12cR2 PDB refresh as a poor-man standby? appeared first on Blog dbi services.

WebLogic, Docker, OCCS: Most Watched 2 Minute Tech Tips - June 5-11, 2017

OTN TechBlog - Mon, 2017-06-12 10:22

Sure, you can read through page after page of documentation. But why bother when you can get useful technical tips in 2-minute video chunks? Here's this week's list of the most-watched Tech Tip videos.

And if you have a tip of your own to share, let's make it happen. Contact me: bob.rhubart@oracle.com

1. Running WebLogic Applications on Docker using the OCCS | Craig Barr
   May 12, 2017. Last week: #7. 5th Top 10 appearance.
2. New Features in Oracle Database 12.2 | Chris Saxon
   February 22, 2017. Last week: #3. 16th Top 10 appearance.
3. Heap Tables and Index Organized Tables in Oracle | Chris Saxon
   March 4, 2015. Last week: #6. 49th Top 10 appearance.
4. Why Learn Oracle Identity and Access Management | Atul Kumar
   February 24, 2016. Last Top 10 appearance: May 29, 2017. 22nd Top 10 appearance.
5. When to Use Oracle SOA Cloud Service | Robert van Molken
   December 1, 2016. Last week: #2. 20th Top 10 appearance.
6. Using the Declarative Features in Oracle APEX | Joel Kallman
   March 5, 2015. Last Top 10 appearance: March 21, 2016. 2nd Top 10 appearance.
7. Programming Languages for the Raspberry Pi | Lonneke Dikmans
   August 12, 2015. Last week: #10. 28th Top 10 appearance.
8. Oracle GoldenGate for Big Data | Michael Rainey
   August 12, 2016. Last Top 10 appearance: May 22, 2017. 12th Top 10 appearance.
9. Oracle Function Result Cache | Steven Feuerstein
   February 11, 2016. Last Top 10 appearance: September 12, 2016. 5th Top 10 appearance.
10. Oracle EBS integration with Identity and Access Management for SSO | Atul Kumar
    February 14, 2017. Last Top 10 appearance: May 8, 2017. 6th Top 10 appearance.

Last Week's Top 10: Integration, DevOps, Testing: Most-Watched 2 Minute Tech Tips - May 29-June 4, 2017

ETL using Oracle DBMS_HS_PASSTHROUGH and SQL Server

Amis Blog - Mon, 2017-06-12 10:04

While I prefer a “loosely coupled architecture” for replication between Oracle and SQL Server, sometimes a direct (database) link cannot be avoided. By using DBMS_HS_PASSTHROUGH for data extraction, the two other ETL processes (transformation and load) can be configured and administered with more flexibility, providing an almost acceptable level of “loosely coupled processing“.
Consider this as a really simple ETL config:

    Extract: Select SQL Server data with native SQL, using DBMS_HS_PASSTHROUGH and a PIPELINED function.
    Transform: Define a view on top of the function and transform column_names and column datatypes correctly.
    Load: SQL> insert into oracle_table select * from oracle_view;

When you use DBMS_HS_PASSTHROUGH Oracle doesn’t interpret the data you receive from SQL Server. By default this is done by the dg4odbc process, and the performance benefit in bypassing this process is considerable. Also, you’re not restricted by the limitations of dg4odbc and can transform the data into anything you need.

Like dg4odbc, DBMS_HS_PASSTHROUGH depends on Heterogeneous Services (a component built into Oracle) to provide the connectivity between Oracle and SQL Server. Installation of unixODBC and a freeTDS driver on Linux is required to set up the SQL Server datasource… installation and configuration steps can be found here and here. DBMS_HS_PASSTHROUGH is invoked through an Oracle database link. The package conceptually resides at SQL Server but, in reality, calls to this package are intercepted and mapped to one or more Heterogeneous Services calls. The freeTDS driver, in turn, maps these calls to the API of SQL Server. More about DBMS_HS_PASSTHROUGH here.
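The database link itself is an ordinary link that resolves to a tnsnames.ora entry flagged with (HS=OK) for the dg4odbc agent. A minimal sketch, in which the link name, DSN alias and credentials are placeholders rather than the configuration used in the example below:

-- 'MSSQL' is an assumed tnsnames.ora alias resolving to the dg4odbc listener entry (HS=OK)
create database link MSSQL
  connect to "sqluser" identified by "sql_password"
  using 'MSSQL';

-- quick smoke test against the table used further down (identifiers are typically case sensitive through the gateway)
select count(*) from "ObjSite"@MSSQL;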

Next is a short example of how to set up data extraction from SQL Server with DBMS_HS_PASSTHROUGH, and data transformation within the definition of a view. In this example the SQL Server column names differ from the Oracle ones in case, length, name and/or datatype, and are transformed by the view. NLS_DATE_FORMAT synchronization is an exception… it’s done in the extract package itself. The reason is that all dates in this particular SQL Server database use a specific format, and it doesn’t really obscure the code. But if you choose to keep all transformation code out of the extract package, you could create types with VARCHAR2’s only, and put all your to_number, to_date and to_timestamp conversion code in the view definition.

Extract

-- create Oracle types for uninterpreted SQL Server data
CREATE OR REPLACE TYPE E01_REC 
AS OBJECT(
  C01    NUMBER(8),
  C02    VARCHAR2(25 CHAR),
  C03    VARCHAR2(3 CHAR),
  C04    NUMBER(8),
  C05    DATE,
  C06    DATE );
/

CREATE OR REPLACE TYPE E01_TAB AS TABLE OF E01_REC;
/

-- create the extract package
CREATE OR REPLACE PACKAGE E AUTHID DEFINER AS
--------------------------------------------------------- 
  FUNCTION E01 RETURN E01_TAB PIPELINED;
---------------------------------------------------------
END E;
/

-- create the extract package body
CREATE OR REPLACE PACKAGE BODY E AS
  v_cursor   BINARY_INTEGER;  
  v_out_e01  E01_REC:=E01_REC(NULL,NULL,NULL,NULL,NULL,NULL);
-------------------------------------------------------------------------
  v_stat_e01 VARCHAR2(100):= 'Select SiteID
                                   , SiteName
                                   , SiteMnemonic
                                   , PointRefNumber
                                   , OpeningDate
                                   , ClosingDate
                               From ObjSite';
                                 
-------------------------------------------------------------------------
FUNCTION E01
RETURN E01_TAB PIPELINED
  IS
BEGIN
  execute immediate 'alter session set NLS_DATE_FORMAT = ''YYYY-MM-DD HH24:MI:SS'' ';
  v_cursor := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@<DBLINK>;
  DBMS_HS_PASSTHROUGH.PARSE@<DBLINK>(v_cursor,v_stat_e01);
  WHILE DBMS_HS_PASSTHROUGH.FETCH_ROW@<DBLINK>(v_cursor) > 0
    LOOP
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,1,v_out_e01.c01);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,2,v_out_e01.c02);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,3,v_out_e01.c03);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,4,v_out_e01.c04);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,5,v_out_e01.c05);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,6,v_out_e01.c06);
    PIPE ROW(v_out_e01);
    END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
  RETURN;
EXCEPTION
  WHEN NO_DATA_NEEDED THEN
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
  WHEN OTHERS THEN
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
    DBMS_OUTPUT.PUT_LINE(SQLERRM||'--'||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
  RAISE;
END E01;
------------------------------------------------------------------------
END E;
/

Transform

CREATE OR REPLACE FORCE VIEW SITE_VW
AS 
SELECT TO_NUMBER(C01) SITEID,
       C02            STATIONNAME,
       C03            SITEMNEMONIC,
       TO_NUMBER(C04) STATIONID,
       C05            OPENINGDATE,
       C06            CLOSINGDATE
FROM TABLE(E.E01);

Load

INSERT INTO SITE SELECT * FROM SITE_VW;
COMMIT;

The post ETL using Oracle DBMS_HS_PASSTHROUGH and SQL Server appeared first on AMIS Oracle and Java Blog.

Unify: See Your Data From Every Perspective

Rittman Mead Consulting - Mon, 2017-06-12 09:09

Ad hoc access to accurate and secured data has always been the goal of business intelligence platforms. Yet, most fall short of balancing the needs of business users with the concerns of IT.

Rittman Mead has worked with hundreds of organizations representing all points on the spectrum between agility and governance. Today we're excited to announce our new product, Unify, which allows Tableau users to directly connect to OBIEE, providing the best of both worlds.

Governed Data Discovery

Business users get Tableau's intuitive data discovery features and the agility they need to easily blend their departmental data without waiting on IT to incorporate it into a warehouse. IT gets peace of mind, knowing their mission-critical data is protected by OBIEE's semantic layer and row-level security.

Unify Essentials

Unify runs as a desktop app, making it easy for departmental Tableau users to connect to a central OBIEE server. Unify also has a server option that runs alongside OBIEE, for organizations with a large Tableau user base or those using Tableau Server.

Desktop installation and configuration is simple. Once installed, users can query OBIEE from within Tableau with just a few clicks. Have a look at these short videos demonstrating setup and use of Unify.

Available Today

Download your free 7-day trial of Unify Desktop here.

No Tableau Desktop license? No problem. Unify is compatible with Tableau Public.

Categories: BI & Warehousing

Log Buffer #513: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2017-06-12 08:57

This Log Buffer Edition includes blog posts from Oracle, SQL Server and MySQL.

Oracle:

In 12.2 you can: Convert a simple table to partitioned with multi-column automatic list partitions, partially indexed, with read only segments, filtering out unwanted data, online in one operation.

12c How to UN Expire Password for CDB Users

rman auxiliary (for duplicate)

How long will Oracle APEX remain an included feature of the Oracle Database?

How to easily delete files in the Oracle Cloud using CloudBerry Explorer

SQL Server:

Understanding OPENJSON and FOR JSON in Azure and SQL 2016

How to setup Machine Learning Services in SQL Server 2017 CTP2

The OUTPUT Clause for the MERGE Statements

Simple script to backup all SQL Server databases

Working with Windows Containers and Docker: Into your Stride

MySQL:

MySQL on Docker: Running Galera Cluster on Kubernetes

Summary of recent performance tests for MySQL 5.6, 5.7 and 8

A Quick Look at Parallel Rsync and How it Can Save a System Hours

Docker, MySQL and Experience in Containers

HopsFS running on top of MySQL Cluster 7.5 wins IEEE Scale Challenge 2017

Categories: DBA Blogs

dbms_sqldiag

Jonathan Lewis - Mon, 2017-06-12 06:48

If you’re familiar with SQL Profiles and SQL Baselines you may also know about SQL Patches – a feature that allows you to construct hints that you can attach to SQL statements at run-time without changing the code. Oracle 12c Release 2 introduces a couple of important changes to this feature:

  • It’s now official – the feature has been moved from package dbms_sqldiag_internal to package dbms_sqldiag.
  • The limitation of 500 characters has been removed from the hint text – it’s now a CLOB column.

H/T to Nigel Bayliss for including this detail in his presentation to the UKOUG last week, and pointing out that it’s also available for Standard Edition.

There are a couple of other little changes, as you can see from the two extracts below of the 12.2 declarations of dbms_sqldiag and dbms_sqldiag_internal:


dbms_sqldiag
------------
FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

dbms_sqldiag_internal
---------------------
FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

  • The function name changes from i_create_patch to create_patch when exposed in dbms_sqldiag.
  • There are two versions of the function – one that requires you to supply the exact SQL text, and a new version that allows you to supply an SQL ID.
  • The internal function also adds a creator to the existing parameter list – and it doesn’t have a default, so if you’ve already got some code that uses the internal version it’s not going to work on 12.2 until you change it.
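To make the usage concrete, here is a minimal sketch of creating a patch through the public interface, based on the sql_id signature listed above (the SQL ID, hint text and patch name are placeholders, not taken from any real system):

set serveroutput on

declare
        v_patch_name    varchar2(128);
begin
        -- create_sql_patch returns the name of the patch it has created
        v_patch_name := dbms_sqldiag.create_sql_patch(
                sql_id          => '&m_sql_id',
                hint_text       => to_clob('full(@sel$1 t1@sel$1)'),
                name            => 'demo_patch',
                description     => 'example patch - illustration only'
        );
        dbms_output.put_line('Created patch: ' || v_patch_name);
end;
/

A patch created this way can be removed again with dbms_sqldiag.drop_sql_patch.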

I was prompted to write this note by a tweet asking me if there’s any SQL available to see the contents of an SQL Profile in 11g and 12c. (I published some simple code several years ago for 10g, but Oracle changed the base tables in 11g). The answer is yes, probably on the Internet somewhere, but here’s some code I wrote a couple of years ago to scan the more recent versions of Oracle:

rem
rem     sql_profile_baseline_11g.sql
rem     J.P.Lewis
rem     July 2010
rem

set pagesize 60
set linesize 132
set trimspool on

column hint format a70 wrap word
column signature format 999,999,999,999,999,999,999

break on signature skip 1 on opt_type skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                'Other'
        )       opt_type,
        prf.plan_id,
        extractvalue(value(tab),'.')    hint
from
        (
        select
                /*+ no_eliminate_oby */
                *
        from
                sqlobj$data
        where
                comp_data is not null
        order by
                signature, obj_type, plan_id
        )       prf,
        table(
                xmlsequence(
                        extract(xmltype(prf.comp_data),'/outline_data/hint')
                )
        )       tab
;


This will report the hints associated with SQL Baselines, SQL Profiles, and SQL Patches – all three store the data in the same base table. As a minor variation I also have a query that will report a named profile/baseline/patch, but this requires a join to the sqlobj$ table. As you can see from the substitution variable near the end of the text, the script will prompt you for an object name.


set pagesize 60
set linesize 180
set trimspool on

column  plan_name format a32
column  signature format 999,999,999,999,999,999,999
column  category  format a10
column  hint format a70 wrap word

break on plan_name skip 1 on signature skip 1 on opt_type skip 1 on category skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.plan_name,
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                  'Other'
        )       opt_type,
        prf.category,
        prf.plan_id,
        extractvalue(value(hnt),'.') hint
from
        (
        select
                /*+ no_eliminate_oby */
                so.name         plan_name,
                so.signature,
                so.category,
                so.obj_type,
                so.plan_id,
                sod.comp_data
                from
                        sqlobj$         so,
                        sqlobj$data     sod
                where
                        so.name = '&m_plan_name'
                and     sod.signature = so.signature
                and     sod.category = so.category
                and     sod.obj_type = so.obj_type
                and     sod.plan_id = so.plan_id
                order by
                        signature, obj_type, plan_id
        )       prf,
        table (
                select
                        xmlsequence(
                                extract(xmltype(prf.comp_data),'/outline_data/hint')
                        )
                from
                        dual
        )       hnt
;


Lagniappe:

One of the enhancements that appeared in 12c for SQL Baselines was that the plan the baseline was supposed to produce was stored in the database so that Oracle could check that the baseline would still reproduce the expected plan before applying it. These plans (also generated for Profiles and Patches) are stored in the table sqlobj$plan, and the dbms_xplan package has been enhanced with three new functions to report them:


FUNCTION DISPLAY_SQL_PATCH_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PLAN_BASELINE RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_HANDLE                     VARCHAR2                IN     DEFAULT
 PLAN_NAME                      VARCHAR2                IN     DEFAULT
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PROFILE_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

e.g.
SQL> select * from table(dbms_xplan.display_sql_profile_plan('SYS_SQLPROF_015c9bd3bceb0000'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL text: select        t1.id, t2.id from       t1, t2 where    t1.id between 10000 and
          20000 and     t2.n1 = t1.n1 and       t2.n1 = t2.v2
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL Profile Name: SYS_SQLPROF_015c9bd3bceb0000
Status:           ENABLED
Plan rows:        From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3683239666

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          | 10501 |   287K|   248   (4)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |     0   (0)|          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10002 | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN BUFFERED   |          | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   4 |     PX RECEIVE          |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|   5 |      PX SEND HASH       | :TQ10000 | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       PX BLOCK ITERATOR |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|*  7 |        TABLE ACCESS FULL| T1       | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |     PX RECEIVE          |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   9 |      PX SEND HASH       | :TQ10001 |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  10 |       PX BLOCK ITERATOR |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWC |            |
|* 11 |        TABLE ACCESS FULL| T2       |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T2"."N1"="T1"."N1")
   7 - filter("T1"."ID"<=20000 AND "T1"."ID">=10000)
  11 - filter("T2"."N1"=TO_NUMBER("T2"."V2"))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Disclaimer – I’ve checked only the SQL_PROFILE function call on 12.2, after creating a profile to check that my old 11g report still worked in 12c.

 


KeePass 2.36

Tim Hall - Mon, 2017-06-12 03:12

KeePass 2.36 was released a few days ago. You can download it here.

You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

Cheers

Tim…


Where Did the EBS Certifications Page Go?

Steven Chan - Mon, 2017-06-12 02:00

Regular readers of this blog know that they could find my one-page summary of all important E-Business Suite technology stack component certifications under the "Certifications" link in the sidebar menu.

Since Oracle's migration of our blogging engine to Compendium in April, some people have been having trouble finding the new location.  It's here:

You can access it from the new blog's "Quick Links" button in the top banner:

Due to some restrictions on how the new blogging engine displays content, this page might look a bit different than before.  But regardless, it still lists technology stack certifications for EBS 12.2, 12.1, 12.0, and 11i, covering all three tiers: desktop and mobile clients, application tier servers, and database tier servers.

 

Categories: APPS Blogs

GoldenGate 12.2 TROUBLESHOOTING REPLICAT LAG

Michael Dinh - Sun, 2017-06-11 09:16

Time Since Chkpt and Lag at Chkpt from replicat keep increasing

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
REPLICAT    RUNNING     R_NEW12C    03:49:45      06:37:47    

This occurs due to one of the following reasons:

Delivering a long running transaction
Waiting on a full table scan
Blocked by another session (a quick check for this is sketched after this list)
Primary extract lag or pump lag keeps increasing
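For the blocking case, something along these lines can help (just a sketch; the program filter is an assumption and may need adjusting to match how the replicat apply session actually appears in v$session):

select sid, serial#, program, event, blocking_session, seconds_in_wait
from   v$session
where  blocking_session is not null
   or  lower(program) like '%replicat%';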

Is There a Way to Make Long-running Transactions Checkpoint? (Doc ID 969684.1)

The tradeoff with GROUPTRANSOPS is with efficiency. 
The tradeoff with MAXTRANSOPS is efficiency and transaction integrity. 

There are no long running transactions from the extract, which is monitored using WARNLONGTRANS 15m, CHECKINTERVAL 3m.

grep "Long Running Transaction" dirrpt/E_OLD10G.rpt

My suspicion is FTS, but how to find out?

SQL> r
  1  select
  2  -- SQL_ID,PLAN_HASH_VALUE,
  3  OBJECT_OWNER,OBJECT_NAME, min(TIMESTAMP) min_ts, max(TIMESTAMP) max_ts, count(*) ct
  4  from DBA_HIST_SQL_PLAN
  5  where operation='TABLE ACCESS'
  6  and options='FULL'
  7  and NOT REGEXP_LIKE(object_owner,'SYS|SYSTEM|DBSNMP')
  8  and TIMESTAMP > TO_DATE('01-JUN-2017','DD-MON-YYYY')
  9  group by
 10  -- SQL_ID,PLAN_HASH_VALUE,
 11  OBJECT_OWNER,OBJECT_NAME
 12  order by count(*), OBJECT_OWNER,OBJECT_NAME
 13*
 
OBJECT_OWNER         OBJECT_NAME                    MIN_TS               MAX_TS                       CT
-------------------- ------------------------------ -------------------- -------------------- ----------
XXX                  THISISAREALLYLONGTABLENAME     01-JUN-2017 00:01:00 11-JUN-2017 02:55:36        114

SQL> select index_name from dba_indexes where table_name='THISISAREALLYLONGTABLENAME';
no rows selected
SQL> 

ASC 606/IFRS 15 & Oracle’s Revenue Management Cloud Service (RMCS)

OracleApps Epicenter - Sun, 2017-06-11 06:13
As we know, the entire accounting approach to revenue recognition is undergoing a wholesale re-write. Rather than relying on using a deferred revenue account, you will need to identify and account for performance obligations. WHAT CHANGES ARE BEING INTRODUCED? Revenue recognition is taking a more scenario- and rules-based approach to what can be […]
Categories: APPS Blogs

query for report generation in oracle 11g

Tom Kyte - Sun, 2017-06-11 06:06
how to generate a month-wise, year-wise report for banking?
Categories: DBA Blogs

How solve incident Error?

Tom Kyte - Sun, 2017-06-11 06:06
I encountered an incident error suddenly and my database went down immediately, showing the error message "client connect could not hand-off". I checked the alert log file and searched on the internet as well, but I can't find information for the first argument mentioned in the or...
Categories: DBA Blogs

Unable to install Oracle 11g on Windows 10

Tom Kyte - Sun, 2017-06-11 06:06
Hello Oracle Experts, I am unable to install Oracle 11g on Windows 10 (64 Bit). Error Encountered : ORA 12631 UserName retrieval failed Database instance is created but the .DBF files are not created including the control files. Please note...
Categories: DBA Blogs
