Feed aggregator

ora-01008, what is the bind variable's name?

Tom Kyte - Tue, 2016-07-05 20:26
Good time of day, Tom! I run several SQL via DBMS_sql package. Each of that SQL has a set of bind variables. Is there any feature to get a list of variables' names for given SQL? For instance. I wonder to get a list of ':v_name',':p_result' ...
Categories: DBA Blogs

Generate tree paths for hierarchy

Tom Kyte - Tue, 2016-07-05 20:26
Hello , I have one question which are asked into interview ,To make a tree when user insert a node into table its path get automatically reflected into table Table: Tree ---------------------- node(int) parentNode(int) path(...
Categories: DBA Blogs

LEAST AND GREATEST functions

Tom Kyte - Tue, 2016-07-05 20:26
Hello, I am trying to use the below SQL : SELECT least ( DECODE (:VAR1, 9999, NULL, :VAR1), DECODE (:VAR2,9999, NULL,:VAR2) ) FROM DUAL; VAR1 & VAR2 need to be NUMBERs (not varchar) the above SQL seems to work for all numbers exce...
Categories: DBA Blogs

trigger

Tom Kyte - Tue, 2016-07-05 20:26
Hi, my table is with fist name , last name , status. Now the thing is I want to change the status to "APPROVED" as soon as I made the entry in last name, if last name column is empty status should be default lets say "PENDING". I tried it u...
Categories: DBA Blogs

Compare source and target in a Dbvisit replication

Yann Neuhaus - Tue, 2016-07-05 13:51

You’ve set up a logical replication, and you trust it. But before the target goes into production, it is safer to compare source and target, or at least to count the number of rows.
But the tables are continuously changing, so how can you compare? It’s not so difficult, thanks to the Dbvisit replicate heartbeat table and Oracle flashback query.

Here is the state of the replication, with activity on the source and real-time replication to the target:
| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days
MINE IS running. Currently at plog 368 and SCN 6119128 (07/06/2016 04:15:21).
APPLY IS running. Currently at plog 368 and SCN 6119114 (07/06/2016 04:15:19).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ADDRESSES: 100% Mine:961/961 Unrecov:0/0 Applied:961/961 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.CARD_DETAILS: 100% Mine:894/894 Unrecov:0/0 Applied:894/894 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDER_ITEMS: 100% Mine:5955/5955 Unrecov:0/0 Applied:5955/5955 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.ORDERS: 99% Mine:4781/4781 Unrecov:0/0 Applied:4780/4780 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.INVENTORIES: 100% Mine:5825/5825 Unrecov:0/0 Applied:5825/5825 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
REPOE.LOGON: 99% Mine:6175/6175 Unrecov:0/0 Applied:6173/6173 Conflicts:0/0 Last:06/07/2016 04:12:12/OK
--------------------------------------------------------------------------------------------------------------------------------------------
7 tables listed.

If you want to compare the rows from source and target, you will always see a difference, because modifications on the source arrive on the target a few seconds later.

Source and target SCN

The first thing to do is to determine a consistent point in time where source and target are in the same state. This point in time exists because the redo log is sequential by nature, and commits are done in the same order on the target as on the source. This order is visible with the SCN. The only problem is that on a logical replication the SCNs on source and target are completely different and evolve independently.

The first step is to determine an SCN from the target and an SCN from the source that show the same state of transactions.

But before that, let’s connect to the target and set the environment:

$ sqlplus /nolog @ compare.sql
 
SQL*Plus: Release 11.2.0.2.0 Production on Tue Jul 5 18:15:34 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> define table_owner=REPOE
SQL> define table_name=ORDERS
SQL>
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session set nls_date_format='DD-MON-YYYY HH24:mi:ss';
Session altered.
SQL> alter session set nls_timestamp_format='DD-MON-YYYY HH24:mi:ss';
Session altered.

My example is on the #repattack environment, with Swingbench running on the source, and I’ll compare the ORDERS table.

Heartbeat table

Each Dbvisit replicate configuration comes with a heartbeat table created in the Dbvisit schema on the source and replicated to the target. This table is updated every 10 seconds on the source with a timestamp and the current SCN. This is a great way to check how the replication is working. Here it will be the way to get the SCN information from the source.

Flashback query

Oracle flashback query offers a nice way to get the commit SCN for the rows updated in the heartbeat table. From the target database, this is the commit SCN for the replication transaction (the APPLY process) and it can be displayed along with the SCN from the source transaction that is recorded in the heartbeat table and replicated to the target.

SQL> column versions_startscn new_value scn_target
SQL> column source_scn new_value scn_source
SQL> column mine_process_name format a12
SQL> column versions_starttime format a21
 
SQL> select mine_process_name,wallclock_date,mine_date,source_scn,mine_scn,versions_startscn,versions_starttime,versions_endscn
from DBVREP.DBRSCOMMON_HEARTBEAT versions between timestamp(sysdate-1/24/60) and sysdate
order by versions_endscn nulls last ;
 
MINE_PROCESS WALLCLOCK_DATE MINE_DATE SOURCE_SCN MINE_SCN VERSIONS_STARTSCN VERSIONS_STARTTIME VERSIONS_ENDSCN
------------ -------------------- -------------------- -------------------- -------------------- -------------------- --------------------- --------------------
MINE 06-JUL-2016 04:14:27 06-JUL-2016 04:14:22 6118717 6118661 4791342
MINE 06-JUL-2016 04:14:37 06-JUL-2016 04:14:31 6118786 6118748 4791342 06-JUL-2016 04:11:29 4791376
MINE 06-JUL-2016 04:14:47 06-JUL-2016 04:14:41 6118855 6118821 4791376 06-JUL-2016 04:11:39 4791410
MINE 06-JUL-2016 04:14:57 06-JUL-2016 04:14:51 6118925 6118888 4791410 06-JUL-2016 04:11:49 4791443
MINE 06-JUL-2016 04:15:07 06-JUL-2016 04:15:01 6119011 6118977 4791443 06-JUL-2016 04:11:59 4791479
MINE 06-JUL-2016 04:15:17 06-JUL-2016 04:15:11 6119091 6119059 4791479 06-JUL-2016 04:12:09 4791515
MINE 06-JUL-2016 04:15:27 06-JUL-2016 04:15:21 6119162 6119128 4791515 06-JUL-2016 04:12:19

This shows that the current version of the heartbeat table on the target was committed at SCN 4791515, and we know that this state matches SCN 6119162 on the source. You can choose any pair you want, but the latest will probably be the fastest to query.

Counting rows on source

I’ll use a flashback query to count the rows from the source as of SCN 6119162. I’m doing it in parallel query, but be careful: when the table has high modification activity, there will be a lot of undo blocks to read.

SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*) from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*)
--------------------
775433

Counting rows on target

I’m doing the same from the target, but with SCN 4791515:
SQL> connect system/manager@//192.168.56.67/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*) from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*) from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*) from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*)
--------------------
775433

Good. Same number of rows. This proves that even with constantly changing tables we can find a point of comparison, thanks to the Dbvisit heartbeat table and to Oracle flashback query. If you are replicating with another logical replication product, you can simulate the heartbeat table with a job that updates the current SCN in a single-row table, and replicate it. If your target is not Oracle, then chances are that you cannot do that kind of ‘as of’ query, which means that you need to lock the table on the source for the time of the comparison.
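For a non-Dbvisit setup, a minimal sketch of such a do-it-yourself heartbeat could look like this (the table and job names here are made up for illustration):

```sql
-- Hypothetical single-row heartbeat table, to be replicated to the target
create table app_admin.my_heartbeat (
  id         number primary key,
  source_scn number,
  updated    date
);
insert into app_admin.my_heartbeat values (1, null, null);
commit;

-- A job on the source that records the current SCN every 10 seconds
begin
  dbms_scheduler.create_job(
    job_name        => 'MY_HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'update app_admin.my_heartbeat
                          set source_scn = dbms_flashback.get_system_change_number,
                              updated    = sysdate
                        where id = 1; commit;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    enabled         => true);
end;
/
```

The replicated SOURCE_SCN on the target then plays the same role as the SOURCE_SCN column of the Dbvisit heartbeat table.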

ORA_HASH

If you think that counting the rows is not sufficient, you can compare a hash value computed from the columns. Here is an example.
I get the list of columns, wrap each in the ORA_HASH() function, and sum() them:

SQL> column columns new_value columns
SQL> select listagg('ORA_HASH('||column_name||')','+') within group (order by column_name) columns
2 from dba_tab_columns where owner='&table_owner.' and table_name='&table_name';
old 2: from dba_tab_columns where owner='&table_owner.' and table_name='&table_name'
new 2: from dba_tab_columns where owner='REPOE' and table_name='ORDERS'
 
COLUMNS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_
HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)

With this list defined in a substitution variable, I can compare the sum of hash values:

SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_target
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 4791515
 
COUNT(*) HASH
-------------------- --------------------
775433 317531150805040439
 
SQL> connect system/manager@//192.168.56.66/XE
Connected.
SQL> alter session force parallel query parallel 8;
Session altered.
 
SQL> select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source;
old 1: select count(*),sum(&columns.) hash from "&table_owner."."&table_name." as of scn &scn_source
new 1: select count(*),sum(ORA_HASH(CARD_ID)+ORA_HASH(COST_OF_DELIVERY)+ORA_HASH(CUSTOMER_CLASS)+ORA_HASH(CUSTOMER_ID)+ORA_HASH(DELIVERY_ADDRESS_ID)+ORA_HASH(DELIVERY_TYPE)+ORA_HASH(INVOICE_ADDRESS_ID)+ORA_HASH(ORDER_DATE)+ORA_HASH(ORDER_ID)+ORA_HASH(ORDER_MODE)+ORA_HASH(ORDER_STATUS)+ORA_HASH(ORDER_TOTAL)+ORA_HASH(PROMOTION_ID)+ORA_HASH(SALES_REP_ID)+ORA_HASH(WAIT_TILL_ALL_AVAILABLE)+ORA_HASH(WAREHOUSE_ID)) hash from "REPOE"."ORDERS" as of scn 6119162
 
COUNT(*) HASH
-------------------- --------------------
775433 317531150805040439

Note that this is only an example. You must adapt for your needs: precision of the comparison and performance.

So what?

Comparing source and target is not a bad idea. With Dbvisit replicate, if you defined the replication properly and did the initial import with the SCN provided by the setup wizard, you should not miss any transactions, even when there is a lot of activity on the source, and even without locking the source for the initialisation. But it’s always good to compare, especially before the ‘Go’ decision of a migration done with Dbvisit replicate to lower the downtime (and the stress). Thanks to the heartbeat table and flashback query, a checksum is not too hard to implement.

 

The post Compare source and target in a Dbvisit replication appeared first on Blog dbi services.

Fujitsu and Oracle Team Up to Drive Cloud Computing

Oracle Press Releases - Tue, 2016-07-05 11:41
Press Release
Fujitsu and Oracle Team Up to Drive Cloud Computing Strategic alliance provides robust cloud offering to customers in Japan and their subsidiaries around the world

Tokyo and Redwood Shores, Calif.—Jul 5, 2016

Fujitsu Limited, Oracle Corporation, and Oracle Corporation Japan today announced that they have agreed to form a new strategic alliance to deliver enterprise-grade, world-class cloud services to customers in Japan and their subsidiaries around the world.

In order to take advantage of cloud computing to speed innovation, reduce costs and drive business growth, organizations need IT partners that can deliver the performance, security and management capabilities that are demanded by enterprise workloads. To help organizations in Japan capitalize on this opportunity and confidently move enterprise workloads to the cloud, Oracle Cloud Application and Platform services—such as Oracle Database Cloud Service and Oracle Human Capital Management (HCM) Cloud—will be powered by Fujitsu’s datacenters in Japan. Under the new strategic alliance, Fujitsu will work to drive sales of robust cloud offerings to companies in Japan and their subsidiaries around the world.

By bringing Oracle Cloud Application and Platform services to FUJITSU Cloud Service K5, Fujitsu and Oracle will provide a high-performance cloud environment to meet the IT and business needs of customers. Specifically, Fujitsu will install the Oracle Cloud services in its datacenters in Japan and connect them to its K5 service in order to deliver enterprise-grade cloud services. The first Oracle Cloud Application that will be offered to Fujitsu customers under the joint offering is Oracle HCM Cloud. As part of the agreement, Fujitsu will implement Oracle HCM Cloud to gain unprecedented insight into its workforce throughout the company’s worldwide network of offices.

"We at Fujitsu support the digital transformation of our customers, and aim to contribute to optimized customer systems and business growth with the roll out of our Digital Business Platform MetaArc," said Shingo Kagawa, SEVP, Head of Digital Services Business & CTO, Fujitsu Limited. "In particular, we offer the core cloud service on MetaArc, K5, which addresses systems of engagement (SoE)(*1) and systems of record (SoR)(*2). Oracle is a leader in Japan's database market segment and possesses strong capabilities in the SoR domain. Now, as we look to strengthen MetaArc and K5, taking part in this strategic alliance with Oracle will work to meet the cloud needs of our customers."

“In order to realize the full business potential of cloud computing, organizations need secure, reliable and high-performing cloud solutions,” said Edward Screven, Chief Corporate Architect, Oracle. “For over three decades, Oracle and Fujitsu have worked together using our combined R&D, product depth and global reach to create innovative solutions enabling customers to scale their organizations and achieve a competitive advantage. Oracle’s new strategic alliance with Fujitsu will allow companies in Japan to take advantage of an integrated cloud offering to support their transition to the cloud.”

“We strongly believe this cloud alliance will support Japanese companies to drive digital transformation,” said Hiroshige Sugihara, President and CEO, Oracle Corporation Japan. “This will be a gateway for customers to achieve standardization, modernization, and globalization.  This initiative will differentiate us from other cloud providers by emphasizing real enterprise cloud solutions, while offering Japanese companies access to best of breed technology in the new Cloud era.”

The combination of these innovative solutions, including Oracle Database Cloud Service, Oracle HCM Cloud, and K5, will enable Fujitsu and Oracle to deliver mission-critical systems over a cloud environment within Fujitsu’s datacenters while maintaining the high levels of performance and reliability that had previously been achieved in on-premise environments. Furthermore, with the Oracle Cloud provided from Fujitsu’s state-of-the-art datacenters, which boast a high level of capabilities in Japan, customers using K5 or Fujitsu’s hosting services will have access to invaluable cloud services.

Contact Info
Fujitsu Limited
Public and Investor Relations Division
Candice van der Laan
Oracle
+1.650.464.3186
candice.van.der.laan@oracle.com
Junko Ishikawa, Norihito Yachita
Oracle Japan
pr-room_jp@oracle.com
Notes

1. Systems-of-Engagement (SoE)
Systems that implement digital transformations, including business-process transformation and new-business development.

2. Systems-of-Record (SoR)
Existing systems that record company data and perform business processes.

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 156,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.7 trillion yen (US$41 billion) for the fiscal year ended March 31, 2016. For more information, please see www.fujitsu.com.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Oracle Japan

Oracle Corporation Japan was established in 1985 as Oracle Corporation’s subsidiary in Japan. With the goal of becoming the number one cloud company, it provides a comprehensive and fully integrated stack of cloud applications and cloud platforms, a suite of products to generate valuable information from big data, and a wide variety of services to support the use of these products. It was listed on the first section of the Tokyo Stock Exchange in 2000 (Company code: 4716). Visit oracle.com/jp.

Fujitsu and Oracle Alliance History

Since entering into a database OEM contract in 1989, the two companies have been providing customers with optimal solutions. Currently, as an Oracle Partner Network (OPN) Diamond level partner, Fujitsu is providing system integration services worldwide. In addition, in the SPARC/Solaris server business, Fujitsu entered into a sales contract with Sun Microsystems in 1983 and a development agreement for SPARC chips in 1988, and further strengthened the relationship with Sun Microsystems through a Solaris OEM contract in 1993. Since Oracle's subsequent acquisition of Sun Microsystems, the two companies have maintained a close, collaborative relationship to the present day.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates in the US and other countries. Other names may be trademarks of their respective owners. This press release is solely for the purpose of providing information and does not constitute an implied contract.

All company or product names mentioned herein are trademarks or registered trademarks of their respective owners. Information provided in this press release is accurate at time of publication and is subject to change without advance notice.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Fujitsu Limited

Candice van der Laan

  • +1.650.464.3186

Junko Ishikawa, Norihito Yachita

Hollander Adds New Channel, Goes Direct to Consumers Fast With Oracle Commerce Cloud

Linda Fishman Hoyle - Tue, 2016-07-05 11:27

Headquartered in Boca Raton, Florida, Hollander Sleep Products is the largest bed pillow and mattress pad manufacturer in North America. The company also has major distribution agreements across North America with brands including Beautyrest, Ralph Lauren, and Nautica, among others.

Hollander has been very successful in the B2B space, manufacturing and marketing its in-house brand Live Comfortably. It has also done very well distributing other companies’ products to retailers. But, as you know, retail and distributor businesses have been under great pressure in recent years. So in 2015, the company looked to expand its business model and take its in-house brand Live Comfortably direct to consumers, but it had no direct consumer retail/commerce experience. To accomplish its goal, Hollander teamed up with Oracle and put Oracle Commerce Cloud to work. Listen to the story from these two Hollander executives:

ODTUG Kscope16

Oracle AppsLab - Tue, 2016-07-05 10:05

Just like last year, a few members (@jkuramot, @noelportugal, @YuhuaXie, Tony and myself) of @theappslab attended Kscope16 to run a Scavenger Hunt, speak and enjoy one of the premier events for Oracle developers. It was held in Chicago this time around, and here are my impressions.

Lori and Tony Blues

Since our Scavenger Hunt was quite a success the previous year, we were asked to run it again to spice up the conference a bit. This is the 4th time we have run the Scavenger Hunt (if you want to learn more about the game itself, check out Noel’s post on the mechanics), and by now it runs like a well-oiled machine. The competition was even fiercer than last year, with a DJI Phantom at stake, but in the end @alanarentsen prevailed; congratulations to Alan. @briandavidhorn was the runner-up and walked away with an Amazon Echo, and in 3rd place, @GoonerShre got a Raspberry Pi for his efforts.

Sam Hetchler and Noel engage in a very deep IoT discussion.

There were also consolation prizes for the next 12 places; they each got both a Google Chromecast and a Tile. All in all, it was another very successful run of the Scavenger Hunt, with over 170 participants and a lot of buzz surrounding the game. Here’s a quote from one of the players:

“I would not have known so many things, and tried them out, if there were not a Scavenger Hunt. It is great.”

Better than Cats. We haven’t decided yet if we are running the Scavenger Hunt again next year; if we do, it will probably be in a different format. Our brains are already racing.

Our team also had a few sessions: Noel talked broadly about OAUX, and I gave a presentation about Developer Experience, or DX. As is always the case at Kscope, the sessions are pretty much bi-directional, with the audience participating as you deliver your presentation. Some great questions were asked during my talk, and I was even able to record a few requirements for API Maker, a tool we are building for DX.

Judging by the participation of the attendees, there seems to be a lot of enthusiasm in the developer community for both API Maker and 1CSS, another tool we are creating for DX.  As a result of the session, we have picked up a few contacts within Oracle which we will explore further to push these tools and get them out sooner rather than later.

In addition to all those activities, Raymond ran a preview of an IoT workshop we plan to replicate at OpenWorld and JavaOne this year. I won’t give away too much, but it involves a custom PCB.

The makings of our IoT workshop, coming to OOW and J1.

Unfortunately, my schedule (Scavenger Hunt, presentation) didn’t really allow me to attend any sessions but other members of our team attended a few, so I will let them talk about that. I did, however, get a chance to play some video games.

I really, really like video games.

And have some fun, as is customary at Kscope.

A traditional Chicago dog.

Cheers,

Mark.

Possibly Related Posts:

PeopleSoft Logging and Auditing

Logging and auditing are among the pillars of PeopleSoft security. Both application and database auditing are required. Logging and auditing support a trust-but-verify approach, which is often deemed necessary to secure the activities of privileged system and database administrators.

While both the application and the database offer sophisticated auditing solutions, one key feature Integrigy always recommends is to ensure that EnableDBMonitoring is enabled within the psappsrv.cfg file. This is set by default, but we at times find it disabled.
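For reference, the setting lives in the [PSTOOLS] section of psappsrv.cfg and looks like this (the comment is ours):

```
[PSTOOLS]
; When set to 1, the application server populates the Oracle CLIENT_INFO
; session variable with the PeopleSoft User ID, IP address and program name
EnableDBMonitoring=1
```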

When enabled, EnableDBMonitoring allows PeopleSoft application auditing to bridge, or flow, into database auditing. This is done by populating the Oracle CLIENT_INFO session variable with the PeopleSoft User ID, IP address and program name. With Oracle RDBMS auditing enabled, anything written to CLIENT_INFO is also written into the database audit logs.
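What the application server does under the covers is equivalent to the following (the values shown are purely illustrative):

```sql
-- PSAPPSRV sets this for each request; illustrative values only
begin
  dbms_application_info.set_client_info('PSOPERID,192.168.1.10,PSIDE,');
end;
/

-- The value is then visible for the current session
select client_info
  from v$session
 where sid = sys_context('userenv', 'sid');
```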

In other words, with both database auditing and EnableDBMonitoring enabled, you can report on which user updated what and when, not just that the PeopleSoft application or ‘Access ID’ issued an update statement.

The graphics below are the ones we commonly use to review Integrigy’s approach to PeopleSoft logging and auditing.

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft, Auditor
Categories: APPS Blogs, Security Blogs

High Soft Parsing

Tom Kyte - Tue, 2016-07-05 02:06
Hi Tom, We are experiencing high Soft parsing in our databases , though we have enabled session cached cursors and all our SQL/PL SQL blocks using bind variables. We are using Pro C as a host language interact with the back end database. Load...
Categories: DBA Blogs

Can we do a CTAS (Create table as) without the NOT NULL constraints?

Tom Kyte - Tue, 2016-07-05 02:06
Can we do a CTAS (Create table as) and create the new table without the NOT NULL constraints? select * from v$version; Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE ...
Categories: DBA Blogs

Format the Number for display

Tom Kyte - Tue, 2016-07-05 02:06
<code>Hello Guru, Q1) I need to display numbers from a database in a specific format. Table Tn (n number(6,3)); The format is 999.999 SQL> insert into tn values( 123.123) ; 1 row created. SQL> insert into tn values(0) ; 1 row created. SQL...
Categories: DBA Blogs

NOLOGGING

Tom Kyte - Tue, 2016-07-05 02:06
Hi, We have a problem, each day our DR setup is getting about 300 GB of data. We have found this is due to the huge amount of archive logs being written. We have certain bulk operations that are taking place, most of which are from bulk deletes or...
Categories: DBA Blogs

Retreive userid who has taken training more than once

Tom Kyte - Tue, 2016-07-05 02:06
Hi Tom, I have 2 tables user and training User Userid Username Trainingid 1 A 1 2 B 2 3 C 2 4 D 3 5 E 2 Training Trainingid trainername userid countoftrainings Date 1 X 1 2 ...
Categories: DBA Blogs

Script to suggest FK indexes

Yann Neuhaus - Mon, 2016-07-04 10:55

In Oracle, when a referenced key is deleted (by a delete on the parent table, or an update of the referenced columns), the child table(s) are locked to prevent any concurrent insert that may reference the old key. This lock is a big issue on OLTP applications because it’s a TM Share lock, usually reserved for DDL only, blocking any modification on the child table and some modifications on tables that have a relationship with that child table. The problem can be overcome with an index structure that allows Oracle to check for concurrent inserts that may reference the old key. Here is the script I use to find which index is missing.

The idea is not to suggest to index all foreign keys for three reasons:

  • when there are no deletes or updates on the parent side, you don’t have that locking issue
  • when there is minimal write activity on the child side, the lock may not have big consequences
  • you probably have indexes built for performance reasons that can be used to avoid locking, even when they have more columns or a different column order

The idea is not to suggest an index for each potential locking issue, but only where blocking locks have been observed. Yes, it is a reactive solution, but proactive ones cannot be automatic. If you know your application well and you know what you have to index, then you don’t need this script. If you don’t, then a proactive approach will suggest too many indexes.

Here is the kind of output that I get with this script:
-- DELETE on APP1.GCO_GOOD has locked APP1.FAL_TASK in mode 5 for 8 minutes between 14-sep 10:36 and 14-sep 10:44
-- blocked statement: DELETE FAL LOT LOT WHERE C FAB TYPE AND EXISTS SELECT
-- blocked statement: UPDATE DOC POSITION DETAIL SET DOC POSITION DETAIL ID B
-- blocked statement: delete from C AP GCO GOOD where rowid doa rowid
-- blocked statement: DELETE FROM FAL LOT WHERE FAL LOT ID B
-- blocked statement: DELETE FROM FAL TASK LINK PROP WHERE FAL LOT PROP ID B
-- blocked statement: INSERT INTO FAL LOT PROGRESS FAL LOT PROGRESS ID FAL LOT
-- blocked statement: insert into FAL TASK LINK FAL SCHEDULE STEP ID
-- FK chain: APP1.GCO_GOOD referenced by(cascade delete) APP1"."GCO_SERVICE referenced by(cascade set null) APP1"."FAL_TASK (APP1.FAL_TASK_S_GCO_SERV) unindexed
-- FK column GCO_GCO_GOOD_ID
-- Suggested index: CREATE INDEX ON "APP1"."FAL_TASK" ("GCO_GCO_GOOD_ID");
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_DIC_FR_TASK_COD7_FK" ON "APP1"."FAL_TASK" ("DIC_FREE_TASK_CODE7_ID")
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_DIC_FR_TASK_COD9_FK" ON "APP1"."FAL_TASK" ("DIC_FREE_TASK_CODE9_ID")
-- Other existing Indexes: CREATE INDEX "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" ON "APP1"."FAL_TASK" ("PPS_TOOLS13_ID")

I’ll detail each part.

ASH

Yes, we have to detect blocking issues from the past, and I use ASH for that. If you don’t have the Diagnostic Pack, then you have to change the query to use another way of sampling V$SESSION.
-- DELETE on APP1.GCO_GOOD has locked APP1.FAL_TASK in mode 5 for 8 minutes between 14-sep 10:36 and 14-sep 10:44
-- blocked statement: DELETE FAL LOT LOT WHERE C FAB TYPE AND EXISTS SELECT
-- blocked statement: UPDATE DOC POSITION DETAIL SET DOC POSITION DETAIL ID B
-- blocked statement: delete from C AP GCO GOOD where rowid doa rowid
-- blocked statement: DELETE FROM FAL LOT WHERE FAL LOT ID B
-- blocked statement: DELETE FROM FAL TASK LINK PROP WHERE FAL LOT PROP ID B
-- blocked statement: INSERT INTO FAL LOT PROGRESS FAL LOT PROGRESS ID FAL LOT
-- blocked statement: insert into FAL TASK LINK FAL SCHEDULE STEP ID

The first part of the output comes from ASH and describes the blocking situations: which statement, for how long, and the statements that were blocked.
This part of the script will probably need to be customized: I join with DBA_HIST_SQL_PLAN, assuming the queries have been captured by AWR as long-running queries. I check the last 15 days of ASH. You may change those to fit the blocking situation encountered.

Foreign Key

Then we have to find the unindexed foreign key that is responsible for those locks.

-- FK chain: APP1.GCO_GOOD referenced by(cascade delete) APP1"."GCO_SERVICE referenced by(cascade set null) APP1"."FAL_TASK (APP1.FAL_TASK_S_GCO_SERV) unindexed
-- FK column GCO_GCO_GOOD_ID

Here you see that it’s not easy. Actually, all the scripts I’ve seen fail to detect the situation where a CASCADE SET NULL cascades the issue. Here "APP1"."GCO_SERVICE" has its foreign key indexed, but the SET NULL, even when not on the referenced column, locks the child (for no reason as far as I know, but it does).
My script goes up to a level 10 using a connect by query to detect this situation.
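Stripped of the surrounding script, the idea of that hierarchical query can be sketched as follows (the START WITH table is taken from the example above; adapt it to your own parent table):

```sql
-- Walk the foreign key chains downwards from a parent table,
-- following only CASCADE / SET NULL delete rules, up to 10 levels.
with c as (
  select p.owner p_owner, p.table_name p_table_name,
         c.owner c_owner, c.table_name c_table_name, c.delete_rule
  from dba_constraints p
  join dba_constraints c
    on c.r_owner = p.owner and c.r_constraint_name = p.constraint_name
  where p.constraint_type in ('P','U') and c.constraint_type = 'R'
)
select sys_connect_by_path(c_owner||'.'||c_table_name||' ('||delete_rule||')',' -> ') chain
from c
where level <= 10
connect by nocycle p_owner = prior c_owner and p_table_name = prior c_table_name
       and (level = 1 or prior delete_rule in ('CASCADE','SET NULL'))
start with p_owner = 'APP1' and p_table_name = 'GCO_GOOD';
```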

Suggested Index

The suggested index is an index on the foreign key column:

-- Suggested index: CREATE INDEX ON "APP1"."FAL_TASK" ("GCO_GCO_GOOD_ID");

This is only a suggestion. Any regular index that starts with the foreign key columns, in any order, can be used to avoid the lock.
Remember to think about performance first: such an index may also be useful to navigate from parent to child.
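For example, either of the following would cover the foreign key on FAL_TASK (the index names and the second column are made up for illustration; only the fact that the FK column comes first matters):

```sql
-- Plain index on the FK column, as suggested by the script:
create index app1.fal_task_gco_good_fk_i on app1.fal_task ("GCO_GCO_GOOD_ID");
-- A composite index works too, as long as the FK columns are the leading ones:
create index app1.fal_task_gco_good_i2 on app1.fal_task ("GCO_GCO_GOOD_ID", "SOME_OTHER_COLUMN");
```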

Existing Index

Finally, when adding an index it’s good to check whether other indexes are no longer needed, so my script displays all of them.
If you think that some indexes are not required, remember that since 11g you can make them invisible for a while, and bring them back to visible quickly in case of regression.
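For example, with one of the index names from the sample output above:

```sql
-- Hide a suspected-redundant index: the optimizer stops considering it,
-- but it is still maintained, so making it visible again is instant.
alter index "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" invisible;
-- Quick rollback in case of regression:
alter index "APP1"."FAL_TASK_S_PPS_TOOLS13_FK" visible;
```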

Script

Here is the script. Sorry, there are no comments in it yet and a few display settings to change; just try it. It is only a query on AWR (Diagnostic Pack needed) and on table/index/constraint metadata. You can customize it, and don’t hesitate to comment if you have ideas for improvement. I have used it in several environments and it has always found the chain of foreign keys responsible for an ‘enq: TM’ blocking situation. And believe me, this is not always easy to do just by looking at the data model.


set serveroutput on
declare
  procedure print_all(s varchar2) is begin null;
    dbms_output.put_line(s);
  end;
  procedure print_ddl(s varchar2) is begin null;
    dbms_output.put_line(s);
  end;
begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES',false);
  for a in (
    select count(*) samples,
           event,p1,p2,o.owner c_owner,o.object_name c_object_name,p.object_owner p_owner,p.object_name p_object_name,id,operation,
           min(p1-1414332420+4) lock_mode,min(sample_time) min_time,max(sample_time) max_time,ceil(10*count(distinct sample_id)/60) minutes
    from dba_hist_active_sess_history
         left outer join dba_hist_sql_plan p using(dbid,sql_id)
         left outer join dba_objects o on object_id=p2
         left outer join dba_objects po on po.object_id=current_obj#
    where event like 'enq: TM%' and p1>=1414332420 and sample_time>sysdate-15 and p.id=1 and operation in('DELETE','UPDATE','MERGE')
    group by event,p1,p2,o.owner,o.object_name,p.object_owner,p.object_name,po.owner,po.object_name,id,operation
    order by count(*) desc
  ) loop
    print_ddl('-- '||a.operation||' on '||a.p_owner||'.'||a.p_object_name||' has locked '||a.c_owner||'.'||a.c_object_name||' in mode '||a.lock_mode||' for '||a.minutes||' minutes between '||to_char(a.min_time,'dd-mon hh24:mi')||' and '||to_char(a.max_time,'dd-mon hh24:mi'));
    for s in (
      select distinct regexp_replace(cast(substr(sql_text,1,2000) as varchar2(60)),'[^a-zA-Z ]',' ') sql_text
      from dba_hist_active_sess_history join dba_hist_sqltext t using(dbid,sql_id)
      where event like 'enq: TM%' and p2=a.p2 and sample_time>sysdate-90
    ) loop
      print_all('-- '||'blocked statement: '||s.sql_text);
    end loop;
    for c in (
      with c as (
        select p.owner p_owner,p.table_name p_table_name,c.owner c_owner,c.table_name c_table_name,c.delete_rule,c.constraint_name
        from dba_constraints p
             join dba_constraints c on (c.r_owner=p.owner and c.r_constraint_name=p.constraint_name)
        where p.constraint_type in ('P','U') and c.constraint_type='R'
      )
      select c_owner owner,constraint_name,c_table_name,
             connect_by_root(p_owner||'.'||p_table_name)||sys_connect_by_path(decode(delete_rule,'CASCADE','(cascade delete)','SET NULL','(cascade set null)',' ')||' '||c_owner||'"."'||c_table_name,' referenced by') foreign_keys
      from c
      where level<=10 and c_owner=a.c_owner and c_table_name=a.c_object_name
      connect by nocycle p_owner=prior c_owner and p_table_name=prior c_table_name and ( level=1 or prior delete_rule in ('CASCADE','SET NULL') )
      start with p_owner=a.p_owner and p_table_name=a.p_object_name
    ) loop
      print_all('-- '||'FK chain: '||c.foreign_keys||' ('||c.owner||'.'||c.constraint_name||')'||' unindexed');
      for l in (select * from dba_cons_columns where owner=c.owner and constraint_name=c.constraint_name) loop
        print_all('-- FK column '||l.column_name);
      end loop;
      print_ddl('-- Suggested index: '||regexp_replace(translate(dbms_metadata.get_ddl('REF_CONSTRAINT',c.constraint_name,c.owner),chr(10)||chr(13),' '),'ALTER TABLE ("[^"]+"[.]"[^"]+") ADD CONSTRAINT ("[^"]+") FOREIGN KEY ([(].*[)]).* REFERENCES ".*','CREATE INDEX ON \1 \3;'));
      for x in (
        select rtrim(translate(dbms_metadata.get_ddl('INDEX',index_name,index_owner),chr(10)||chr(13),' ')) ddl
        from dba_ind_columns
        where (index_owner,index_name) in (select owner,index_name from dba_indexes where owner=c.owner and table_name=c.c_table_name)
          and column_name in (select column_name from dba_cons_columns where owner=c.owner and constraint_name=c.constraint_name)
      ) loop
        print_ddl('-- Existing candidate indexes '||x.ddl);
      end loop;
      for x in (
        select rtrim(translate(dbms_metadata.get_ddl('INDEX',index_name,index_owner),chr(10)||chr(13),' ')) ddl
        from dba_ind_columns
        where (index_owner,index_name) in (select owner,index_name from dba_indexes where owner=c.owner and table_name=c.c_table_name)
      ) loop
        print_all('-- Other existing Indexes: '||x.ddl);
      end loop;
    end loop;
  end loop;
end;
/

I didn’t take the time to document/comment the script, but don’t hesitate to ask about anything you don’t understand in it.

You should not see any ‘enq: TM’ waits from an OLTP application. If you have them, even short ones, they will become problematic one day. It’s the kind of thing that can block the whole database.

 

The article Script to suggest FK indexes appeared first on Blog dbi services.

oracle text with order by clause

Tom Kyte - Mon, 2016-07-04 07:46
Dear Sir: I am developing an anti-plagiarism system and I am using Oracle Text for searching. My corpus contains more than 30 million records, and I want to check my document against this corpus. I want to fetch the highest score only, not all...
Categories: DBA Blogs

Can be differentiate cascade delete or statement(delete from query) inside the table trigger

Tom Kyte - Mon, 2016-07-04 07:46
Hi Tom, I have a question about cascade delete: how does it work internally, and how can I differentiate a row deleted by a DELETE statement on the child table from a row deleted by a cascade delete? I am facing a problem and trying to solve a mutating error...
Categories: DBA Blogs

Partitioning Questions

Tom Kyte - Mon, 2016-07-04 07:46
Hello, I would like to post a question about which partitioning scheme I can go for in the scenario below. I have 2 tables named table1 and table2. Both tables have customer id and user id columns. The hierarchy is multiple users...
Categories: DBA Blogs

the maximum number of logical operators(AND/OR) can be used in where clause

Tom Kyte - Mon, 2016-07-04 07:46
Hi Tom, What is the maximum number of logical operators (AND/OR) allowed in the WHERE clause of a SELECT statement? e.g. select * from employees where first_name = 'abc' or first_name = 'cde' or first_name = 'def' or .... .... .......
Categories: DBA Blogs
