
Oracle in Action


ORA-39070: Unable to open the log file

Tue, 2015-12-08 03:56


I received this error message while trying to perform a Data Pump export of the SH schema in parallel in a RAC database. I proceeded as follows:

Current scenario:
Name of the cluster: cluster01
Number of nodes: 3 (host01, host02, host03)
RAC database version: 11.2.0.3
Name of RAC database: orcl
Number of instances: 3

  • Created a directory object pointing to shared storage accessible by all three instances of the database:
SQL> drop directory dp_shared_dir;
SQL> create directory DP_SHARED_DIR as '+DATA/orcl/';
SQL> grant read, write on directory dp_shared_dir to public;
  • Issued the command to export the SH schema in parallel across all active Oracle RAC instances with parallelism = 6, which resulted in error ORA-39070:
[oracle@host01 root]$ expdp system/oracle@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y dumpfile='expsh%U.dmp' reuse_dumpfiles=y

Export: Release 11.2.0.3.0 - Production on Tue Dec 8 14:45:39 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation

Cause:
The error message indicates that the log file could not be opened. The DIRECTORY parameter pointed to a shared location on an ASM disk group; Data Pump can write its dump files to ASM storage, but the log file is not supported there, hence the error above.
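As a quick check (this query is mine, not part of the original run), the paths behind the two directory objects can be compared in the data dictionary; DP_SHARED_DIR resolves to an ASM path, while DATA_PUMP_DIR resolves to a path on a local file system:

SQL> select directory_name, directory_path from dba_directories where directory_name in ('DP_SHARED_DIR', 'DATA_PUMP_DIR');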

Solution:
I modified the command and, using the directory:filename syntax of the LOGFILE parameter, explicitly placed the log file on the local file system pointed to by the directory object DATA_PUMP_DIR. The export then completed successfully.

[oracle@host01 root]$ expdp system/oracle@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y logfile=data_pump_dir:expsh.log dumpfile='expsh%U.dmp' reuse_dumpfiles=y

Export: Release 11.2.0.3.0 - Production on Tue Dec 8 15:14:11 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_10": system/********@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y logfile=data_pump_dir:expsh.log dumpfile=expsh%U.dmp reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 273.8 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
.....
.....
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/DIMENSION
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_10 is:
+DATA/orcl/expsh01.dmp
+DATA/orcl/expsh02.dmp
+DATA/orcl/expsh03.dmp
+DATA/orcl/expsh04.dmp
+DATA/orcl/expsh05.dmp
+DATA/orcl/expsh06.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully completed at 15:20:49

I hope it helps!!!

—————————————————————————————————————-

Related links: Home | 11gR2 RAC Index


Webinar: Histograms: Pre-12c and Now

Fri, 2015-10-30 23:49


To improve optimizer estimates in the case of a skewed data distribution, histograms can be created. Prior to 12c, two types of histograms could be created, based on the number of distinct values (NDV) in a column:

If the number of buckets >= NDV, a frequency histogram is created and the optimizer makes accurate estimates.

If the number of buckets < NDV, a height-balanced histogram is created and the accuracy of optimizer estimates depends on whether a key value is an endpoint or not.

The problem of optimizer misestimates with height-balanced histograms is resolved to a large extent in Oracle Database 12c by the introduction of top-frequency and hybrid histograms, which are created when the number of buckets < NDV.
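As an illustration (the table and column names here are hypothetical), a histogram is requested through the METHOD_OPT argument of DBMS_STATS, and the type of histogram actually created can then be read from the dictionary:

SQL> exec dbms_stats.gather_table_stats(user, 'SALES', method_opt => 'FOR COLUMNS cust_id SIZE 254');
SQL> select column_name, num_distinct, histogram from user_tab_col_statistics where table_name = 'SALES';

In 12c the HISTOGRAM column can report FREQUENCY, TOP-FREQUENCY, HYBRID or HEIGHT BALANCED depending on the number of buckets relative to the NDV (top-frequency and hybrid histograms also require the default AUTO_SAMPLE_SIZE estimate).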

I will present a webinar on “Histograms: Pre-12c and Now” on Saturday, November 7th, from 10:00 AM to 11:00 AM (IST), organized by the All India Oracle User Group – North India Chapter.

This webinar explores both pre-12c and post-12c histograms, highlighting the top-frequency and hybrid histograms introduced in Oracle Database 12c.

Everyone can join this live webinar at:

https://plus.google.com/u/0/events/cgrgqlm5f7nuecdpjoc85d1u6eo
or
https://www.youtube.com/watch?v=xfwbDczWFXo

Hope to meet you at the webinar!!!




Speaking at SANGAM 2015

Wed, 2015-10-21 00:27


The AIOUG meet “SANGAM – Meeting of Minds” is the largest independent Oracle event in India, organized annually in November. This year’s Sangam (Sangam15, the 7th Annual Oracle Users Group Conference) will be held at the Hyderabad International Convention Centre, Hyderabad, on Saturday 21st and Sunday 22nd November 2015.

I will be speaking at this year’s SANGAM about an Oracle Database 12c new feature: Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), which allows Oracle Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP, and hence its NFS export(s), automatically fails over to one of the surviving nodes so that NFS clients continue to receive uninterrupted service of the NFS exported paths.
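As a rough sketch of how such a configuration is put together (the HAVIP id, address, and export path below are placeholders, and exact options can vary by release), the HAVIP and the NFS export are registered as Clusterware resources with srvctl, assuming the ACFS file system is already mounted:

# srvctl add havip -id havip1 -address havip1.cluster01.example.com
# srvctl add exportfs -id havip1 -path /acfs/app_data/exports -name export1
# srvctl start havip -id havip1 -node host01

Once started, Clusterware keeps the HAVIP and its export(s) online, relocating them to a surviving node if the hosting node fails.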

My session will be held on Saturday, November 21, 2015, from 5:10 PM to 6:00 PM in Hall 5 (Ground Floor).

Hope to meet you there!!