Feed aggregator

Dynamic SQL

Tom Kyte - Tue, 2017-02-07 13:46
Hi Tom, I have the following dynamic SQL statement; I want to return the rowid of the inserted record into a variable, but I get a "command not properly ended" error. What is the problem? stmt:= 'INSERT INTO ' || destination_table_name || ' ( ' |...
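
For reference, the usual pattern is to put the RETURNING clause inside the dynamic statement and supply the output bind with RETURNING INTO. A minimal sketch, assuming a hypothetical target table T1 with a single NUMBER column COL1 (destination_table_name mirrors the variable from the question):

declare
  destination_table_name varchar2(30) := 'T1'; -- hypothetical target table
  l_rowid rowid;
begin
  -- the RETURNING clause is part of the dynamic SQL text;
  -- the OUT bind is supplied through RETURNING INTO
  execute immediate
    'insert into ' || destination_table_name ||
    ' (col1) values (:b1) returning rowid into :b2'
    using 42 returning into l_rowid;
  dbms_output.put_line('inserted rowid: ' || l_rowid);
end;
/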
Categories: DBA Blogs

Parting With ocm.rsp Isn’t Really Sweet Sorrow…

Pythian Group - Tue, 2017-02-07 13:00

Now that we’re well into 2017, let’s sit back and ask the question: “What was the best news of 2016 in the Oracle ecosystem?”

I'll give you a clue. The best news was something that sneakily appeared between April and October 2016 in your patch maintenance logs:

[root@myclusterdb02 OPatch]# ./opatchauto apply /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103 -oh /u01/app/oracle/product/12.1.0.2/dbhome_1 -ocmrf /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/ocm/bin/ocm.rsp
System initialization log file is /u01/app/oracle/product/12.1.0.2/dbhome_1/cfgtoollogs/opatchautodb/systemconfig2016-12-11_05-23-36PM.log.
Session log file is /u01/app/oracle/product/12.1.0.2/dbhome_1/cfgtoollogs/opatchauto/opatchauto2016-12-11_05-24-09PM.log
WARNING: the option -ocmrf is deprecated and no longer needed. OPatch no longer checks for OCM configuration. It will be removed in a future release.

The interesting part is in the fourth line:

WARNING: the option -ocmrf is deprecated and no longer needed. OPatch no longer checks for OCM configuration. It will be removed in a future release.

This is awesome! We no longer need this annoying ocm.rsp file!

When I discovered this back in November 2016, I searched for an official feature announcement but found nothing (it looks like an OPatch 13c new feature backported to 11g and 12c). This ocm.rsp file was still needed when I applied the April 2016 PSU, so from which OPatch version was it implemented?

First, I looked at the 2016 dev comments in OPatch 11.2.0.3.15 and 12.2.0.1.7 (the latest versions when I was applying this Exadata October 2016 PSU) and I sadly discovered that none of the comments mentioned this new feature (even though it would obviously deserve some):

#  opatch  04/08/16  Update CAS_VERSION to 12.2.0.1.3
#  opatch  04/11/16  Consolidate all 4 opatch scripts
#  opatch  06/07/16  Calling auto_patch.pl using the perl from Oracle Home Location
#  opatch  06/30/16  bug 23721730 - default memory was too high for 32 bit so removed and disabled opatch_env.sh script call


After a quick look at the code, I found the amazing piece of code that would ease our lives in the future (the function is named ocmOptionDetect()):

# Option 'ocmrf' is deprecated in OPatch 13.3 - Detect and give warning message
if [ "$OCMRF_OPTION" = "1" ] && [ "$VERSION_13_X" = "1" ]; then
    echo "WARNING: the option \"-ocmrf\" is deprecated and no longer needed. OPatch no longer checks for OCM configuration. It will be removed in a future release."
fi

I then performed a few greps in the different OPatch versions that I had, and I found that this feature was implemented in:

  • 11.2.0.3.13 or 11.2.0.3.14 for 11g (I was unable to find any of these versions so I couldn’t check)
  • Between versions 12.1.0.1.10 and 12.2.0.1.7 for 12c (again, I was unable to find any of these versions so I couldn’t check)

If you come across any of the OPatch versions specified above, please “grep -i ocmrf opatch” it and let me know, so I can pinpoint when this feature was implemented.

You can now upgrade all your OPatch installations to become ocm.rsp-free, and this (for me) was definitely the best news of 2016!

Categories: DBA Blogs

Setting Invalid Fields for the UI in ADF BC Groovy

Andrejus Baranovski - Tue, 2017-02-07 10:23
What if you have an entity-level validation rule and want to attach the validation error message to a specific field? By default this is not possible: all entity-level validation error messages are displayed in a popup and are not attached to the fields (unlike attribute-level validation rule messages).

Apparently there is a way to achieve this requirement with a Groovy expression executed from the entity-level validation: adf.error.addAttribute('Salary'). In addAttribute you need to provide the name of the attribute to which the error will be assigned. Complete expression for the entity validator:


Result displayed on the UI: the validation error message is assigned to the field that was changed:


Download sample application - GroovyADFApp_v2.zip.

Profiles of the Future

Michael Armstrong-Smith - Tue, 2017-02-07 09:37
I've been thinking about the laws of the universe recently and came across a book written by the science fiction master Arthur C. Clarke called Profiles of the Future. Over the course of many years and several revisions he put forward three laws for predicting the future. These are:
Clarke's first law
When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Clarke's second law
The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
Clarke's third law
Any sufficiently advanced technology is indistinguishable from magic.

The third one has various alternative explanations such as:

Any technology, no matter how primitive, is magic to those who don't understand it, and
Any sufficiently ancient recovered wisdom or artifact is also indistinguishable from magic
There are a couple of variations I like that are particularly relevant to the field of computing in which I work. 
These are:
Any sufficiently advanced technology is indistinguishable from a rigged demo, and
Software tests that inexplicably pass or fail are indistinguishable from voodoo
Over the course of my career I've seen the truth of these last two. Many years ago I had the privilege of being invited to the demo of a new piece of software that was slated to be years ahead of its time. It was being shown to us by an executive of that company, and what we saw really was a leap beyond anything any of us had seen before. That is, until one of us spotted that what appeared to be buttons being clicked on a screen looked suspiciously like a PowerPoint presentation moving from one picture to another. The buttons did not actually depress, and the positioning of some of the buttons shifted ever so slightly from one screen to the next.
At one point someone in the audience asked if we could go back to the previous screen to see what one of the other buttons did. Of course this would have been impossible, but the presenter was slick and had anticipated such a request: his reply was that we had so much to cover in a short period of time, and he would love to show that feature in a separate demo after the main presentation.
When the presentation was over, some of us went to ask about that feature to be told that the link back to the database at head office had gone down and therefore we would not be able to see a demo of that feature after all.

CIO Review: The Future of Content is NOW

WebCenter Team - Tue, 2017-02-07 08:59

By this time, we have all bought into the value that a Cloud infrastructure brings. So, if you are in Information Technology (IT), you are looking to best leverage a single, centralized Cloud platform to meet your needs. As you think through your Cloud investments, consider an iterative approach: start with the initiative that gives you the biggest bang for your money and earns you instant results (and credibility) with the various lines of business.

I recently wrote a feature for the CIO Review publication in which I explored how a Cloud content hub creates value for both IT and the different lines of business. In it, I also shared the example of one of our customers, Omni Financiera, to discuss how they are choosing to get immediate value from their Cloud investments. Take a read and see if this could be your next step in the Cloud. As always, I would welcome an open dialog and look forward to hearing your thoughts.

Hope you enjoy the feature.


Database 11.2 Extended Support Fee Waived through December 2018

Steven Chan - Tue, 2017-02-07 02:06

Oracle's Lifetime Support policy has three phases:  Premier Support, Extended Support, and Sustaining Support.  For details about coverage during each phase, see:

You can purchase a support plan for your licensed products to obtain Premier Support.  There is an additional fee for Extended Support. 

Extended Support for Database 11.2 runs to December 31, 2020. The Extended Support fee for Oracle Database 11gR2 (11.2) has been waived through December 31, 2018. See:



Categories: APPS Blogs

Links for 2017-02-06 [del.icio.us]

Categories: DBA Blogs

Query for generating a resultset with continuous dates for existing data

Tom Kyte - Mon, 2017-02-06 19:26
Dear Tom, I have a table which stores all the account balances with dates. A row is inserted into this table only for those days where the account has transactions. The table looks as below. create table account_balance (account varchar2(16), b...
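
For reference, a common building block for such gap-filling queries is a generated calendar that can be outer-joined to the balance table. A minimal sketch for one month:

SQL> select date '2017-02-01' + level - 1 as day
  2    from dual
  3  connect by level <= 28;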
Categories: DBA Blogs

Oracle AQ and wait delay issue

Tom Kyte - Mon, 2017-02-06 19:26
I wonder if you can shed any light on the following problem we're seeing when we dequeue payloads from AQ. What seems to be happening is whenever we set a delay then all delayed payloads seem to be taking ~3 seconds longer to dequeue than the wait we...
Categories: DBA Blogs

how to use ANSI outer join in subquery

Tom Kyte - Mon, 2017-02-06 19:26
how to write this query in ANSI STANDARD (I mean without using + symbol) select a.ename, a.empno from dept a where exists(select null from emp b where a.deptno=b.deptno(+))
Categories: DBA Blogs

Why am I getting an invalid cursor error?

Tom Kyte - Mon, 2017-02-06 19:26
<code>DECLARE CURSOR C_Emp IS SELECT Ename,Deptno FROM Emp WHERE Ename = 'RINTU'; V_Ename Emp.Ename%TYPE; V_Deptno Emp.Deptno%TYPE; BEGIN DBMS_OUTPUT.PUT_LINE('LOOP STAR...
Categories: DBA Blogs

How to identify sql query running against a database

Tom Kyte - Mon, 2017-02-06 19:26
Hi Toms Team, First of all thanks to you for sharing information using "Ask Tom" platform. My question is When we run any sql against a database (I mean simple select sql eg. select 1 from dual) we can see this information by querying against ...
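
For reference, the usual starting point is joining v$session to v$sql. A minimal sketch (it assumes select privileges on the V$ views):

SQL> select s.sid, s.username, q.sql_text
  2    from v$session s
  3    join v$sql q on q.sql_id = s.sql_id
  4               and q.child_number = s.sql_child_number
  5   where s.status = 'ACTIVE';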
Categories: DBA Blogs

Execute Shell Script from Oracle Database Program.

Tom Kyte - Mon, 2017-02-06 19:26
Hi Experts, I have done the required setup to execute unix OS commands using Java class as mentioned in the link(https://oracle-base.com/articles/8i/shell-commands-from-plsql) from database programs , I was able to run echo command from the Pl/SQL...
Categories: DBA Blogs

DBA_HIGH_WATER_MARK_STATISTICS dictionary view has cpu_count. Is this value dynamically read at DB startup?

Tom Kyte - Mon, 2017-02-06 19:26
DBA_HIGH_WATER_MARK_STATISTICS dictionary view has cpu_count. Is this value dynamically read at DB startup? If it's static, can it be changed? We are moving the OS and database disks from an 8-core to a 4-core system to fix a license issue, so want...
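
For reference, the relevant row of the view can be checked like this (a minimal sketch):

SQL> select name, highwater, last_value
  2    from dba_high_water_mark_statistics
  3   where name = 'CPU_COUNT';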
Categories: DBA Blogs

How to find a deleted row without using flashback

Tom Kyte - Mon, 2017-02-06 19:26
After a commit, how do I find a deleted row from the emp table in Oracle?
Categories: DBA Blogs

Installing Pervasive and Oracle on the same physical server

Tom Kyte - Mon, 2017-02-06 19:26
My company is moving from Pervasive to Oracle and we are about to install Oracle 12c. We would like to know if there are any issues that we may encounter if we installed Oracle DB on the same physical server the Pervasive DB is on.
Categories: DBA Blogs

Exadata Express Cloud Service: SQL and Optimizer trace

Yann Neuhaus - Mon, 2017-02-06 15:51

The Oracle PDBaaS is for database developers, and database developers may need to trace what happens with their queries: SQL trace and Optimizer trace. Let's see what we can do on the Exadata Express Cloud Service.

V$DIAG_TRACE_FILE_CONTENTS

On the managed PDBaaS you don't have access to the filesystem. But in 12.2 you have views that can display the content of the trace files: V$DIAG_TRACE_FILE lists the files you can access (those that pertain to your container) and V$DIAG_TRACE_FILE_CONTENTS displays their content.

Here is an example of how to read it.
I get my tracefile name from v$process:

SQL> column tracefile new_value tracefile
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
 
TRACEFILE
--------------------------------------------------------------------------------
/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc

And read the ‘payload’ from the view:

SQL> set linesize 4000 pagesize 0 trimspool on
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile';
old 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile'
new 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc'
 
no rows selected

Of course, I didn’t enable any trace, so the file is empty.

10046

You can't enable SQL Trace in the Exadata Express Cloud Service. It is disabled by the lockdown profile and by not granting execute on the packages that can do it.
Here is what I tried. Please tell me if you have other ideas:
SQL> alter session set sql_trace=true;
ERROR:
ORA-01031: insufficient privileges
 
SQL> exec sys.dbms_support.start_trace;
BEGIN sys.dbms_support.start_trace; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_SUPPORT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
 
SQL> exec dbms_session.set_sql_trace(true);
BEGIN dbms_session.set_sql_trace(true); END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SESSION", line 178
ORA-06512: at line 1
 
SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 96:
PLS-00201: identifier 'SYS.DBMS_SYSTEM' must be declared
ORA-06550: line 1, column 96:
PL/SQL: Statement ignored
 
SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 96:
PLS-00201: identifier 'SYS.DBMS_SUPPORT' must be declared
ORA-06550: line 1, column 96:
PL/SQL: Statement ignored
 
SQL> alter session set events 'sql_trace';
ERROR:
ORA-01031: insufficient privileges
 
SQL> exec execute immediate q'{alter session set events 'sql_trace'}';
BEGIN execute immediate q'{alter session set events 'sql_trace'}'; END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at line 1
 
SQL> create or replace procedure sys.my_sql_trace as begin execute immediate q'{alter session set events 'sql_trace'}'; end;
2 /
create or replace procedure sys.my_sql_trace as begin execute immediate q'{alter session set events 'sql_trace'}'; end;
*
ERROR at line 1:
ORA-01031: insufficient privileges

No doubt, we can't sql_trace. I'm not sure it is a big problem, because we have all options in the Exadata Express Cloud Service, so we have ASH, SQL Monitor, etc. I'm not saying that sql_trace is not useful anymore (there are cases where you need to see the wait events one by one), but from a development point of view a plan with execution statistics should be sufficient.
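
Such a plan with execution statistics can be captured without any trace, for example with the gather_plan_statistics hint. A quick sketch (any query works here):

SQL> select /*+ gather_plan_statistics */ count(*) from dual;
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));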

10053

Tracing the CBO is a different thing. There is no alternative when you want to understand the choices of the optimizer. It is not something I use every day, but there are cases where it is the only way to troubleshoot.

SQL> alter session set events 'trace [SQL_Optimizer.*]';
ERROR:
ORA-01031: insufficient privileges

ALTER SESSION SET EVENTS is blocked by the S20 lockdown profile. But there is another way.

I’ve run the following query:

SQL> select * from dual;
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID a5ks9fhw2v9s1, child number 0
-------------------------------------
select * from dual
 
Plan hash value: 272002086
 
----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS STORAGE FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------
 
13 rows selected.

Now I want more information about the execution plan.


SQL> exec dbms_sqldiag.dump_trace('a5ks9fhw2v9s1', 0, 'Compiler','Compiler');
PL/SQL procedure successfully completed.

Cool, it seems I'm allowed to do that. This procedure sets the SQL Compiler event, and apparently that is allowed when done from the procedure.

I can now get the trace:


SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc
 
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS
2 where adr_home=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\1')
3 and trace_filename=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\2')
4 ;
old 2: where adr_home=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\1')
new 2: where adr_home=regexp_replace('/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc','^(.*)/trace/([^/]*)','\1')
old 3: and trace_filename=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\2')
new 3: and trace_filename=regexp_replace('/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc','^(.*)/trace/([^/]*)','\2')
Trace file /u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc

This outputs the full 10053 trace:


Oracle Database 12c Enterprise Edition Release 12.2.0.0.3 - 64bit Production
Build label: RDBMS_12.2.0.0.3_LINUX.X64_160720
ORACLE_HOME: /u01/app/oracle/product/12.2.0.0.3/dbhome_1
System name: Linux
Node name: cfcldx0171.usdc2.oraclecloud.com
Release: 2.6.39-400.264.1.el6uek.x86_64
Version: #1 SMP Wed Aug 26 16:42:25 PDT 2015
Machine: x86_64
Storage: Exadata
Instance name: cfcdba1_1
Redo thread mounted by this instance: 1
Oracle process number: 109
Unix process pid: 24389, image: oracle@cfcldx0171.usdc2.oraclecloud.com
 
*** 2017-02-05T20:42:38.580323+00:00 (EPTDOJVM1KG(47))
 
*** SESSION ID:(2343.25237) 2017-02-05T20:42:38.580349+00:00
*** CLIENT ID:() 2017-02-05T20:42:38.580354+00:00
*** SERVICE NAME:(eptdojvm1kg.usdc2.oraclecloud.com) 2017-02-05T20:42:38.580358+00:00
*** MODULE NAME:(SQL*Plus) 2017-02-05T20:42:38.580363+00:00
*** ACTION NAME:() 2017-02-05T20:42:38.580368+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2017-02-05T20:42:38.580372+00:00
*** CONTAINER ID:(47) 2017-02-05T20:42:38.580377+00:00
 
Enabling tracing for cur#=6 sqlid=bqf9h9bhb6c88 recursive
Parsing cur#=6 sqlid=bqf9h9bhb6c88 len=45
sql=/* SQL Analyze(2343,0) */ select * from dual
 
End parsing of cur#=6 sqlid=bqf9h9bhb6c88
Semantic Analysis cur#=6 sqlid=bqf9h9bhb6c88
OPTIMIZER INFORMATION
 
******************************************
 
----- Current SQL Statement for this session (sql_id=3u37gu3fhx3q1) -----
/* SQL Analyze(2343,0) */ select * from dual
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x3c5cdebc38 161 package body SYS.DBMS_SQLTUNE_INTERNAL.I_PROCESS_SQL_CALLOUT
0x3c5cdebc38 14080 package body SYS.DBMS_SQLTUNE_INTERNAL.I_PROCESS_SQL
0x3e17d89590 1599 package body SYS.DBMS_SQLDIAG.DUMP_TRACE
0x25dda27db0 1 anonymous block
...

Unified Trace Service views

These views are new in 12.2 and come in variants that expose only a subset of the files, so that you can grant read access only to 10046 (SQL Trace) or 10053 (Optimizer Trace) traces, for all sessions or only for the user's own session.

GV$DIAG_TRACE_FILE (from x$dbgtflist) and GV$DIAG_TRACE_FILE_CONTENTS (from x$dbgtfview) show all files from the ADR traces
GV$DIAG_APP_TRACE_FILE (from x$dbgatflist) is a subset showing all files containing SQL Trace or Optimizer Trace
GV$DIAG_SQL_TRACE_RECORDS (from x$dbgtfsqlt) is a subset showing all files containing SQL Trace
GV$DIAG_OPT_TRACE_RECORDS (from x$dbgtfoptt) is a subset showing all files containing Optimizer Trace
V$DIAG_SESS_SQL_TRACE_RECORDS (from x$dbgtfssqlt) and V$DIAG_SESS_OPT_TRACE_RECORDS (from x$dbgtfsoptt) are similar, but restricted to your session (hence no GV$, as your session runs on one instance only).

A new role, APPLICATION_TRACE_VIEWER, grants SELECT on the views that show only SQL and Optimizer traces.
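
So a developer account can be given controlled access like this (a sketch; DEV is a hypothetical user, and I assume the session views expose the same PAYLOAD column as V$DIAG_TRACE_FILE_CONTENTS):

SQL> grant application_trace_viewer to dev;
SQL> -- then, connected as DEV, read your own optimizer traces:
SQL> select payload from v$diag_sess_opt_trace_records;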

Alert.log

This is not new in 12.2: you can query V$DIAG_ALERT_EXT to see the alert.log content. Of course, from the PDB you will see only PDB-related content:

Endian type of dictionary set to little
Autotune of undo retention is turned on.
This instance was first to open pluggable database EPTDOJVM1KG (container=47)
attach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued attach broadcast request 0x3e26185d68
[111297] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:2570370849 end:2570371107 diff:258 ms (0.3 seconds)
Database Characterset for EPTDOJVM1KG is AL32UTF8
JIT: pid 111297 requesting full stop
detach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued detach broadcast request 0x3e26185d18
Autotune of undo retention is turned on.
This instance was first to open pluggable database EPTDOJVM1KG (container=47)
attach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued attach broadcast request 0x3e26185cc8
Endian type of dictionary set to little
[111297] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:2570382111 end:2570382464 diff:353 ms (0.4 seconds)
Deleting old file#8 from file$
Deleting old file#9 from file$
Deleting old file#10 from file$
Deleting old file#11 from file$
Deleting old file#12 from file$
Deleting old file#14 from file$
Deleting old file#17 from file$
Adding new file#486 to file$(old file#8)
Adding new file#487 to file$(old file#9)
Adding new file#488 to file$(old file#10)
Adding new file#489 to file$(old file#11)
Adding new file#490 to file$(old file#12)
Adding new file#491 to file$(old file#14)
Adding new file#492 to file$(old file#17)
Successfully created internal service eptdojvm1kg.usdc2.oraclecloud.com at open
Database Characterset for EPTDOJVM1KG is AL32UTF8
Opatch validation is skipped for PDB EPTDOJVM1KG (con_id=0)
Opening pdb with no Resource Manager plan active
Creating new database key for new master key and wallet
Creating new database key with the new master key
Retiring: ena 2 flag 6 mkloc 0
encrypted key 32ce589b50b3105f9419d0a86ce8e7f400000000000000000000000000000000
mkid cbcd33d929f64fd0bf1367f6e69f2dd5
Creating: ena 2 flag e mkloc 1
encrypted key baf27c1e11623fe9837f678df2e48f8a00000000000000000000000000000000
mkid 2d99ba8694054fd5bf41f998513e1d4c
New database key and new master key created successfully
create temporary tablespace "EPTDOJVM1KG_TEMP" tempfile size 20M extent management local uniform size 16M
Force tablespace EPTDOJVM1KG_TEMP to be encrypted with AES128
Completed: create temporary tablespace "EPTDOJVM1KG_TEMP" tempfile size 20M extent management local uniform size 16M
create BIGFILE tablespace "EPTDOJVM1KG_DATA" datafile size 10485760 autoextend on next 1M maxsize UNLIMITED extent management local autoallocate segment space management auto
Force tablespace EPTDOJVM1KG_DATA to be encrypted with AES128
Completed: create BIGFILE tablespace "EPTDOJVM1KG_DATA" datafile size 10485760 autoextend on next 1M maxsize UNLIMITED extent management local autoallocate segment space management auto
Resize operation completed for file# 486, old size 307200K, new size 317440K
ALTER SYSTEM SET pdb_lockdown='S20' SCOPE=BOTH PDB='EPTDOJVM1KG';
...
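
The output above can be spooled with a simple query (a sketch; the view exposes the timestamp and the message text):

SQL> select originating_timestamp, message_text
  2    from v$diag_alert_ext
  3   order by originating_timestamp;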

So what?

We can read SQL Trace and Optimizer trace files, and this is very nice. Access to traces has always been a problem because they are on the server, may contain sensitive data, etc. Having views that give access easily and in a controlled way is a very nice 12.2 feature.
I don't know if enabling Optimizer trace on the Exadata Express Cloud Service is expected, or just something that was forgotten by the lockdown profile. I hope the first hypothesis is the right one, and I hope that we will be allowed to enable SQL Trace as well. This service is for developers, and I'm a big advocate of giving all tools to developers so that performance is addressed before production.


The article Exadata Express Cloud Service: SQL and Optimizer trace appeared first on Blog dbi services.

Netflix Genres

Bradley Brown - Mon, 2017-02-06 11:10
Today Netflix posted a "secret" list of "easy to access" genres...easy for the technical person anyway.  So I created this list to help you find them anytime you want.  Save this post and just go directly to any genre.

Action & Adventure: Go
Action Comedies: Go
Action Sci-Fi & Fantasy: Go
Action Thrillers: Go
Adult Animation: Go
Adventures: Go
African Movies: Go
Alien Sci-Fi: Go
Animal Tales: Go
Anime: Go
Anime Action: Go
Anime Comedies: Go
Anime Dramas: Go
Anime Fantasy: Go
Anime Features: Go
Anime Horror: Go
Anime Sci-Fi: Go
Anime Series: Go
Art House Movies: Go
Asian Action Movies: Go
Australian Movies: Go
B-Horror Movies: Go
Baseball Movies: Go
Basketball Movies: Go
Belgian Movies: Go
Biographical Docs: Go
Biographical Dramas: Go
Boxing Movies: Go
British Movies: Go
British TV Shows: Go
Campy Movies: Go
Children & Family Movies: Go
Chinese Movies: Go
Classic Action & Adventure: Go
Classic Comedies: Go
Classic Dramas: Go
Classic Foreign Movies: Go
Classic Movies: Go
Classic Musicals: Go
Classic Romantic Movies: Go
Classic Sci-Fi & Fantasy: Go
Classic Thrillers: Go
Classic TV Shows: Go
Classic War Movies: Go
Classic Westerns: Go
Comedies: Go
Comic Book and Superhero: Go
Country & Western/Folk: Go
Courtroom Dramas: Go
Creature Features: Go
Crime Action & Adventure: Go
Crime Documentaries: Go
Crime Dramas: Go
Crime Thrillers: Go
Crime TV Shows: Go
Cult Comedies: Go
Cult Horror Movies: Go
Cult Movies: Go
Cult Sci-Fi & Fantasy: Go
Cult TV Shows: Go
Dark Comedies: Go
Deep Sea Horror Movies: Go
Disney: Go
Disney Musicals: Go
Documentaries: Go
Dramas: Go
Dramas based on Books: Go
Dramas based on real life: Go
Dutch Movies: Go
Eastern European Movies: Go
Education for Kids: Go
Epics: Go
Experimental Movies: Go
Faith & Spirituality: Go
Faith & Spirituality Movies: Go
Family Features: Go
Fantasy Movies: Go
Film Noir: Go
Food & Travel TV: Go
Football Movies: Go
Foreign Action & Adventure: Go
Foreign Comedies: Go
Foreign Documentaries: Go
Foreign Dramas: Go
Foreign Gay & Lesbian Movies: Go
Foreign Horror Movies: Go
Foreign Movies: Go
Foreign Sci-Fi & Fantasy: Go
Foreign Thrillers: Go
French Movies: Go
Gangster Movies: Go
Gay & Lesbian Dramas: Go
German Movies: Go
Greek Movies: Go
Historical Documentaries: Go
Horror Comedy: Go
Horror Movies: Go
Independent Action & Adventure: Go
Independent Comedies: Go
Independent Dramas: Go
Independent Movies: Go
Independent Thrillers: Go
Indian Movies: Go
Irish Movies: Go
Italian Movies: Go
Japanese Movies: Go
Jazz & Easy Listening: Go
Kids Faith & Spirituality: Go
Kids Music: Go
Kids' TV: Go
Korean Movies: Go
Korean TV Shows: Go
Late Night Comedies: Go
Latin American Movies: Go
Latin Music: Go
Martial Arts Movies: Go
Martial Arts, Boxing & Wrestling: Go
Middle Eastern Movies: Go
Military Action & Adventure: Go
Military Documentaries: Go
Military Dramas: Go
Military TV Shows: Go
Miniseries: Go
Mockumentaries: Go
Monster Movies: Go
Movies based on children's books: Go
Movies for ages 0 to 2: Go
Movies for ages 2 to 4: Go
Movies for ages 5 to 7: Go
Movies for ages 8 to 10: Go
Movies for ages 11 to 12: Go
Music & Concert Documentaries: Go
Music: Go
Musicals: Go
Mysteries: Go
New Zealand Movies: Go
Period Pieces: Go
Political Comedies: Go
Political Documentaries: Go
Political Dramas: Go
Political Thrillers: Go
Psychological Thrillers: Go
Quirky Romance: Go
Reality TV: Go
Religious Documentaries: Go
Rock & Pop Concerts: Go
Romantic Comedies: Go
Romantic Dramas: Go
Romantic Favorites: Go
Romantic Foreign Movies: Go
Romantic Independent Movies: Go
Romantic Movies: Go
Russian: Go
Satanic Stories: Go
Satires: Go
Scandinavian Movies: Go
Sci-Fi & Fantasy: Go
Sci-Fi Adventure: Go
Sci-Fi Dramas: Go
Sci-Fi Horror Movies: Go
Sci-Fi Thrillers: Go
Science & Nature Documentaries: Go
Science & Nature TV: Go
Screwball Comedies: Go
Showbiz Dramas: Go
Showbiz Musicals: Go
Silent Movies: Go
Slapstick Comedies: Go
Slasher and Serial Killer Movies: Go
Soccer Movies: Go
Social & Cultural Documentaries: Go
Social Issue Dramas: Go
Southeast Asian Movies: Go
Spanish Movies: Go
Spiritual Documentaries: Go
Sports & Fitness: Go
Sports Comedies: Go
Sports Documentaries: Go
Sports Dramas: Go
Sports Movies: Go
Spy Action & Adventure: Go
Spy Thrillers: Go
Stage Musicals: Go
Stand-up Comedy: Go
Steamy Romantic Movies: Go
Steamy Thrillers: Go
Supernatural Horror Movies: Go
Supernatural Thrillers: Go
Tearjerkers: Go
Teen Comedies: Go
Teen Dramas: Go
Teen Screams: Go
Teen TV Shows: Go
Thrillers: Go
Travel & Adventure Documentaries: Go
TV Action & Adventure: Go
TV Cartoons: Go
TV Comedies: Go
TV Documentaries: Go
TV Dramas: Go
TV Horror: Go
TV Mysteries: Go
TV Sci-Fi & Fantasy: Go
TV Shows: Go
Urban & Dance Concerts: Go
Vampire Horror Movies: Go
Werewolf Horror Movies: Go
Westerns: Go
World Music Concerts: Go
Zombie Horror Movies: Go

Oracle – RMAN Backups to CIFS

Yann Neuhaus - Mon, 2017-02-06 08:26

Not too often, but sometimes you need to offload your RMAN backups and your DataPump exports to CIFS. You might run out of space on your NFS server, or you are a Windows shop that has only CIFS shares, and then you might need to put your RMAN backups on CIFS.

One very important MOS Note regarding RMAN backups to CIFS is the following:

Is use of CIFS Protocol for RMAN backups supported? (Doc ID 444809.1)

Running an RDBMS on CIFS is clearly not supported, because CIFS does not guarantee atomic writes of 512-byte blocks. RMAN, on the other hand, does not rely on 512-byte atomic writes, so it is fine to use RMAN to CIFS.

However, CIFS is not certified by Oracle, and the RDBMS development team cannot be involved in case there are any issues. The MOS note was last updated in November 2013, so it might not be a bad idea to double-check it with Oracle again.

In earlier OS and DB versions, there used to be issues with filesystemio_options set to SETALL, DIRECTIO or ASYNCH, so I wanted to double-check whether this is still the case with more recent releases.
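
For reference, checking and changing the parameter looks like this (a sketch; the parameter is static, so scope=spfile and an instance restart are needed):

SQL> show parameter filesystemio_options
SQL> alter system set filesystemio_options=SETALL scope=spfile;
SQL> -- restart the instance for the new value to take effect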

I have done the tests with Oracle 11.2.0.4 and 12.1.0.2 on OEL 6.8 with Samba 3.6.

[root@oel001 ~]# cat /etc/oracle-release
Oracle Linux Server release 6.8
[root@oel001 ~]# uname -a
Linux oel001 4.1.12-61.1.25.el6uek.x86_64 #2 SMP Wed Jan 11 19:13:03 PST 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@oel001 ~]# rpm -qa | grep samba-3.6.23
samba-3.6.23-36.0.1.el6_8.x86_64

I start with filesystemio_options set to SETALL, and run RMAN and DataPump with 16 parallel sessions each, to make sure that I hit the CIFS share quite hard. Before we start the test, we have to mount the CIFS share. All mount options are documented at the following link.

https://www.samba.org/samba/docs/man/manpages-3/mount.cifs.8.html

For my case, the most important CIFS mount options are:

user = M$ User to connect to the CIFS share
rw = mount read write
uid = sets the uid that will own all files or directories on the mounted filesystem
gid = sets the gid that will own all files or directories on the mounted filesystem
rsize = default network read size
wsize = default network write size

Regarding rsize/wsize, I have used the values that I usually take for NFS mounts.

[root@oel001 ~]# mount -t cifs //10.0.2.2/rman /rman -o rsize=32768,wsize=32768,rw,user=wrs,uid=oracle,gid=dba --verbose
Password:
mount.cifs kernel mount options: ip=10.0.2.2,unc=\\10.0.2.2\rman,rsize=32768,wsize=32768,uid=54321,gid=54322,ver=1,user=wrs,pass=********

[root@oel001 ~]# mount | grep rman
//10.0.2.2/rman on /rman type cifs (rw)

Before running any RMAN backups to it, make sure that you can create files as the oracle user. If it is failing at this step already, then you can stop here: this has to work before continuing any further.

oracle@oel001:/home/oracle/ [OCM121] cd /rman/backup/
oracle@oel001:/rman/backup/ [OCM121] touch new_file
oracle@oel001:/rman/backup/ [OCM121] ls -l new_file
-rwxr-xr-x 1 oracle dba 0 Feb  6 09:22 new_file
oracle@oel001:/rman/backup/ [OCM121] rm new_file

OK, let's start with 11.2.0.4 first. The database has filesystemio_options=SETALL, meaning that the database uses asynchronous I/O and direct I/O at the same time.

My RMAN backup tests will be a regular backup with 16 parallel sessions to that share, and a backup of the Fast Recovery Area to the same share as well.

CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO COMPRESSED BACKUPSET;
BACKUP DATABASE FORMAT '/rman/backup/%U';
BACKUP RECOVERY AREA TO DESTINATION '/rman/backup';

My DataPump test will be a full export with 16 parallel sessions.

expdp system/manager directory=rman dumpfile=rman%U.dmp parallel=16 full=y

OK, let's start with the first backup.

oracle@oel001:/home/oracle/ [OCM112] rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Mon Feb 6 13:31:37 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM112 (DBID=177109848)

RMAN> backup database format '/rman/backup/%U';

Starting backup at 06-FEB-2017 13:31:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=128 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=249 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=366 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=12 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=129 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=250 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=360 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=14 device type=DISK
allocated channel: ORA_DISK_9
channel ORA_DISK_9: SID=130 device type=DISK
allocated channel: ORA_DISK_10
channel ORA_DISK_10: SID=251 device type=DISK
allocated channel: ORA_DISK_11
channel ORA_DISK_11: SID=367 device type=DISK
allocated channel: ORA_DISK_12
channel ORA_DISK_12: SID=15 device type=DISK
allocated channel: ORA_DISK_13
channel ORA_DISK_13: SID=131 device type=DISK
allocated channel: ORA_DISK_14
channel ORA_DISK_14: SID=252 device type=DISK
allocated channel: ORA_DISK_15
channel ORA_DISK_15: SID=368 device type=DISK
allocated channel: ORA_DISK_16
channel ORA_DISK_16: SID=16 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA11G/ocm112/datafile/system.261.927382121
channel ORA_DISK_1: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_2: starting compressed full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=+DATA11G/ocm112/datafile/undotbs1.263.927382145
channel ORA_DISK_2: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_3: starting compressed full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA11G/ocm112/datafile/sysaux.262.927382135
channel ORA_DISK_3: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_4: starting compressed full datafile backup set
channel ORA_DISK_4: specifying datafile(s) in backup set
input datafile file number=00004 name=+DATA11G/ocm112/datafile/example.265.927382167
channel ORA_DISK_4: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_5: starting compressed full datafile backup set
channel ORA_DISK_5: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA11G/ocm112/datafile/users.266.927382177
channel ORA_DISK_5: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_5: finished piece 1 at 06-FEB-2017 13:32:03
piece handle=/rman/backup/0krrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_3: finished piece 1 at 06-FEB-2017 13:32:13
piece handle=/rman/backup/0irrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_4: finished piece 1 at 06-FEB-2017 13:32:13
piece handle=/rman/backup/0jrrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: finished piece 1 at 06-FEB-2017 13:32:23
piece handle=/rman/backup/0grrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_2: finished piece 1 at 06-FEB-2017 13:32:23
piece handle=/rman/backup/0hrrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:35
Finished backup at 06-FEB-2017 13:32:23

Starting Control File and SPFILE Autobackup at 06-FEB-2017 13:32:23
piece handle=+FRA11G/ocm112/autobackup/2017_02_06/s_935242343.266.935242347 comment=NONE
Finished Control File and SPFILE Autobackup at 06-FEB-2017 13:32:30

RMAN>

Cool, it seems to work perfectly. OK, let's do the backup of the FRA to CIFS as well.

RMAN> BACKUP RECOVERY AREA TO DESTINATION '/rman/backup';

Starting backup at 06-FEB-2017 14:48:35
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
using channel ORA_DISK_9
using channel ORA_DISK_10
using channel ORA_DISK_11
using channel ORA_DISK_12
using channel ORA_DISK_13
using channel ORA_DISK_14
using channel ORA_DISK_15
using channel ORA_DISK_16
specification does not match any datafile copy in the repository
specification does not match any backup set in the repository
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=77 RECID=3 STAMP=935241746
channel ORA_DISK_1: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=78 RECID=4 STAMP=935241790
channel ORA_DISK_2: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_3: starting compressed archived log backup set
channel ORA_DISK_3: specifying archived log(s) in backup set
input archived log thread=1 sequence=79 RECID=5 STAMP=935242521
channel ORA_DISK_3: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_4: starting compressed archived log backup set
channel ORA_DISK_4: specifying archived log(s) in backup set
input archived log thread=1 sequence=80 RECID=6 STAMP=935242549
channel ORA_DISK_4: starting piece 1 at 06-FEB-2017 14:48:37
channel ORA_DISK_5: starting compressed archived log backup set
channel ORA_DISK_5: specifying archived log(s) in backup set
input archived log thread=1 sequence=81 RECID=7 STAMP=935242639
channel ORA_DISK_5: starting piece 1 at 06-FEB-2017 14:48:37
channel ORA_DISK_6: starting compressed archived log backup set
channel ORA_DISK_6: specifying archived log(s) in backup set
input archived log thread=1 sequence=82 RECID=8 STAMP=935242724
channel ORA_DISK_6: starting piece 1 at 06-FEB-2017 14:48:38
channel ORA_DISK_1: finished piece 1 at 06-FEB-2017 14:48:38
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzond8_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzondy_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_3: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzonqf_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_4: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzoop8_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_5: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzooxq_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_6: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzopqx_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_6: backup set complete, elapsed time: 00:00:01
Finished backup at 06-FEB-2017 14:48:39

Starting Control File and SPFILE Autobackup at 06-FEB-2017 14:48:39
piece handle=+FRA11G/ocm112/autobackup/2017_02_06/s_935246919.286.935246921 comment=NONE
Finished Control File and SPFILE Autobackup at 06-FEB-2017 14:48:46

RMAN>

That worked very well too. Now let's do the DataPump test.

SQL> connect system/manager
Connected.

SQL> create directory rman as '/rman/backup';

Directory created.

oracle@oel001:/home/oracle/ [OCM112] expdp system/manager directory=rman dumpfile=rman%U.dmp parallel=16 full=y

Export: Release 11.2.0.4.0 - Production on Mon Feb 6 13:33:41 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, Oracle Label Security, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=rman dumpfile=rman%U.dmp parallel=16 full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 351.8 MB
. . exported "SH"."COSTS":"COSTS_Q1_1999"                183.5 KB    5884 rows
. . exported "SH"."COSTS":"COSTS_Q2_2000"                119.0 KB    3715 rows
. . exported "SH"."COSTS":"COSTS_Q1_2001"                227.8 KB    7328 rows
. . exported "SH"."COSTS":"COSTS_Q1_2000"                120.6 KB    3772 rows
. . exported "SH"."CUSTOMERS"                            9.853 MB   55500 rows
. . exported "SH"."COSTS":"COSTS_Q3_1998"                131.1 KB    4129 rows
. . exported "SH"."COSTS":"COSTS_Q2_1998"                79.52 KB    2397 rows
. . exported "SH"."COSTS":"COSTS_Q2_1999"                132.5 KB    4179 rows
. . exported "SH"."COSTS":"COSTS_Q1_1998"                139.5 KB    4411 rows
. . exported "SH"."COSTS":"COSTS_Q3_2000"                151.4 KB    4798 rows
. . exported "SH"."COSTS":"COSTS_Q3_2001"                234.4 KB    7545 rows
...
...
...
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/INC_TYPE
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
. . exported "ORDDATA"."ORDDCM_MAPPING_DOCS"             7.890 KB       1 rows
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/XMLSCHEMA/XMLSCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/BITMAP_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/EVENT/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
Processing object type DATABASE_EXPORT/SCHEMA/JOB
Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/AUDIT
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /rman/backup/rman01.dmp
  /rman/backup/rman02.dmp
  /rman/backup/rman03.dmp
  /rman/backup/rman04.dmp
  /rman/backup/rman05.dmp
  /rman/backup/rman06.dmp
  /rman/backup/rman07.dmp
  /rman/backup/rman08.dmp
  /rman/backup/rman09.dmp
  /rman/backup/rman10.dmp
  /rman/backup/rman11.dmp
  /rman/backup/rman12.dmp
  /rman/backup/rman13.dmp
  /rman/backup/rman14.dmp
  /rman/backup/rman15.dmp
  /rman/backup/rman16.dmp

Job "SYSTEM"."SYS_EXPORT_FULL_01" completed at Mon Feb 6 13:37:32 2017 elapsed 0 00:03:44


Cool. That also worked perfectly.

I have repeated the same test with 12.1.0.2, also with no issues. In earlier releases I had issues with filesystemio_options=SETALL, where I had to disable direct I/O, and sometimes both direct and asynchronous I/O. But this is not the case anymore with more recent OEL and Samba releases.

Conclusion

I would not use RMAN backups to CIFS as a permanent solution, but if you are running out of space on your NFS share, or maybe need a target for a migration, it is a good alternative.


The article Oracle – RMAN Backups to CIFS appeared first on Blog dbi services.

Difference Between CHAR, VARCHAR, and VARCHAR2 Data Types in Oracle

Complete IT Professional - Mon, 2017-02-06 05:00
Oracle has several character data types that are commonly used and can also be confusing if you don’t know the differences between them. Learn what they are and the differences between them in this article. The Oracle character data types that are used most often are CHAR and VARCHAR2. There is also a VARCHAR data […]
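
For a quick taste of the main difference (a sketch: CHAR is blank-padded to its declared length, while VARCHAR2 stores only what you insert):

SQL> create table t (c char(10), v varchar2(10));
SQL> insert into t values ('abc', 'abc');
SQL> select length(c) len_c, length(v) len_v from t;

     LEN_C      LEN_V
---------- ----------
        10          3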
Categories: Development
