
DBA Blogs

SQL Server Row Level Security

Pythian Group - Tue, 2015-10-27 08:27


Row Level Security (RLS) has been implemented in SQL Server 2016 for both on-premises instances and V12 Azure SQL Database.

The problem this solves is a common one: a company with multiple applications accessing sensitive data in one or more tables.

How do you ensure the data being read or written is only the data that a given login is authorized to see? In the past, this has been accomplished with a complicated series of views or functions, and there’s no guarantee a bug or malicious user wouldn’t be able to bypass those measures. With Row Level Security, the filter is applied no matter what privileges you have (including sysadmin) or how you try to access the data.

How it Works

Row Level Security has two options: You can either FILTER the rows or BLOCK the operation entirely. The BLOCK functionality is not yet implemented in CTP 2.4, but the FILTER logic works like a charm.

The steps are very simple:
1 – Figure out what you’re going to associate with your users and data. You will need to create some link between your data and a login’s or user’s properties. Something that will allow the engine to say This Row is ok for This User.

2 – Create a Function defining the relationship between users and the data.

3 – Create a Security Policy for the function and table(s). You can use the same policy on multiple tables or views.

Once the Security Policy has been created, every query or DML operation on the tables or views you’re filtering will automatically have the function applied to the WHERE or HAVING clause. You can see the filter working by reviewing the execution plan as well. SQL Server will generate the Plan Hash value with the filtering logic in place. This allows Plan Re-Use with Row Level Security, and it’s a big improvement over Oracle’s implementation which doesn’t do this (as of Oracle 10g, the last time I worked with it) :-). See the bottom of this post for an example of the same query with RLS turned on & off.

What is particularly nice about these policies is that the data is filtered regardless of the user’s privileges. A sysadmin or other superuser who disables the policy is just an Audit log review away from having to explain what they were doing.
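For illustration, here is what that toggle looks like, a minimal sketch using the policy name created in the walkthrough below:

-- Disable the filter predicate; queries now return unfiltered rows for everyone.
ALTER SECURITY POLICY sales.RestrictCreditCardToBusinessEntity WITH (STATE = OFF);

-- Turn it back on when done.
ALTER SECURITY POLICY sales.RestrictCreditCardToBusinessEntity WITH (STATE = ON);

Because this is ordinary DDL, it is exactly the kind of action a SQL Server Audit or DDL trigger can capture, which is what makes the audit-review scenario above realistic.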

Row Level Security Walk Through

This is an example of setting up an RLS system for the Credit Card data in the AdventureWorks database. After this is completed, only users associated with a Business Entity in the Person.Person table will be able to see or update any credit card information, and the data they can touch will be limited to just their business.

Step 1: Add a UserName column to the Person.Person table

In this example, I’m associating the user_name() function’s value for each login with the BusinessEntityID. Of course, you can use any value you want, as long as you can access it from a SELECT statement in a Schema-Bound function. This means many system tables are off-limits.

USE AdventureWorks

ALTER TABLE person.person
ADD UserName nvarchar(128) NULL;

-- Associate some person.person rows with a login too.
UPDATE person.person
SET UserName = 'Business1'
WHERE BusinessEntityID IN (301, 303, 305);

Step 2: Create Users to Test

I’m just creating a login named Business1 to demonstrate this. Note that the user has db_owner in AdventureWorks, so ordinary permissions would not limit what it can see.

USE [master]
GO
CREATE LOGIN [business1] WITH PASSWORD = N'<strong password here>'
GO
USE [AdventureWorks]
GO
CREATE USER [business1] FOR LOGIN [business1]
GO
ALTER ROLE [db_owner] ADD MEMBER [business1]
GO

Step 3: Create Function to Filter Data

This function finds all credit cards for the user_name() running the query. Any values not returned by this function will be inaccessible to this user.

CREATE FUNCTION [Sales].[fn_FindBusinessCreditCard] (@CreditCardID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS result
    FROM person.person p INNER JOIN
         sales.PersonCreditCard pcc ON p.BusinessEntityID = pcc.BusinessEntityID INNER JOIN
         sales.CreditCard cc ON pcc.CreditCardID = cc.CreditCardID
    WHERE cc.CreditCardID = @CreditCardID AND
          p.UserName = user_name();

Step 4: Create a Security Policy

This creates a security policy on the Sales.CreditCard table.

CREATE SECURITY POLICY sales.RestrictCreditCardToBusinessEntity
ADD FILTER PREDICATE sales.fn_FindBusinessCreditCard(CreditCardID)
ON Sales.CreditCard

Step 5: Test Away

For all of the following examples, you should be logged in as the Business1 user, who can only see 3 credit cards. In reality, there are 19,118 rows in that table.

-- Will return three records
SELECT pcc.*, cc.*
FROM Sales.PersonCreditCard pcc INNER JOIN
     Sales.CreditCard cc ON cc.CreditCardID = pcc.CreditCardID;

-- Will only update three records
UPDATE Sales.CreditCard
SET ExpYear = '2020';

These are the execution plans for the above query with Row Level Security turned on and off:

Turned On (and missing an index…):

[Execution plan with Row Level Security turned on.]

Turned Off:

[Execution plan with Row Level Security turned off.]


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

A Cassandra Consistency Use Case

Pythian Group - Tue, 2015-10-27 08:07


I recently completed a project where I worked with a company using Cassandra to keep metadata about objects stored in an Object Store. The application keeps track of individual objects as rows within a partition based on user id. In an effort to save space, there is also a mechanism to track duplicate references to the objects in another table. Object writes take place as background activity, and the time it takes to complete those writes is invisible to the application’s end users. The time it takes to retrieve an object, though, is very visible to the end user. The keyspace was defined with NetworkTopologyStrategy across two data centers (actual data centers, about 50 ms apart) with a replication factor of 3 in each.

Initially the application was set up to use consistency ONE for both writes and reads. This seemed to be working okay until we started doing failure testing, at which point objects would come up missing because of the delay in pushing hinted handoffs from one node to another. A simple solution was to make all writes and reads LOCAL_QUORUM, and doing so did resolve pretty much all of the testing errors, but at a much higher latency on both reads and writes, about three times longer than with consistency ONE. Even so, the latencies were deemed acceptable since they were still well under the anticipated network latencies outside of the data centers, which is what the users would actually see.

Could we have done better than that though?

The writes are a background activity not visible to the end user, so the increased write latency is reasonable there. The read latency, however, is visible to the user. There is an option which guarantees finding the stored object references while still keeping the latency to a minimum. This is what I propose: set the default read consistency back to ONE; most of the time a read to Cassandra will find the object reference, as was clear in the initial testing. But if a read returns no object reference, issue a second read using LOCAL_QUORUM. This way most (more than 99%) of all reads are satisfied with the much lower-latency consistency of ONE, and only occasionally is the second read needed. This can be extended further to a full QUORUM read if the LOCAL_QUORUM read also comes back empty.

It is important to note that this approach only works if there are no row versions, i.e. rows only ever exist or do not exist. It breaks down if a row can have different versions over time, as it would if the row were updated rather than just inserted and later deleted. It is also important to note that it is possible to find a deleted row this way. For this use case, neither of these qualifications is an issue.


Discover more about our expertise in Cassandra.

Categories: DBA Blogs

Running SQL Server Jobs Against an Azure SQL Database

Pythian Group - Tue, 2015-10-27 07:50


I recently had a client ask how to run SQL Server jobs against an Azure SQL Database, and because SQL DB doesn’t have a SQL Agent like most other versions of SQL Server, it isn’t always obvious how to implement them. Fortunately, we have several options, both in Azure and within a “normal” instance.

Options to Run Jobs

The first three options require a non-Azure version of SQL Server to be running and to have connectivity to the Azure SQL DB.

  1. Linked Server
  2. Maintenance Plan w/new connection
  3. Powershell Script
  4. Azure Services

Elastic DB Jobs

As mentioned by Warner Chaves in the comments, and currently (Nov 2015) in preview, the new Elastic Database Jobs might also work well.

Linked Server

To create a linked server to an Azure DB, you just need to get the ODBC connection string from the Azure portal and use it as the “Provider String” when defining the LS.
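Here is a minimal sketch of that setup; the server name, database, user and password are placeholders, MSDASQL (the OLE DB provider for ODBC) is just one provider choice, and the @provstr value is the ODBC connection string copied from the Azure portal:

EXEC master.dbo.sp_addlinkedserver
    @server = N'AZURESQLDB',
    @srvproduct = N'',
    @provider = N'MSDASQL',
    @provstr = N'Driver={SQL Server Native Client 11.0};Server=tcp:yourserver.database.windows.net,1433;Database=YourDb;Encrypt=yes;';

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'AZURESQLDB',
    @useself = N'False',
    @rmtuser = N'YourUser',
    @rmtpassword = N'YourPassword';

-- Quick smoke test through the linked server.
SELECT * FROM OPENQUERY(AZURESQLDB, 'SELECT GETDATE() AS TimeOfQuery;');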




Maintenance Plan w/New Connection

Simply create your maintenance plan with all the required logic for your job, then click the “Manage Connections” menu at the top of the screen and define the new connection to be used for this plan.




Powershell Script

You can use the Powershell Invoke-SqlCmd functionality and run it as a Powershell job step.

Invoke-Sqlcmd -Query "SELECT GETDATE() AS TimeOfQuery;" -ServerInstance "AzureInstance" -Username "UserName" -Password "Password"

Azure Services

These change, it seems like, almost daily, so I’ve listed some of the common ways this is currently done.

  1. The Azure Scheduler used in conjunction with mobile services.
  2. Azure Automation


Discover more about our expertise in Cloud and SQL Server.

Categories: DBA Blogs

Monday OpenWorld

Bobby Durrett's DBA Blog - Mon, 2015-10-26 22:10

It was a good first full day at Oracle OpenWorld.  It started with the keynote led by Oracle’s CEO.  Of course he was very upbeat about Oracle’s products.  But, I found his comments about the economy and the way it affects IT spending more interesting than Oracle’s cloud products.  My translation is that companies have a lot of older systems that they can’t easily retire or upgrade but they want to quickly add all this new functionality.  I see that in my company so it rings true.  I don’t really believe that the cloud is the long-term answer but it makes me wonder what the real answer is.  I always come back to prioritization.  I think prioritizing spending is more important than moving things to the cloud.  You can’t afford to do everything so you have to make tough choices about what to spend your IT dollars on. That’s my opinion at least.

Next I went to a session on Coherence.  I kind of went out of a sense of obligation since our company owns the product.  But, it was a surprisingly good session.  It had a lot in it about Java 8 and the new features in it for parallel processing.  It made me want to dust off my Java skills and generally think about parallel processing in the Oracle database and how it relates to that in Hadoop, Scala, etc.

I went to two sessions on analytics, again out of a sense that I needed to learn about analytics and not due to any enthusiasm about it.  The first session was really interesting, but the 3:30 session almost literally put me to sleep.  The first session reminded me of some of the things in Craig Shallahamer’s forecasting book such as making a model of a system and doing validation of the model.  Analytics seems to follow a similar process. But, by the late afternoon a non-technical session on analytics in banking nearly knocked me out.

Wedged between my two analytics sessions I went to a very cool In-Memory Option boot camp.  I have not had the time or taken the time to look at the In-Memory Option and I got a nice fast hour-long exposure to it.  I don’t know if the other people in the class were lost because there were a lot of explain plans flying by but it is the type of stuff I’m interested in so it was nice that it was so technical.  The In-Memory Option reminded me a lot of Exadata smart scans and hybrid columnar compression.

Strangely enough multiple speakers pronounced columnar differently than I have done so I guess I will have to change.  They emphasize the second syllable but I usually emphasize the first.

I also snuck into the OakTable World presentation by Tanel Poder.  It had to do with querying Hadoop clusters from Oracle databases using ODBC/JDBC.  Pretty cool.  I also got to scope out the venue for my talk tomorrow in the process.

That’s enough for today.  I got a lot of good information.  I’ve slotted tomorrow for OakTable World, so it will be interesting to see what all I can learn there.





Categories: DBA Blogs

EMEA Partners: Oracle Project Portfolio Management Cloud Implementation Workshops

Oracle will organize several 5-day workshops between November 2015 and May 2016. We are pleased to invite your ERP Practice to a 5-day hands-on Oracle Project Portfolio Management...

We share our skills to maximize your revenue!
Categories: DBA Blogs

EMEA Partners: Oracle Procurement Cloud Implementation Workshops

Oracle will organize several 5-day workshops between November 2015 and May 2016. We are pleased to invite your ERP Practice to a 5-day hands-on Oracle Procurement Cloud Implementation...

We share our skills to maximize your revenue!
Categories: DBA Blogs

EMEA Partners: Oracle Financials Cloud Implementation Workshops

Oracle will organize several 5-day workshops between November 2015 and May 2016. We are pleased to invite your Financials Cloud Practice to a 5-day hands-on Oracle Financials Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #446: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-10-23 15:25

This Log Buffer Edition covers some more Oracle, SQL Server and MySQL blog posts from across the world.


Oracle:

  • Should I monitor the MGMTDB database?
  • Connecting MAF Applications to Mobile Cloud Service
  • JRE 6, 7, and 8 now certified with EBS 11i, 12.1, and 12.2
  • The database writer copies data blocks from the buffer cache onto disk. The db_writer_processes initialization parameter determines how many processes will do this task. Its default value is 1 or cpu_count / 8, whichever is greater. I found an Oracle 9 database on a Tru64 server with cpu_count set to 1
  • How To Install Latest Verisign G5 Root Certificates

SQL Server:

  • Dynamic Pivot, Passing Temp Table Variables to Dynamic SQL
  • Oracle offers a results cache in the database (from 11.2 onwards) and in PL/SQL (again, from 11.2 onwards) which can greatly reduce execution time of repeated statements when insert/update/delete activity is not heavy.
  • Maintaining a grouped running MAX (or MIN)
  • Harnessing SQL Server Metadata- Disabling and Rebuilding Indexes
  • Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator


MySQL:

  • ClusterControl Tips & Tricks: Updating your MySQL Configuration
  • Become a MySQL DBA blog series – Optimizer Hints for faster query execution
  • Loading Arbitary XML documents into MySQL tables with p_load_xml
  • Use MySQL and Perl to automatically find, follow and unfollow twitter users
  • Great Advice on Monitoring Amazon RDS For MySQL From Datadog

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Reviewing Delphix blog posts before OpenWorld

Bobby Durrett's DBA Blog - Thu, 2015-10-22 10:13

I just finished reviewing my Delphix blog posts in preparation for the talks that I will give during OpenWorld. I find myself referring back to my blog to help me remember what I have done in the past. I was thinking that I needed to jog my memory so that I could answer questions when I give my talks.  Some of the Delphix topics that I am speaking about occurred a year or two ago so my memories are fuzzy.  But, reading my own posts brought a lot of the details back.

I thought I would list the Delphix posts, even though people can find these just by searching my web site. If you are coming to my talk and want more details, or if you just want all the Delphix-related information on my site, I have collected it here.

I have two talks scheduled during OpenWorld next week.

The first talk is at OakTable World at 9 am on Tuesday.

The second talk is at Delphix Sync between 3 and 4 pm Thursday.  The second talk is a ten minute “lightning talk” version of the first.

I hope to see you at one of the talks and if not I hope this post is valuable to you.


Categories: DBA Blogs

Apex 5.0.2 released

Flavio Casetta - Wed, 2015-10-21 14:37
Categories: DBA Blogs

SQL On The Edge #4 – SQL 2016 Stretch Database

Pythian Group - Wed, 2015-10-21 09:49


The Microsoft SQL Server team continues to build very interesting integrations between the full SQL Server box product we all know and the Azure cloud platform. Stretch Database is another such integration: it will be included as part of SQL Server 2016, and it will help customers solve these issues:

– Running out of space in your own data center?
– Have long archival requirements for your data?
– Users still query data from many years ago on occasion?
– You don’t want to do code changes to deal with these scenarios?


Stretch Database attacks those problems by allowing us to “stretch” tables from on-premises (or full VM) SQL Servers into a table living in Azure SQL Database. Your users and applications will not be aware of where the data is coming from; they just query, and SQL Server handles the behind-the-scenes magic of going either to the local data files or to the cloud to get the records.

So if you’re running out of space, you can stretch to a Standard tier db and get 250GB instantly, 500GB on Premium, or even 1TB if you get a P11! Of course you will have to pay for the Azure SQL database cost, but considering it’s a fully managed service, in some scenarios this can easily offset the cost of the on-premises storage plus managing all the archival, if your system is busy and large enough.

The feature also works transparently with backup and restore operations. When you back up your SQL Server database, the backup will contain the metadata for the Stretch database, and if you have the proper credentials, you can re-enable the connection to the Azure SQL database on RESTORE. If you just want to RESTORE as, say, a DEV copy, then you can either not reconnect the stretch database at all, or reconnect it to some other DEV Azure copy.

As is usually the case, this feature does come with some limitations. For example, it can’t be used on replicated tables, In-Memory tables, or tables using the CLR-based data types (hierarchyid, XML, spatial types, etc.). The main limitation right now is that UPDATE and DELETE are not supported, so it’s very much for ‘append-only’ data. However, for many scenarios these limitations can be more than acceptable. You can see the full list here.
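For orientation, enabling the feature looks roughly like this in the pre-release builds. Treat it as a sketch only: the database, server and table names are placeholders, and the exact syntax has been changing between CTPs, so check the current documentation (or the video below) before using it:

-- Allow the instance to use Stretch Database at all.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Point the local database at an Azure SQL Database server.
ALTER DATABASE StretchDemo
    SET REMOTE_DATA_ARCHIVE = ON (SERVER = N'yourserver.database.windows.net');

-- Start migrating rows of one table to Azure in the background.
ALTER TABLE dbo.SalesArchive
    ENABLE REMOTE_DATA_ARCHIVE WITH (MIGRATION_STATE = ON);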

To see how we enable this feature and how it works, let’s jump to the video, enjoy!


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Gotta Love English

Pythian Group - Wed, 2015-10-21 07:28


I do love to write, and have always enjoyed the idiosyncrasies of this wonderful language. One of my favourite features of English is how I struggle with the following items almost daily:

  1. I am doing a few things to alleviate the need to worry about something. Do I write There is a handful of things … or There are a handful of things …? I recently asked a handful of colleagues their feelings on the above and the results are in … 5-5. I guess the jury is still out on that one, but I still maintain there is a handful of questions as yet unanswered.
  2. I hope my decision does not affect (or is that effect) others’ opinion of my writing skills. Even though classically I use affect as a verb and effect as a noun … hold on … wait a sec … maybe it’s the other way around. I will still struggle and every time hope when I use one of them wrong, the effect of that mistake will not affect my feelings for the wonders of this fine language.
  3. It’s time for me to take its advice. I am not sure if it stands on its own with respect to ownership. Most words in this insane language use the ‘s to show possession but alas not all. Needless to say, it’s not obvious and I pause for a moment of thought every time I use this word, with or without the s on the end and/or the ‘s.
  4. That pesky ing ending to so many words. I saw him walking into a store. Is this a gerund or a present participle? I am not sure if anyone cares, maybe except me :). Google says that a gerund is “a noun made from a verb by adding -ing.” Thus, I believe walking, in the context of the above statement, is a gerund, being a verb behaving like a noun. Pencil is a noun and, when someone loses one, we say Have you seen his pencil not Have you seen him pencil. If this is the case, why would one say I saw him walking rather than I saw his walking? A noun is a noun.

To sum up this musing, suffice it to say that a handful of questions has its effect on my studying more about this quirky language we all love.

Categories: DBA Blogs

Errors in a Pluggable Database?

Pythian Group - Wed, 2015-10-21 06:23


There might be a situation where executing some DDL in a pluggable database causes the following error:

ORA-65040: operation not allowed from within a pluggable database

This error can occur when a tablespace is being dropped from within a PDB and that tablespace is a former default tablespace still holding some of the system objects. Even system objects cannot be moved with simple ALTER statements from within a PDB.

So, in order to move these objects from within a PDB, you should use the procedure dbms_pdb.exec_as_oracle_script, which is undocumented so far.

For example:

exec dbms_pdb.exec_as_oracle_script('alter table <owner>.<table_name> move tablespace <tablespace name>');
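So, in the scenario above, the sequence from within the PDB would look something like this (a sketch only; the owner, table and tablespace names are hypothetical):

-- Run as SYSDBA inside the affected PDB.
exec dbms_pdb.exec_as_oracle_script('alter table sys.example_tab move tablespace system');

-- Once no system objects remain in the former default tablespace, it can be dropped.
drop tablespace old_default including contents and datafiles;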

From My Oracle Support, Doc ID 1943303.1 lists:

—   This procedure enables execution of certain restricted statements (most DDLs) on metadata-linked objects, from within a PDB.


Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Four members of our team are in the Leadership Circle!

The Oracle Instructor - Wed, 2015-10-21 02:49

Oracle University Leadership Circle Q1FY16

The recipients of the Oracle University Leadership Circle for Quarter 1, Fiscal Year 2016 have just been announced. It is a corporate award for the best instructors worldwide, according to customer feedback. No fewer than four (!) come from our small team of 14 instructors:

Leadership Circle Recipients Q1FY16

Yes, we have a great team – supported by a great manager: Thank you, Richard! Congratulations to everyone in the circle this time, I feel honored to be listed together with you!

By the way, I never understood why the fiscal year is ahead of the calendar year – suppose you need to study business economics for that ;-)

Categories: DBA Blogs

Speaking at SANGAM 2015

Oracle in Action - Wed, 2015-10-21 00:27


The AIOUG meet “SANGAM – Meeting of Minds” is the largest independent Oracle event in India, organized annually in the month of November. This year’s Sangam (Sangam15, the 7th Annual Oracle Users Group Conference) will be held at the Hyderabad International Convention Centre, Hyderabad, on Saturday 21st & Sunday 22nd November 2015.

I will be speaking at this year’s SANGAM about Oracle Database 12c new feature : Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), and this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP and hence its corresponding NFS export(s) will automatically fail over to one of the surviving nodes so that the NFS client continues to receive uninterrupted service of NFS exported paths.

My session will be held on Saturday, November 21, 2015 from 5:10pm to 6:00pm in Hall 5 (Ground Floor).

Hope to meet you there!!





Categories: DBA Blogs

The 2nd Annual PASS Summit Bloggers Meetup (2015)

Pythian Group - Tue, 2015-10-20 17:20

I’m excited to announce the second annual PASS Summit Bloggers Meetup! We began this last year and it was cool but this year will be even cooler!

What: PASS Summit Bloggers Meetup 2015
When: Thursday, October 29th, 5:30pm
Where: Sports Restaurant & Bar, 140 4th Ave North, Suite 130, Seattle, WA 98109.
How: Please comment with “COUNT ME IN” if you’re coming — we need to know attendance numbers.

We’re excited to meet old friends, and make new ones in Seattle this year. Pythian will sponsor munchies and drinks as usual. There will be a networking contest with some cool prizes, plus you will get your very own Love Your Data t-shirt (at least the first 50 people). Meetups are a great way to make new friends in the community, so come on out — all bloggers are welcome!

If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL with your comment — it’s a Bloggers Meetup after all! Make sure you comment here if you are attending so that we have enough room, food, and drinks.

See the photos from last year’s meetup, courtesy of Pat Wright.

SQL PASS 2014 Bloggers Meetup photo

The location is perfect to get ready for the Community Appreciation Party — a few minutes’ walk from the EMP Museum! Snacks and drinks before the big event and mingling with fellow bloggers. What can be better?

Of course, do not forget to blog and tweet about this year’s bloggers meetup using #Summit15 #sqlpass. See you there!


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Oracle OpenWorld 2015 – Bloggers Meetup

Pythian Group - Tue, 2015-10-20 16:51

Many of you are coming to San Francisco next week for Oracle OpenWorld 2015, and many of you have already booked time on your calendars for Wednesday evening, before the appreciation event. You are right — the Annual Oracle Bloggers Meetup, one of your favorite events of OpenWorld, is happening at the usual place and time!

What: Oracle Bloggers Meetup 2015.

When: Wed, 28-Oct-2015, 5:30pm.

Where: Main Dining Room, Jillian’s Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). Please comment with “COUNT ME IN” if you’re coming — we need to know the attendance numbers.

As usual, Oracle Technology Network and Pythian sponsor the venue, drinks and cool fun social stuff. This year we are dropping a cool app and resurrecting traditions — you know what it means and if not, come and learn. All blogger community participants are invited — self qualify is what that means ;).

As usual, vintage t-shirts, ties, or bandanas from previous meetups will make you look cool — feel free to wear them.

For those of you who don’t know the history: The Bloggers Meetup during Oracle OpenWorld was started by Mark Rittman and continued by Eddie Awad, and then I picked up the flag in 2009. This year we have Oracle Technology Network taking more leadership on the organization of the event in addition to just being a “corporate sponsor”.

The meetups have been a great success for making new friends and catching up with the old, so let’s keep them this way! To give you an idea, here are the photos from the OOW08 Bloggers Meetup (courtesy of Eddie Awad) and OOW09 meetup blog post update from myself, and a super cool video by a good blogging friend, Bjorn Roest from OOW13.

While the initial meetings were mostly targeted to Oracle database folks, guys and gals from many Oracle technologies — Oracle database, MySQL, Apps, Sun technologies, Java and more join in the fun. All bloggers are welcome. Last year we crossed 150 attendees and I expect this year we may set a new record.

If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL (or whatever you consider a replacement of that — I’ll leave it to your interpretation) with your comment — it’s a Bloggers Meetup after all! Please do make sure you comment here if you are attending so that we have enough room, food, and (most importantly) drinks.

Of course, do not forget to blog, tweet, linkedin, G+, instagram, email and just talk about this year’s bloggers meetup. See you there — it will be fun!


Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

RMAN old feature: Restore datafile without backup

The Oracle Instructor - Tue, 2015-10-20 04:09

Say I have created a new tablespace recently and did not yet take a backup of the datafile. Now I lose that datafile. Dilemma? No, because I can do an ALTER DATABASE CREATE DATAFILE. Sounds complex? Well, even if I weren’t aware of that possibility, a simple RMAN restore will work – as if there were a backup:

RMAN> create table tablespace tbs1 as select * from adam.sales where rownum<=10000; 
Statement processed 
RMAN> alter system switch logfile;

Statement processed

RMAN> host 'echo kaputt > /u01/app/oracle/oradata/prima/tbs1.dbf';

host command complete

RMAN> select count(*) from;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 10/20/2015 11:50:12
ORA-01115: IO error reading block from file  (block # )
ORA-01110: data file 5: '/u01/app/oracle/oradata/prima/tbs1.dbf'
ORA-27072: File I/O error
Additional information: 4
Additional information: 131

RMAN> alter database datafile 5 offline;

Statement processed

RMAN> restore datafile 5;

Starting restore at 2015-10-20 11:50:43
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=187 device type=DISK

creating datafile file number=5 name=/u01/app/oracle/oradata/prima/tbs1.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 2015-10-20 11:50:45

RMAN> recover datafile 5;

Starting recover at 2015-10-20 11:50:52
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 2015-10-20 11:50:53

RMAN> alter database datafile 5 online;

Statement processed

RMAN> select count(*) from;


Cool, isn’t it? It requires that you have all archived logs available since the creation of the tablespace. And besides being able to run the SQL commands directly inside the RMAN shell, this is not a 12c new feature. It has worked that way forever, as far as I recall. Don’t believe it? Test it!  Maybe not on a production system ;-)
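For completeness, the manual route mentioned at the top (ALTER DATABASE CREATE DATAFILE) would look roughly like this for the demo above, run from SQL*Plus; just a sketch:

-- Take the lost file offline, re-create it empty, then roll it forward and bring it back.
ALTER DATABASE DATAFILE 5 OFFLINE;
ALTER DATABASE CREATE DATAFILE '/u01/app/oracle/oradata/prima/tbs1.dbf';
RECOVER DATAFILE '/u01/app/oracle/oradata/prima/tbs1.dbf';
ALTER DATABASE DATAFILE 5 ONLINE;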

Tagged: Backup & Recovery, RMAN
Categories: DBA Blogs