
Feed aggregator

Reading System Logs: SQL Server – Part 2

Pythian Group - Mon, 2015-07-20 13:52

Greetings!

Last time I talked about reading System Logs on the SQL Server box, explaining why it is really important that DBAs scan through the logs once a day on a critical production system. As I mentioned in my previous blog post, there are sometimes messages logged as information that can be treated as an early warning before the system gets actual error messages – a sign of a warning or an error to come. That is why it is important to read the information-level messages. Let me tell you about yet another case I had, where a disk subsystem issue was reported as information in the system logs.

In this case, the system was suffering from high disk I/O. The disk that ended up being replaced was used for writing backups. For a few days we had observed that writing backups was taking longer than before. The number of databases was the same and their sizes had not increased drastically, yet the time it was taking to write the backups had increased significantly. Looking at the system logs, I noticed some messages related to the disk. Searching for those messages led me to several links pointing toward a disk issue. After I worked with the storage admins, they confirmed the issue too, and they are now procuring a new disk.

So, here is what I would say: when you start your day, spare a few minutes to read the system logs. At Pythian, we have our home-grown monitoring tool, Avail, which does this job for us, summarizing information, warnings and errors in a report.
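If you want to fold a quick check into that morning routine yourself, the SQL Server error log can be scanned from T-SQL (the Windows System log itself is read through Event Viewer or PowerShell). Here is a minimal sketch using the sp_readerrorlog procedure; the search string is just an illustration:

-- Scan the current SQL Server error log (file 0) for entries containing 'disk'
EXEC sp_readerrorlog 0, 1, N'disk';

-- The SQL Server Agent log can be scanned the same way (second parameter = 2)
EXEC sp_readerrorlog 0, 2, N'disk';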

Excerpts from the System Log:

Log Name:      System
Source:        Server Administrator
Date:          6/18/2015 10:55:55 PM
Event ID:      2271
Task Category: Storage Service
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      SQLServer
Description:
The Patrol Read corrected a media error.:  Physical Disk 0:0:10 Controller 0, Connector 0

photo credit: Ubicación del disco duro (antiguo) a desmontar via photopin (license)

 

Learn more about our expertise in SQL Server.

The post Reading System Logs: SQL Server – Part 2 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

RMAN 11g : How to Restore / Duplicate to a More Recent Patchset

Pythian Group - Mon, 2015-07-20 13:24

In an Oracle DBA’s life, you’ll regularly be asked to apply a new patchset to a database; you will apply it starting from the development database and working up to the production database, and this process can be quite long depending on the organization you are working for.

In an Oracle DBA’s life, you’ll also regularly be asked to refresh a development database (or any environment before production) with the production data for development, test or whatever needs. For years now, RMAN has helped us a lot to perform this kind of task easily.

And what should happen always happens: one day you will be asked to refresh your more recent patchset DEV database (let’s say 11.2.0.4) with your PROD data (let’s say it’s running an 11.2.0.3 version). And let’s call a spade a spade: that can be a bit tricky, especially if you only discover that the versions are different once the RESTORE / DUPLICATE is finished, because you launched the usual refresh scripts forgetting this little detail…

A solution could be to ask the sysadmin team for some GB, copy an 11.2.0.3 ORACLE_HOME from another server, quickly clone it on the DEV server, run an RMAN DUPLICATE / RESTORE DATABASE from the 11.2.0.3 PROD to the 11.2.0.3 DEV and then upgrade it to 11.2.0.4. But this will probably be quite long, and if adding some GB to a server requires procedures, validations, etc., it could take many days to refresh the DEV database, which is obviously not what anybody wants. And this possibility does not exist at all if you face the issue after the RESTORE / DUPLICATE is finished.

Fortunately, there is a way to achieve this goal by directly performing a RESTORE / DUPLICATE to a more recent patchset (note that this method also works for 10g databases). Let’s explore the two cases you can face when doing a direct RESTORE / DUPLICATE to a more recent patchset database.
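For reference, the refresh itself is launched with the usual RMAN commands. Here is a minimal sketch of the active-duplication variant, where the connection aliases and the DEV database name are illustrative assumptions:

RMAN> connect target sys@PROD11203
RMAN> connect auxiliary sys@DEV11204
RMAN> duplicate target database to DEV11204
2>      from active database
3>      nofilenamecheck;

The restore-from-backup variant is launched just as usual too; in both cases, the interesting part is what happens at the end, as the two cases below show.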

 

RESTORE / DUPLICATE DATABASE case

Whether we are restoring or duplicating the production database from a backup, here is what will happen on the DEV database:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/11/2015 22:38:59
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 17 and starting SCN of 2232530
RMAN>

Here, we can’t open the database with the RESETLOGS option due to the patchset version difference. We have to use a slightly different command:

SQL> alter database open resetlogs upgrade ;
Database altered.
SQL>

Now that the database is opened in upgrade mode, we can apply the 11.2.0.4 patchset and open it.

SQL> @?/rdbms/admin/catupgrd

...

SQL> startup
ORACLE instance started.

Total System Global Area 626327552 bytes
Fixed Size 2255832 bytes
Variable Size 243270696 bytes
Database Buffers 377487360 bytes
Redo Buffers 3313664 bytes
Database mounted.
Database opened.
SQL>

This one is in fact quick and easy.

 

DUPLICATE FROM ACTIVE DATABASE case

Starting from 11g, we have the cool DUPLICATE FROM ACTIVE DATABASE feature that we can also use to perform this kind of refresh. When you perform a DUPLICATE FROM ACTIVE DATABASE operation from an 11.2.0.3 to an 11.2.0.4 version, the procedure is different from the previous one: the RESETLOGS will begin but will not be able to finish properly, and you will face this error:

RMAN-08161: contents of Memory Script:
{
 Alter clone database open resetlogs;
}
RMAN-08162: executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-10041: Could not re-create polling channel context following failure.
RMAN-10024: error setting up for rpc polling
RMAN-10005: error opening cursor
RMAN-10002: ORACLE error: ORA-03114: not connected to ORACLE
RMAN-03002: failure of Duplicate Db command at 03/25/2015 20:22:56
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option
Process ID: 24341
Session ID: 1 Serial number: 9

At this stage, it’s not possible to open the database in UPGRADE mode, nor to RECOVER the database, nor even to generate a BACKUP CONTROLFILE TO TRACE.

SQL> recover database using backup controlfile until cancel ;
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.

SQL>

So we have to recreate the controlfile. By using these queries, we can easily create a new CREATE CONTROLFILE statement (or we could generate a BACKUP CONTROLFILE TO TRACE from the source database and then adapt it for the destination database).

SQL> select name from v$datafile order by file#;
SQL> select group#, member from v$logfile;
SQL> select name, bytes from v$tempfile order by file#;

And then recreate the controlfile:

SQL> CREATE CONTROLFILE REUSE DATABASE "TST11204" RESETLOGS ARCHIVELOG
 2 MAXLOGFILES 16
 3 MAXLOGMEMBERS 3
 4 MAXDATAFILES 100
 5 MAXINSTANCES 8
 6 MAXLOGHISTORY 292
 7 LOGFILE
 8 GROUP 1 '/u01/app/oracle/data/orcl11204/redo01.log' SIZE 50M BLOCKSIZE 512,
 9 GROUP 2 '/u01/app/oracle/data/orcl11204/redo02.log' SIZE 50M BLOCKSIZE 512,
 10 GROUP 3 '/u01/app/oracle/data/orcl11204/redo03.log' SIZE 50M BLOCKSIZE 512
 11 DATAFILE
 12 '/u01/app/oracle/data/orcl11204/system01.dbf',
 13 '/u01/app/oracle/data/orcl11204/sysaux01.dbf',
 14 '/u01/app/oracle/data/orcl11204/undotbs01.dbf',
 15 '/u01/app/oracle/data/orcl11204/users01.dbf'
 16 CHARACTER SET AL32UTF8
 17 ;

Control file created.

SQL>

To finish the recovery and open the database in UPGRADE mode, we need to apply the current redolog (and not any archivelog; we don’t have any archivelogs as the RESETLOGS didn’t happen yet).

SQL> select * from v$logfile ;

    GROUP# STATUS  TYPE    MEMBER                                      IS_
---------- ------- ------- ------------------------------------------- ---
         3 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo03.log   NO
         2 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo02.log   NO
         1 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo01.log   NO

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE until cancel ;
ORA-00279: change 2059652 generated at 03/25/2015 20:22:54 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/data/TST11204/archivelog/2015_03_25/o1_mf_1_1_%u_.arc
ORA-00280: change 2059652 for thread 1 is in sequence #1

Specify log: {<ret>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/data/orcl11204/redo01.log
Log applied.
Media recovery complete.
SQL> alter database open resetlogs upgrade ;

Database altered.

SQL>

Now we can apply the 11.2.0.4 patchset:


SQL> @?/rdbms/admin/catupgrd

...

SQL>

And check that everything is good:


SQL> select * from v$version ;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> select comp_name, version, status from dba_registry ;

COMP_NAME VERSION STATUS
--------------------------------------------- ------------------------------ -----------
OWB 11.2.0.3.0 VALID
Oracle Application Express 3.2.1.00.12 VALID
Oracle Enterprise Manager 11.2.0.4.0 VALID
OLAP Catalog 11.2.0.4.0 INVALID
Spatial 11.2.0.4.0 VALID
Oracle Multimedia 11.2.0.4.0 VALID
Oracle XML Database 11.2.0.4.0 VALID
Oracle Text 11.2.0.4.0 VALID
Oracle Expression Filter 11.2.0.4.0 VALID
Oracle Rules Manager 11.2.0.4.0 VALID
Oracle Workspace Manager 11.2.0.4.0 VALID
Oracle Database Catalog Views 11.2.0.4.0 VALID
Oracle Database Packages and Types 11.2.0.4.0 INVALID
JServer JAVA Virtual Machine 11.2.0.4.0 VALID
Oracle XDK 11.2.0.4.0 VALID
Oracle Database Java Packages 11.2.0.4.0 VALID
OLAP Analytic Workspace 11.2.0.4.0 INVALID
Oracle OLAP API 11.2.0.4.0 VALID

18 rows selected.

SQL>


 

This saved me a lot of time. Have a good day :)

 

Discover more about our expertise in Oracle.

The post RMAN 11g : How to Restore / Duplicate to a More Recent Patchset appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

DevOps: Applied Empathy

Pythian Group - Mon, 2015-07-20 12:09

I enjoyed participating in a panel on DevOps culture by Electric Cloud last week. Our commendable hosts kept things light and productive despite the number of participants and breadth of topic.

It wouldn’t be a culture discussion if we had failed to review the motivations for that (DevOps) culture: namely the evolve-or-die progression of tech business in general and software systems of customer engagement in particular. So the logic goes, any non-trivial business is now (also) a software business – thus, being “good at software” (and rapidly deriving high quality, constantly improving, large-scale services from it) is a fundamental market success factor and must be(come) part of the corporate DNA.

I doubt the above is controversial, but the degree to which it feels true probably depends on the scale of opportunity in your sector(s) and the governing regulatory regime(s). Those factors have a big impact not only on the intensity of motivation, but the challenges and resistance to change that must be overcome in a successful program of transformation.

The discussion reminded me that empathy is important for more than just being nice. It’s also a great way to understand what motivates people and how to incorporate their success drivers into transformation efforts.

[Image: top of the world in a typical software engineering org]

Consider Katniss, for example: she has to deliver to Rex (and to consumers like you and me) the “and sandwich”: Velocity (new features) and Operational Excellence, or we (consumers) will find a service that does. She may at times prioritize Velocity over other initiatives, and the stress on Bill grows under this pressure. If, as an agent of transformational change, you propose methods of increasing Velocity to Bill, you are likely to face rejection: Bill is already drowning at the present pace.

If, on the other hand, one approaches Bill to explain that pervasive, intelligent automation strategies can give his team their weekends back, and make his team a proactive contributor and valued partner in growth of the business, one will likely find a different sort of audience.

All this means, to me, is that DevOps is a useful context for improving a complex sort of collaboration called a software product lifecycle. Understanding the motivations and needs of the players in an organization is a key enabler for planning and executing successful programs of change.

 

Discover more about our expertise in DevOps and the author Aaron Lee.

The post DevOps: Applied Empathy appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Why Oracle ERP Cloud? by Terrance Wampler

Linda Fishman Hoyle - Mon, 2015-07-20 11:00

In this video, Terrance Wampler (pictured left), VP of Oracle ERP Cloud Strategy and Product Development, talks candidly and fluently about Oracle ERP Cloud.

He hits on three topics.

1) The Changing Role of the CFO: CFOs need to know how to create value in the organization, not just how to control it. Theirs is an expanded role in which they are asked to provide guidance to other LOB business leaders and those involved with customer engagement. Ideally, back-office technology and processes are integrated with the front office, bringing analytics to the forefront and improving the customer experience and the bottom line.

2) Modernizing ERP: The cloud is very appealing from a cost-savings perspective, but other key drivers of cloud adoption include digital transformation technologies (mobile, social, embedded analytics, collaboration) and a modern user experience. Also, the ability to take on two or three releases a year injects innovation into the business.

3) Social Brings Speed and Accountability: Social networking tools are very powerful inside a business operation, and few things need collaboration more than a business process with exceptions to resolve. Specific social conversations with those who need to take action move faster. In a secure environment, users can share documents, get approvals, produce an audit trail, and drive accountability.

Take a look at the video and share it freely with your colleagues.

A Lot To Listen To

FeuerThoughts - Mon, 2015-07-20 06:40

Sometimes, if you're lucky,
there is nothing to hear
but the sound of the wind
blowing through trees.

Now you could say:
"That's not much to listen to."
Or you could listen...

Listen
to the rustling, hissing, whispering, sometimes angry sound
of thousands of almost silent brushings of leaf against leaf,
of feather-light taps of twig striking twig,
any single act nothing to hear at all
but when the tree is big enough
and the leaves are numerous enough
and the branches reach out thinner and thinner
poking out toward the sun
carrying leaves to their destiny,

then you might be able to hear
the sound of the wind
blowing through trees.

It's a lot to listen to,
if you can hear it.



Copyright 2015 Steven Feuerstein
Categories: Development

12c Downgrade

Jonathan Lewis - Mon, 2015-07-20 06:12

No, not really – but sometimes the optimizer gets better and gives you worse performance as a side effect when you upgrade. Here’s an example where 11.2.0.4 recognised (with a few hints) the case for a nested loop semi-join and 12c went a bit further and recognised the opportunity for doing a cunning “semi_to_inner” transformation … which just happened to do more work than the 11g plan.

Here’s a data set to get things going, I’ve got “parent” and “child” tables, but in this particular demonstration I won’t be invoking referential integrity:


create table chi
as
with generator as (
        select  --+ materialize
                rownum  id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1                              id,
        trunc((rownum-1)/10)                    n1,
        trunc(dbms_random.value(0,1000))        n2,
        rpad('x',1000)                          padding
from
        generator
;

create table par
as
with generator as (
        select  --+ materialize
                rownum  id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1      id,
        rpad('x',1000)  padding
from
        generator
where
        rownum <= 1e3
;

alter table par modify id not null;
alter table par add constraint par_pk primary key(id)
-- deferrable
;

-- Now gather stats on the tables.

The code uses my standard framework that could generate a few million rows even though it’s only generating 1,000 in par and 10,000 in chi. The presence of the commented “deferrable” for the primary key constraint is for a secondary demonstration.

You’ll notice that the 1,000 values that appear in chi.n1 and chi.n2 are matched by the 1,000 rows that appear in the primary key of par – in some other experiment I’ve got two foreign keys from chi to par. Take note that the values in n1 are very well clustered because of the call to trunc() while the values in n2 are evenly scattered because of the call to dbms_random() – the data patterns are very different although the data content is very similar (the randomised data will still produce, on average, 10 rows per value).

So here’s the test code:


set serveroutput off
set linesize 156
set trimspool on
set pagesize 60

alter session set statistics_level = all;

prompt  =============================
prompt  Strictly ordered driving data
prompt  =============================

select
        /*+
                leading(@sel$5da710d3 chi@sel$1 par@sel$2)
                full   (@sel$5da710d3 chi@sel$1)
                use_nl (@sel$5da710d3 par@sel$2)
                index  (@sel$5da710d3 par@sel$2 (par.id))
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where par.id = chi.n1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline alias cost'));

prompt  =============================
prompt  Randomly ordered driving data
prompt  =============================

select
        /*+
                leading(@sel$5da710d3 chi@sel$1 par@sel$2)
                full   (@sel$5da710d3 chi@sel$1)
                use_nl (@sel$5da710d3 par@sel$2)
                index  (@sel$5da710d3 par@sel$2 (par.id))
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where par.id = chi.n2
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline alias cost'));

set serveroutput on
alter session set statistics_level = typical;

In both cases I’ve hinted the query into running with a nested loop semi-join from chi to par. Since there are 10,000 rows in chi with no filter predicates, you might expect to see the probe into the par table starting 10,000 times returning (thanks to our perfect data match) one row for each start. Here are the run-time plans with rowsource execution stats from 11.2.0.4

=============================
Strictly ordered driving data
=============================

--------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   190 (100)|      1 |00:00:00.14 |    1450 |   1041 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.14 |    1450 |   1041 |
|   2 |   NESTED LOOPS SEMI |        |      1 |  10065 |   190   (4)|  10000 |00:00:00.12 |    1450 |   1041 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10065 |   186   (2)|  10000 |00:00:00.07 |    1434 |   1037 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |   1000 |   1048 |     0   (0)|   1000 |00:00:00.01 |      16 |      4 |
--------------------------------------------------------------------------------------------------------------

=============================
Randomly ordered driving data
=============================

-----------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   190 (100)|      1 |00:00:00.12 |    5544 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.12 |    5544 |
|   2 |   NESTED LOOPS SEMI |        |      1 |  10065 |   190   (4)|  10000 |00:00:00.10 |    5544 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10065 |   186   (2)|  10000 |00:00:00.02 |    1434 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |   4033 |   1048 |     0   (0)|   4033 |00:00:00.02 |    4110 |
-----------------------------------------------------------------------------------------------------

Notice how we do 1,000 starts of operation 4 when the data is well ordered, and 4,033 starts when the data is randomly ordered. For a semi-join nested loop the run-time engine uses the same caching mechanism as it does for scalar subqueries – a fact you can corroborate by removing the current hints and putting the /*+ no_unnest */ hint into the subquery so that you get a filter subquery plan, in which you will note exactly the same number of starts of the filter subquery.
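For reference, the filter-subquery variant of the first test looks like this (a sketch; only the hint changes, the rest of the query is as above):

select
        count(*)
from
        chi
where   exists (
                select  /*+ no_unnest */
                        null
                from    par
                where   par.id = chi.n1
        )
;

The FILTER plan this produces reports the same number of Starts for the subquery as the semi-join plans show for the index probe.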

As an extra benefit you’ll notice that the index probes for the well-ordered data have managed to take advantage of buffer pinning (statistic “buffer is pinned count”), keeping the root block and most recent leaf block of the par_pk index pinned almost continually through the query; while the randomised data access unfortunately required Oracle to unpin and repin the index leaf blocks (even though there were only 2 in the index) as the scan of chi progressed.
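If you want to check the pinning statistic for yourself, a simple before-and-after snapshot of the session statistic will do; a quick sketch:

select  n.name, s.value
from    v$statname n, v$mystat s
where   s.statistic# = n.statistic#
and     n.name = 'buffer is pinned count'
;

Run it immediately before and after each test query and compare the deltas for the well-ordered and randomised cases.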

Time to upgrade to 12.1.0.2 and see what happens:

=============================
Strictly ordered driving data
=============================

--------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   189 (100)|      1 |00:00:00.22 |    1448 |   1456 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.22 |    1448 |   1456 |
|   2 |   NESTED LOOPS      |        |      1 |  10000 |   189   (4)|  10000 |00:00:00.20 |    1448 |   1456 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10000 |   185   (2)|  10000 |00:00:00.03 |    1432 |   1429 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |  10000 |      1 |     0   (0)|  10000 |00:00:00.06 |      16 |     27 |
--------------------------------------------------------------------------------------------------------------

=============================
Randomly ordered driving data
=============================

--------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   189 (100)|      1 |00:00:00.22 |   11588 |   1429 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.22 |   11588 |   1429 |
|   2 |   NESTED LOOPS      |        |      1 |  10000 |   189   (4)|  10000 |00:00:00.19 |   11588 |   1429 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10000 |   185   (2)|  10000 |00:00:00.03 |    1432 |   1429 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |  10000 |      1 |     0   (0)|  10000 |00:00:00.07 |   10156 |      0 |
--------------------------------------------------------------------------------------------------------------

Take a close look at operation 2 – it’s no longer a NESTED LOOP SEMI, the optimizer has got so smart (recognising the nature of the primary key on par) that it’s done a “semi_to_inner” transformation. But a side effect of the transformation is that the scalar subquery caching mechanism no longer applies so we probe the par table 10,000 times. When the driving data is well-ordered this hasn’t made much difference to the buffer gets (and related latch activity), but when the data is randomised the extra probes ramp the buffer gets up even further.

The timings (A-time) on all these experiments are not particularly trustworthy – the differences between cached reads and direct path reads introduced more variation than the difference in Starts and Buffers, and the total CPU load is pretty small anyway – and I suspect that this difference won’t make much difference to most people most of the time. No doubt, though, there will be a few cases where a small change like this could have a noticeable effect on some important queries.

Footnote

There is a hint /*+ no_semi_to_inner(@queryblock object_alias) */ that I thought might persuade the optimizer to stick with the semi-join, but it didn’t have any effect. Since the “semi to inner” transformation (and the associated hints) are available in 11.2.0.4 I was a little puzzled that (a) I didn’t see the same transformation in the 11g test, and (b) that I couldn’t hint the transformation.  This makes me wonder if there’s a defect in 11g that might be fixed in a future patch.
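For reference, one plausible placement of that hint reuses the query block name from the earlier hint set (a sketch; as noted above, it had no effect in these tests):

select
        /*+
                no_semi_to_inner(@sel$5da710d3 par@sel$2)
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where par.id = chi.n1
        )
;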

It’s also nice to think that the scalar subquery caching optimisation used in semi-joins might also become available to standard joins (in cases such as “join to parent”, perhaps).


Get to Know the Latest Feature Updates in Documents Cloud Service

WebCenter Team - Mon, 2015-07-20 05:00

Updates to Oracle Documents Cloud Service are seamless for users: they are pushed out automatically, without users or customer organizations having to do anything at their end. Even so, we thought you might want to learn about the additional features and capabilities in the latest release of Oracle Documents Cloud Service, and why Oracle's cloud collaboration solution is fast becoming an industry benchmark. Our product expert and member of the product management team, Ellen Gravina, discusses.

by: Ellen Gravina, Principal Product Manager, Oracle Documents Cloud Service


Oracle Documents Cloud Service was upgraded to include the latest features for our web, mobile, and desktop clients. Features include:

Oracle Documents Presenter


Wow your customers and prospects with beautiful presentations that deliver maximum impact and results!
• Present PPTs, review PDF documents, play videos, and use many more presentation formats.
• All your presentations are stored on your tablet–no need to find a WiFi hot spot.
• Author presentations by organizing material in folders on your desktop. Customize the look to your brand by using folder background and icon images.

Multiple Account Support
• Synchronize content from multiple accounts to your desktop.


• Access content from multiple accounts on your mobile device. 



Enhancements to Public Link Policy

• Administrators can set a maximum role allowed for public links.


• Folder owners can disable public links on a per-folder basis.



Customize Oracle Documents with Your Own Branding

• Change the logo in the header.
• Control where users look for help, share feedback, and get information about client downloads.




iOS Touch ID Support
• Make use of your fingerprint to protect access to your content.
• Available in the native mobile app and Oracle Documents Presenter app.


Access Version History from Your Mobile Device
• View old versions of a document.
• Delete old versions.
• Make an old version the current version.


Enterprise Installation Support for Desktop Client
• Roll out the Desktop Sync Client software to multiple client machines with the help of the EXE and MSI installer packages.
• Deploy the MSI installer through Active Directory’s group policy.
• See the Administrator’s Guide for details.

Learn More
Check out the Oracle Documents Cloud Service Getting Started page to learn more. And visit cloud.oracle.com/documents  for additional solution and market information.

Questions?
Please contact Oracle Support for any support questions.  Feedback is always welcome in our Documents Cloud Service discussion forum.


SaaS and traditional software from the same vendor?

DBMS2 - Mon, 2015-07-20 03:09

It is extremely difficult to succeed with SaaS (Software as a Service) and packaged software in the same company. There were a few vendors who seemed to pull it off in the 1970s and 1980s, generally industry-specific application suite vendors. But it’s hard to think of more recent examples — unless you have more confidence than I do in what behemoth software vendors say about their SaaS/”cloud” businesses.

Despite the cautionary evidence, I’m going to argue that SaaS and software can and often should be combined. The “should” part is pretty obvious, with reasons that start:

  • Some customers are clearly better off with SaaS. (E.g., for simplicity.)
  • Some customers are clearly better off with on-premises software. (E.g., to protect data privacy.)
  • On-premises customers want to know they have a path to the cloud.
  • Off-premises customers want the possibility of leaving their SaaS vendor’s servers.
  • SaaS can be great for testing, learning or otherwise adopting software that will eventually be operated in-house.
  • Marketing and sales efforts for SaaS and packaged versions can be synergistic.
    • The basic value proposition, competitive differentiation, etc. should be the same, irrespective of delivery details.
    • In some cases, SaaS can be the lower cost/lower commitment option, while packaged product can be the high end or upsell.
    • An ideal sales force has both inside/low-end and bag-carrying/high-end components.

But the “how” of combining SaaS and traditional software is harder. Let’s review why. 

Why it is hard for one vendor to succeed at both packaged software and SaaS?

SaaS and packaged software have quite different development priorities and processes. SaaS vendors deliver and support software that:

  • Runs on a single technology stack.
  • Is run only at one or a small number of physical locations.
  • Is run only in one or a small number of historical versions.
  • May be upgraded multiple times per month.
  • Can be assumed to be operated by employees of the SaaS company.
  • Needs, for customer acquisition and retention reasons, to be very easy for users to learn.

But traditional packaged software:

  • Runs on technology the customer provides and supports, at the location of the customer’s choice.
  • Runs in whichever versions customers have not yet upgraded from.
  • Should — to preserve the sanity of all concerned — have only a few releases per year.
  • Is likely to be operated by less knowledgeable or focused staff than a SaaS vendor enjoys.
  • Can sometimes afford more of an end-user learning curve than SaaS.

Thus, in most cases:

  • Traditional software creates greater support and compatibility burdens than SaaS does.
  • SaaS and on-premises software have very different release cycles.
  • SaaS should be easier for end-users than most traditional software, but …
  • … traditional software should be easier to administer than SaaS.

Further — although this is one difference that I think has at times been overemphasized — SaaS vendors would prefer to operate truly multi-tenant versions of their software, while enterprises less often have that need.

How this hard thing could be done

Most of the major problems with combining SaaS and packaged software efforts can be summarized in two words — defocused development. Even if the features are substantially identical, SaaS is developed on different schedules and for different platform stacks than packaged software is.

So can we design an approach to minimize that problem? I think yes. In simplest terms, I suggest:

  • A main development organization focused almost purely on SaaS.
  • A separate unit adapting the SaaS code for on-premises customers, with changes to the SaaS offering being concentrated in three aspects:
    • Release cadence.
    • Platform support.
    • Administration features, which are returned to the SaaS group for its own optional use.

Certain restrictions would need to be placed on the main development unit. Above all, because the SaaS version will be continually “thrown over the wall” to the sibling packaged-product group, code must be modular and documentation must be useful. The standard excuses — valid or otherwise — for compromising on these virtues cannot be tolerated.

There is one other potentially annoying gotcha. Hopefully, the SaaS group uses third-party products and lots of them; that’s commonly better than reinventing the wheel. But in this plan they need to use ones that are also available for third-party/OEM kinds of licensing.

My thoughts on release cadence start:

  • There should be a simple, predictable release cycle:
    • N releases per year, for N approximately = 4.
    • Strong efforts to adhere to a predictable release schedule.
  • A reasonable expectation is that what’s shipped and supported for on-premises use is 6-9 months behind what’s running on the SaaS service. 3-6 months would be harder to achieve.

The effect would be that on-premises software would lag SaaS features to a predictable and bounded extent.

As for platform support:

  • You have to stand ready to install and support whatever is needed. (E.g., in the conversation that triggered this post, the list started with Hadoop, Spark, and Tachyon.)
  • You have to adapt to customers’ own reasonably-current installations of needed components (but help them upgrade if they’re way out of date).
  • Writing connectors is OK. Outright porting from your main stack to another may be unwise.
  • Yes, this is all likely to involve significant professional services, at least to start with, because different customers will require different degrees of adaptation.

That last point is key. The primary SaaS offering can be standard, in the usual way. But the secondary business — on-premises software — is inherently services-heavy. Fortunately, packaged software and professional services can be successfully combined.

And with that I’ll just stop and reiterate my conclusion:

It may be advisable to offer both SaaS and services-heavy packaged software as two options for substantially the same product line.

Related link

  • Point #4 of my VC overlord post is relevant  — and Point #3 even more so. :)
Categories: Other

Release of University of California at Davis Case Study on e-Literate TV

Michael Feldstein - Sun, 2015-07-19 16:55

By Phil Hill

Today we are thrilled to release the fifth and final case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities. We plan to cap off this series with two analysis episodes looking at themes across the case studies.

We are adding three episodes from the University of California at Davis (UC Davis), a large research university with a strong emphasis on science, technology, engineering, and math (STEM) fields. The school has determined that its biggest opportunity to improve STEM education is to improve success rates in introductory science classes, the ones typically taught in large lecture format at universities of its size. Can you personalize this most impersonal of academic experiences? What opportunities and barriers do institutions face when they try to extend personalized learning approaches?

You can see all the case study episodes (either two or three per case study) at the series link, and you can access individual episodes below.

UC Davis Case Study: Personalizing the Large Lecture Class

UC Davis Case Study: Intro to Biology and Intro to Chemistry Examples

UC Davis Case Study: Opportunities and Barriers to Extending Personalization

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and accessing transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV. Enjoy!

The post Release of University of California at Davis Case Study on e-Literate TV appeared first on e-Literate.

RMAN -- 5b : (More) Useful KEYWORDs and SubClauses

Hemant K Chitale - Sun, 2015-07-19 09:37
Here are a few more useful KEYWORDs and SubClauses

SECTION SIZE 
SECTION SIZE allows you to split a single Database file into multiple sections.  Note: A Database File can span BackupPieces.  The difference between BackupPieces and Sections is that the former are done serially while Sections can be done in-parallel.  (SECTION SIZE cannot be used with MAXPIECESIZE).  Here, I first show a datafile in 4 BackupPieces defined by SECTION SIZE.

SQL> select file_id, size_mb
2 from
3 (select file_id, bytes/1048576 size_mb
4 from dba_data_files
5 order by 2 desc)
6 where rownum = 1;

FILE_ID SIZE_MB
---------- ----------
2 1259

SQL>

RMAN> backup datafile 2 section size 400M;

Starting backup at 19-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 1 through 51200
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf5020_.bkp tag=TAG20150719T223055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:26
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 51201 through 102400
channel ORA_DISK_1: starting piece 2 at 19-JUL-15
channel ORA_DISK_1: finished piece 2 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf5sw2_.bkp tag=TAG20150719T223055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 102401 through 153600
channel ORA_DISK_1: starting piece 3 at 19-JUL-15
channel ORA_DISK_1: finished piece 3 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf76wl_.bkp tag=TAG20150719T223055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 153601 through 161152
channel ORA_DISK_1: starting piece 4 at 19-JUL-15
channel ORA_DISK_1: finished piece 4 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf8y7p_.bkp tag=TAG20150719T223055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 19-JUL-15

Starting Control File and SPFILE Autobackup at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885508385_btqf92c7_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-JUL-15

RMAN>
RMAN> list backup of datafile 2 completed after "trunc(sysdate)";


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
68 Full 452.02M DISK 00:02:09 19-JUL-15
List of Datafiles in backup set 68
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
2 Full 14135401 19-JUL-15 /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf

Backup Set Copy #1 of backup set 68
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:02:09 19-JUL-15 YES TAG20150719T223055

List of Backup Pieces for backup set 68 Copy #1
BP Key Pc# Status Piece Name
------- --- ----------- ----------
71 1 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf5020_.bkp
72 2 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf5sw2_.bkp
73 3 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf76wl_.bkp
74 4 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T223055_btqf8y7p_.bkp

RMAN>


Note how the blocks of the datafile are allocated to each Section.  Section 1 is for Blocks 1 to 51200.  Section 2 is for Blocks 51201 to 102400.  And so on for subsequent Sections.
Next, I run two channels for in-parallel backups of the sections, still comprising one BackupSet for the datafile.

RMAN> run
2> {
3> allocate channel d1 device type disk;
4> allocate channel d2 device type disk;
5> backup datafile 2 section size 400M;
6> }

released channel: ORA_DISK_1
allocated channel: d1
channel d1: SID=40 device type=DISK

allocated channel: d2
channel d2: SID=44 device type=DISK

Starting backup at 19-JUL-15
channel d1: starting compressed full datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 1 through 51200
channel d1: starting piece 1 at 19-JUL-15
channel d2: starting compressed full datafile backup set
channel d2: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 51201 through 102400
channel d2: starting piece 2 at 19-JUL-15
channel d1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfxvrr_.bkp tag=TAG20150719T224411 comment=NONE
channel d1: backup set complete, elapsed time: 00:00:26
channel d1: starting compressed full datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 102401 through 153600
channel d1: starting piece 3 at 19-JUL-15
channel d2: finished piece 2 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfxvvv_.bkp tag=TAG20150719T224411 comment=NONE
channel d2: backup set complete, elapsed time: 00:00:51
channel d2: starting compressed full datafile backup set
channel d2: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
backing up blocks 153601 through 161152
channel d2: starting piece 4 at 19-JUL-15
channel d2: finished piece 4 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfzglf_.bkp tag=TAG20150719T224411 comment=NONE
channel d2: backup set complete, elapsed time: 00:00:03
channel d1: finished piece 3 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfyo8v_.bkp tag=TAG20150719T224411 comment=NONE
channel d1: backup set complete, elapsed time: 00:00:50
Finished backup at 19-JUL-15

Starting Control File and SPFILE Autobackup at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885509127_btqg07qq_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-JUL-15
released channel: d1
released channel: d2

RMAN> list backupset 72;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
72 Full 452.02M DISK 00:01:13 19-JUL-15
List of Datafiles in backup set 72
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
2 Full 14136475 19-JUL-15 /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf

Backup Set Copy #1 of backup set 72
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:01:13 19-JUL-15 YES TAG20150719T224411

List of Backup Pieces for backup set 72 Copy #1
BP Key Pc# Status Piece Name
------- --- ----------- ----------
81 1 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfxvrr_.bkp
82 2 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfxvvv_.bkp
84 3 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfyo8v_.bkp
83 4 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfzglf_.bkp

RMAN>

Thus, we have BackupSet 72 consisting of the 4 BackupPieces.  The BackupPieces were created using 2 Channels running in parallel, each handling a different range of Blocks.



FORMAT (and the FRA)
If you use the FRA with db_recovery_file_dest, Oracle tracks usage against the limit specified by db_recovery_file_dest_size.
However, if you use the FORMAT clause, such backups are *not* tracked as being part of the FRA.  This also means that Oracle would under-report usage of the FRA and wouldn't be able to identify when the FRA nears the limit.

For example, I first report the FRA usage and then run Backups, first without and then with the FORMAT clause.  Note how the FRA usage report reflects only the Backups without the FORMAT clause.

SQL> set linesize 132
SQL> show parameter db_recovery

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /NEW_FS/oracle/FRA
db_recovery_file_dest_size big integer 8G

SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE 0 0 0
REDO LOG 0 0 0
ARCHIVED LOG .27 0 10
BACKUP PIECE 32.96 22.4 34
IMAGE COPY 0 0 0
FLASHBACK LOG 0 0 0
FOREIGN ARCHIVED LOG 0 0 0

7 rows selected.

SQL>

RMAN> backup as compressed backupset tablespace hemant filesperset=1;

Starting backup at 19-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh2vkb_.bkp tag=TAG20150719T230355 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh2yoc_.bkp tag=TAG20150719T230355 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh31vh_.bkp tag=TAG20150719T230355 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh34ys_.bkp tag=TAG20150719T230355 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh382g_.bkp tag=TAG20150719T230355 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 19-JUL-15

Starting Control File and SPFILE Autobackup at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885510251_btqh3c9d_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-JUL-15

RMAN>

SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE 0 0 0
REDO LOG 0 0 0
ARCHIVED LOG .27 0 10
BACKUP PIECE 33.54 22.92 40
IMAGE COPY 0 0 0
FLASHBACK LOG 0 0 0
FOREIGN ARCHIVED LOG 0 0 0

7 rows selected.

SQL>

RMAN> backup as compressed backupset tablespace hemant format '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/%U' filesperset=1;

Starting backup at 19-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=41 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/2vqcfk8t_1_1 tag=TAG20150719T230709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/30qcfk90_1_1 tag=TAG20150719T230709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/31qcfk93_1_1 tag=TAG20150719T230709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/32qcfk96_1_1 tag=TAG20150719T230709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/33qcfk9a_1_1 tag=TAG20150719T230709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 19-JUL-15

Starting Control File and SPFILE Autobackup at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885510445_btqh9f6x_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-JUL-15

RMAN>

SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE 0 0 0
REDO LOG 0 0 0
ARCHIVED LOG .27 0 10
BACKUP PIECE 33.65 23.5 41
IMAGE COPY 0 0 0
FLASHBACK LOG 0 0 0
FOREIGN ARCHIVED LOG 0 0 0

7 rows selected.

SQL>

Note how the first backup (which did NOT specify a FORMAT clause) consisted of 5 BackupPieces for the Datafiles plus 1 for the AutoBackup: the NUMBER_OF_FILES for BACKUP PIECE in v$flash_recovery_area_usage was updated from 34 to 40.
However, while the second backup with the FORMAT clause (actually pointing to the same physical directory) again created 5 Datafile BackupPieces and 1 AutoBackup, the count in v$flash_recovery_area_usage did NOT increment by 6; it incremented only by 1, for the AutoBackup.

Thus, for this second backup (the one with the FORMAT clause), a LIST BACKUP command would show it and the backup would be usable in Restore scenarios, but v$flash_recovery_area_usage does not track it.  Consequently, Oracle hasn't incremented PERCENT_SPACE_USED either.  PERCENT_SPACE_USED is very important because Oracle relies on it to automatically purge older (i.e. OBSOLETE) backups when space usage hits critical limits.
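One way to see the distinction directly (my own cross-check, not part of the original run) is the IS_RECOVERY_DEST_FILE flag in v$backup_piece; I'd expect pieces written through a user-specified FORMAT to show up flagged 'NO' even when the path physically sits under the FRA:

select  is_recovery_dest_file, count(*)
from    v$backup_piece
group by is_recovery_dest_file;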



TAG
The TAG clause allows us to define our own Tags explicitly.  Note how my previous backups show a system-generated TAG that indicates merely the date and time, as in:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
28 B F A DISK 23-JUN-15 1 1 NO TAG20150623T170721
29 B F A DISK 23-JUN-15 1 1 NO TAG20150623T170846
30 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
31 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
32 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
33 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
34 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
35 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
36 B F A DISK 04-JUL-15 1 1 NO TAG20150704T131927
37 B F A DISK 08-JUL-15 1 1 YES TAG20150708T211100
38 B F A DISK 08-JUL-15 1 1 YES TAG20150708T211100
39 B F A DISK 08-JUL-15 1 1 NO TAG20150708T211118
40 B F A DISK 08-JUL-15 1 1 NO TAG20150708T215526
60 B F A DISK 12-JUL-15 1 1 YES TAG20150712T222911
61 B F A DISK 12-JUL-15 1 1 NO TAG20150712T222957
62 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
63 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
64 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
65 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
66 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
67 B F A DISK 12-JUL-15 1 1 NO TAG20150712T223226
68 B F A DISK 19-JUL-15 4 1 YES TAG20150719T223055
69 B F A DISK 19-JUL-15 1 1 NO TAG20150719T223305
71 B F A DISK 19-JUL-15 1 1 NO TAG20150719T224023
72 B F A DISK 19-JUL-15 4 1 YES TAG20150719T224411
73 B F A DISK 19-JUL-15 1 1 NO TAG20150719T224527
74 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230053
75 B F A DISK 19-JUL-15 1 1 NO TAG20150719T230119
76 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
77 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
78 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
79 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
80 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
81 B F A DISK 19-JUL-15 1 1 NO TAG20150719T230411
82 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
83 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
84 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
85 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
86 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
87 B F A DISK 19-JUL-15 1 1 NO TAG20150719T230725

RMAN>

However, I can use the TAG clause to identify specific Backups distinctively. Later, a known TAG becomes useful for the RESTORE command, as I can RESTORE ... FROM TAG. Here's how I create backups with specific TAGs.

RMAN> backup as compressed backupset tablespace HEMANT TAG 'Hemant_19Jul15';

Starting backup at 19-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=64 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 19-JUL-15
channel ORA_DISK_1: finished piece 1 at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_HEMANT_19JUL15_btqhy6js_.bkp tag=HEMANT_19JUL15 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 19-JUL-15

Starting Control File and SPFILE Autobackup at 19-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885511125_btqhyoo9_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-JUL-15

RMAN>
RMAN> list backup of tablespace hemant completed after "(sysdate-5/1440)";


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
88 Full 33.82M DISK 00:00:07 19-JUL-15
BP Key: 100 Status: AVAILABLE Compressed: YES Tag: HEMANT_19JUL15
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_HEMANT_19JUL15_btqhy6js_.bkp
List of Datafiles in backup set 88
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
6 Full 14138686 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
7 Full 14138686 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
8 Full 14138686 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
9 Full 14138686 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
11 Full 14138686 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf

RMAN> list backup of tablespace HEMANT summary;


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
37 B F A DISK 08-JUL-15 1 1 YES TAG20150708T211100
38 B F A DISK 08-JUL-15 1 1 YES TAG20150708T211100
60 B F A DISK 12-JUL-15 1 1 YES TAG20150712T222911
62 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
63 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
64 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
65 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
66 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
74 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230053
76 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
77 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
78 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
79 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
80 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
82 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
83 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
84 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
85 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
86 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
88 B F A DISK 19-JUL-15 1 1 YES HEMANT_19JUL15

RMAN>

Notice how the latest backup of Tablespace HEMANT is tagged by the TAG I had specified during the BACKUP run.
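As a sketch of how that tag could later be used (not executed in this demo; it assumes the tablespace can briefly be taken offline):

RMAN> sql 'alter tablespace HEMANT offline immediate';
RMAN> restore tablespace HEMANT from tag 'Hemant_19Jul15';
RMAN> recover tablespace HEMANT;
RMAN> sql 'alter tablespace HEMANT online';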




completed after "..date/time clause"
Note how, in the listings above, I have used 'completed after "trunc(sysdate)"' and 'completed after "(sysdate-5/1440)"' to specify a Date/Time as a filter.

Here's another example to show Backups completed in the month of July-2015. (The two backups of 23-Jun-2015 are excluded).

RMAN> list backup summary completed after "to_date('JUL-15','MON-RR')";


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
30 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
31 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
32 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
33 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
34 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
35 B F A DISK 04-JUL-15 1 1 YES TAG20150704T121859
36 B F A DISK 04-JUL-15 1 1 NO TAG20150704T131927
37 B F A DISK 08-JUL-15 1 1 YES TAG20150708T211100
38 B F A DISK 08-JUL-15 1 1 YES TAG20150708T211100
39 B F A DISK 08-JUL-15 1 1 NO TAG20150708T211118
40 B F A DISK 08-JUL-15 1 1 NO TAG20150708T215526
60 B F A DISK 12-JUL-15 1 1 YES TAG20150712T222911
61 B F A DISK 12-JUL-15 1 1 NO TAG20150712T222957
62 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
63 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
64 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
65 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
66 B F A DISK 12-JUL-15 1 1 YES TAG20150712T223210
67 B F A DISK 12-JUL-15 1 1 NO TAG20150712T223226
68 B F A DISK 19-JUL-15 4 1 YES TAG20150719T223055
69 B F A DISK 19-JUL-15 1 1 NO TAG20150719T223305
71 B F A DISK 19-JUL-15 1 1 NO TAG20150719T224023
72 B F A DISK 19-JUL-15 4 1 YES TAG20150719T224411
73 B F A DISK 19-JUL-15 1 1 NO TAG20150719T224527
74 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230053
75 B F A DISK 19-JUL-15 1 1 NO TAG20150719T230119
76 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
77 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
78 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
79 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
80 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230355
81 B F A DISK 19-JUL-15 1 1 NO TAG20150719T230411
82 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
83 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
84 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
85 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
86 B F A DISK 19-JUL-15 1 1 YES TAG20150719T230709
87 B F A DISK 19-JUL-15 1 1 NO TAG20150719T230725
88 B F A DISK 19-JUL-15 1 1 YES HEMANT_19JUL15
89 B F A DISK 19-JUL-15 1 1 NO TAG20150719T231845

RMAN>

This shows that I can use Date Format masks in my filter.

Here's another example (note : if you use 'between', it seems that the two date formats must match, else no records are returned as in the first listing below):

RMAN> list backup completed between
2> "to_date('19-JUL-2015 22:45:00','DD-MON-YYYY HH24:MI:SS')"
3> and
4> "to_date(sysdate)";

specification does not match any backup in the repository

RMAN> list backup completed between
2> "to_date('19-JUL-2015 22:45:00','DD-MON-YYYY HH24:MI:SS')"
3> and
4> "to_date('19-JUL-2015 23:15:00','DD-MON-YYYY HH24:MI:SS')";


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
72 Full 452.02M DISK 00:01:13 19-JUL-15
List of Datafiles in backup set 72
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
2 Full 14136475 19-JUL-15 /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf

Backup Set Copy #1 of backup set 72
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:01:13 19-JUL-15 YES TAG20150719T224411

List of Backup Pieces for backup set 72 Copy #1
BP Key Pc# Status Piece Name
------- --- ----------- ----------
81 1 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfxvrr_.bkp
82 2 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfxvvv_.bkp
84 3 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfyo8v_.bkp
83 4 AVAILABLE /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T224411_btqfzglf_.bkp

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
73 Full 9.36M DISK 00:00:00 19-JUL-15
BP Key: 85 Status: AVAILABLE Compressed: NO Tag: TAG20150719T224527
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885509127_btqg07qq_.bkp
SPFILE Included: Modification time: 19-JUL-15
SPFILE db_unique_name: HEMANTDB
Control File Included: Ckp SCN: 14136581 Ckp time: 19-JUL-15

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
74 Full 33.82M DISK 00:00:23 19-JUL-15
BP Key: 86 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230053
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230053_btqgx5vx_.bkp
List of Datafiles in backup set 74
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
6 Full 14137744 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
7 Full 14137744 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
8 Full 14137744 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
9 Full 14137744 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
11 Full 14137744 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
75 Full 9.36M DISK 00:00:00 19-JUL-15
BP Key: 87 Status: AVAILABLE Compressed: NO Tag: TAG20150719T230119
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885510079_btqgxzd8_.bkp
SPFILE Included: Modification time: 19-JUL-15
SPFILE db_unique_name: HEMANTDB
Control File Included: Ckp SCN: 14137759 Ckp time: 19-JUL-15

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
76 Full 7.41M DISK 00:00:02 19-JUL-15
BP Key: 88 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230355
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh2vkb_.bkp
List of Datafiles in backup set 76
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
6 Full 14137868 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
77 Full 7.40M DISK 00:00:02 19-JUL-15
BP Key: 89 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230355
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh2yoc_.bkp
List of Datafiles in backup set 77
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
7 Full 14137870 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
78 Full 7.38M DISK 00:00:02 19-JUL-15
BP Key: 90 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230355
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh31vh_.bkp
List of Datafiles in backup set 78
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
8 Full 14137872 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
79 Full 7.43M DISK 00:00:02 19-JUL-15
BP Key: 91 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230355
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh34ys_.bkp
List of Datafiles in backup set 79
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
9 Full 14137874 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
80 Full 7.91M DISK 00:00:01 19-JUL-15
BP Key: 92 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230355
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/o1_mf_nnndf_TAG20150719T230355_btqh382g_.bkp
List of Datafiles in backup set 80
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 14137876 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
81 Full 9.36M DISK 00:00:00 19-JUL-15
BP Key: 93 Status: AVAILABLE Compressed: NO Tag: TAG20150719T230411
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885510251_btqh3c9d_.bkp
SPFILE Included: Modification time: 19-JUL-15
SPFILE db_unique_name: HEMANTDB
Control File Included: Ckp SCN: 14137883 Ckp time: 19-JUL-15

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
82 Full 7.41M DISK 00:00:01 19-JUL-15
BP Key: 94 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230709
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/2vqcfk8t_1_1
List of Datafiles in backup set 82
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
6 Full 14138235 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
83 Full 7.40M DISK 00:00:01 19-JUL-15
BP Key: 95 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230709
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/30qcfk90_1_1
List of Datafiles in backup set 83
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
7 Full 14138237 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
84 Full 7.38M DISK 00:00:02 19-JUL-15
BP Key: 96 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230709
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/31qcfk93_1_1
List of Datafiles in backup set 84
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
8 Full 14138239 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
85 Full 7.43M DISK 00:00:02 19-JUL-15
BP Key: 97 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230709
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/32qcfk96_1_1
List of Datafiles in backup set 85
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
9 Full 14138241 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
86 Full 7.91M DISK 00:00:01 19-JUL-15
BP Key: 98 Status: AVAILABLE Compressed: YES Tag: TAG20150719T230709
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_19/33qcfk9a_1_1
List of Datafiles in backup set 86
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 14138243 19-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
87 Full 9.36M DISK 00:00:00 19-JUL-15
BP Key: 99 Status: AVAILABLE Compressed: NO Tag: TAG20150719T230725
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_19/o1_mf_s_885510445_btqh9f6x_.bkp
SPFILE Included: Modification time: 19-JUL-15
SPFILE db_unique_name: HEMANTDB
Control File Included: Ckp SCN: 14138250 Ckp time: 19-JUL-15

RMAN>

Thus, you can identify a list of backups by time ranges as well!
.
.
.

Categories: DBA Blogs

Focusing on Ext4 and XFS TRIM Operations – Part I.

Kevin Closson - Sun, 2015-07-19 08:29

I’ve been doing some testing that requires rather large file systems. I have an EMC XtremIO Dual X-Brick array from which I provision a 10 terabyte volume. Volumes in XtremIO are always thinly provisioned. The testing I’m doing required me to scrutinize default Linux mkfs(8) behavior for both Ext4 and XFS. This is part 1 in a short series and it is about Ext4.

Discard the Discard Option

The first thing I noticed in this testing was the fantastical "throughput" demonstrated at the array while running the mkfs(8) command with the "-t ext4" option/arg pair. As the following screen shot shows, the "throughput" at the array level was just shy of 72GB/s.

That’s not real I/O…I’ll explain…

EMC XtremIO Dual X-Brick Array During Ext4 mkfs(8). Default Options.

The default options for Ext4 include the discard (TRIM under the covers) option. The mkfs(8) manpage has this to say about the discard option:

Attempt to discard blocks at mkfs time (discarding blocks initially is useful on solid state devices and sparse / thin-provisioned storage). When the device advertises that discard also zeroes data (any subsequent read after the discard and before write returns zero), then mark all not-yet-zeroed inode tables as zeroed. This significantly speeds up filesystem initialization. This is set as default.

I’ve read that quoted text at least eleventeen times but the wording still sounds like gibberish-scented gobbledygook to me–well, except for the bit about significantly speeding up filesystem initialization.

Since XtremIO volumes are created thin I don’t see any reason for mkfs to take action to make it, what, thinner?  Please let me share test results challenging the assertion that the discard mkfs option results in faster file system initialization. This is the default functionality after all.

In the following terminal output you’ll see that the default mkfs options take 152 seconds to make a file system on a freshly-created 10TB XtremIO volume:


# time mkfs -t ext4 /dev/xtremio/fs/test
mke2fs 1.43-WIP (20-Jun-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
real 2m32.055s
user 0m3.648s
sys 0m17.280s
#

The mkfs(8) Command Without Default Discard Functionality

Please bear in mind that the default 152-second result is not due to languishing on pathetic physical I/O. The storage is fast. Please consider the following terminal output where I passed in the non-default -E option with the nodiscard argument. The file system creation took 4.8 seconds:

# time mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

real 0m4.856s
user 0m4.264s
sys 0m0.415s
#

I think 152 seconds down to 4.8 makes the point that with proper, thinly-provisioned storage the mkfs discard option does not "significantly speed up filesystem initialization." But initializing file systems is not something one does frequently, so investigation into the discard mount(8) option was in order.

Taking Ext4 For A Drive

Since I had this 10TB Ext4 file system–and a fresh focus on file system discard (storage TRIM) features–I thought I’d take it for a drive.

Discarded the Default Discard But Added The Non-Default Discard

While the default mkfs(8) command includes discard, the mount(8) command does not. I decided to investigate this option while unlinking a reasonable number of large files. To do so I ran a simple script (shown below) that copies 64 files of 16 gigabytes each–in parallel–into the Ext4 file system. I then timed a single invocation of the rm(1) command to remove all 64 of these files. Unlinking a file in a Linux file system is a metadata operation; however, when the discard option is used to mount the file system, each unlink operation includes TRIM operations being sent to storage. The following screen shot of the XtremIO performance dashboard was taken while the rm(1) command was running. The discard mount option turns a metadata operation into a rather costly storage operation.

Array Level Activity During Bulk rm(1) Command Processing. Ext4 (discard mount option)

The following terminal output shows the test step sequence used to test the discard mount option:

# umount /mnt ; mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test; mount -t ext4 -o discard /dev/xtremio/fs/test /mnt
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# cd mnt
# cat > cpit
for i in {1..64}; do ( dd if=/data1/tape of=file$i bs=1M oflag=direct )& done
wait
# time sh ./cpit > /dev/null 2>&1

real 5m31.530s
user 0m2.906s
sys 8m45.292s
# du -sh .
1018G .
# time rm -f file*

real 4m52.608s
user 0m0.000s
sys 0m0.497s
#

The following terminal output shows the same test repeated with the file system being mounted with the default (thus no discard) mount options:

# cd ..
# umount /mnt ; mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test; mount -t ext4 /dev/xtremio/fs/test /mnt
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# cd mnt
# cat > cpit
for i in {1..64}; do ( dd if=/data1/tape of=file$i bs=1M oflag=direct )& done
wait
#
# time sh ./cpit > /dev/null 2>&1

real 5m31.526s
user 0m2.957s
sys 8m50.317s
# time rm -f file*

real 0m16.398s
user 0m0.001s
sys 0m0.750s
#

This testing shows that mounting an Ext4 file system with the discard mount option dramatically impacts file removal operations. The default mount options (thus no discard option) performed the rm(1) command in 16 seconds whereas the same test took 292 seconds when mounted with the discard mount option.

So how can one perform the important house-cleaning that comes with TRIM operations?

The fstrim(8) Command

Ext4 supports user-invoked, online TRIM operations on mounted file systems. I would advise people to forego the discard mount option and opt for occasionally running the fstrim(8) command. The following is an example of how long it takes to execute fstrim on the same 10TB file system stored in an EMC XtremIO array. I think that foregoing the taxation of commands like rm(1) is a good thing–especially since running fstrim is allowed on mounted file systems and only takes roughly 11 minutes on a 10TB file system.

# time fstrim -v /mnt
/mnt: 10908310835200 bytes were trimmed

real 11m29.325s
user 0m0.000s
sys 2m31.370s
#
Summary

If you use thinly-provisioned storage and want file deletion in Ext4 to return space to the array you have a choice. You can choose to take serious performance hits when you create the file system (default mkfs(8) options) and when you delete files (optional discard mount(8) option) or you can occasionally execute the fstrim(8) command on a mounted file system.

Up Next

The next post in this series will focus on XFS.


Filed under: oracle

Log Buffer #432: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-17 13:24

Yet again, this log buffer edition brings some rich blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • Installing Oracle XE, ORDS and Apex on CentOS
  • Major Growth is Expected in the DBaaS Space. Are Your Skills Ready?
  • How to Hide Actions in OBPM 12c Workspace
  • You can configure auto refresh for ADF BC cached LOV and this works out of the box, no special coding is needed.
  • Search and Replace in Oracle SQL Developer Data Modeller

SQL Server:

  • Polyglot Persistence is a fancy term meaning that when storing data, it is best to use multiple data storage technologies, chosen based upon the way data is being utilized by individual applications or components of a single application.
  • HOW TO: Recreate SQL Server Statistics In a Different Environment
  • New SQL Server Management Studio 2015/June – with Check for Updates!
  • Power BI General Availability Announced: July 24, 2015
  • Ensuring Columns Are Always Present In A Table Returned By Power Query

MySQL:

  • Using MySQL sandbox for upgrade testing
  • Bypassing SST in Percona XtraDB Cluster with binary logs
  • Un-obvious “Unknown column in ‘field list’” error
  • Bypassing SST in Percona XtraDB Cluster with incremental backups
  • innodb_fast_checksum=1 and upgrading to MySQL 5.6

Learn more about Pythian's expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #432: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Real Life DBA: Finding Errors in SQL Server

Pythian Group - Fri, 2015-07-17 13:00

I tend to spend time on a few different forums, mostly the StackExchange network, and find folks asking questions about various errors seen in the error log of a SQL Server instance. Most DBAs are aware that the errors SQL Server provides can be vague at times, so it takes a bit of knowing where to look for more information. In some situations the event is long past before you can get to the instance to start your analysis, so you have to focus on the areas that contain historical information. A few of those places to check are:

  1. Various DMVs, depending on the error; for things like connectivity errors you can search the DMV sys.dm_os_ring_buffers.
  2. The System_Health XEvent session, which captures things like deadlock reports; some of what you find in the ring buffer is written to this session as well.
  3. The Default Trace. This has been around since SQL Server 2005, and since Profiler and traces are on the deprecation list it is unknown how long it will stay around, but it can help while it is there (see the sketch after this list).
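As a rough illustration (my own sketch, not from the original post), this is the sort of query that pulls error-related events out of the default trace; the event-class filter is an assumption and may need adjusting:

-- Locate the default trace file, then read error-related events from it.
DECLARE @tracefile nvarchar(260);

SELECT @tracefile = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT t.StartTime, t.SPID, t.DatabaseName, t.LoginName, t.TextData
FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
JOIN sys.trace_events AS e
  ON t.EventClass = e.trace_event_id
WHERE e.name IN (N'ErrorLog', N'EventLog')   -- assumed event classes of interest
ORDER BY t.StartTime DESC;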

In the field, we have our Avail Monitoring that customers can utilize for monitoring an instance. It will monitor the error log for a SQL Server instance, and we will get a page when something is found that it does not know about. This is an error message I actually received today for a client:

[Screenshot: the Avail alert page for error 9100]

That error message only tells me the SPID it occurred under, and we know someone else could be using that SPID by the time I get to the server. It does not provide the login or even the database it pertains to. So I went searching around and decided to check the default trace, and actually found this bit of information:

[Screenshots: the matching default trace entries, showing the login and the statement being run]

You can see from this that the SPID matches, and I was able to get the login that caused the error and what was being done. When you get an issue with SQL Server that is vague or does not provide enough information, go back to the basics and recall what options you have in front of you.

Discover more about our expertise in SQL Server.

The post Real Life DBA: Finding Errors in SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Announcing “SLOB Recipes”

Kevin Closson - Fri, 2015-07-17 10:28

I’ve started updating the SLOB Resources page with links to “recipes” for certain SLOB testing. The first installment is the recipe for loading 8TB scale SLOB 2.3 Multiple Schema Model with a 2-Socket Linux host attached to EMC XtremIO. Recipes will include (at a minimum) the relevant SLOB program output (e.g., setup.sh or runit.sh), init.ora and slob.conf.

Please keep an eye on the SLOB Resources page for updates…and don’t miss the first installment. It’s quite interesting.



Filed under: oracle

Monitoring and Diagnostics without Oracle Enterprise Manager

Hemant K Chitale - Fri, 2015-07-17 07:22
Here's a copy of a presentation I made at Singapore Oracle Sessions - III, 14-Jul-2015, on the topic of Monitoring and Diagnostics without Oracle Enterprise Manager.

.
.
.

Categories: DBA Blogs

Using JRE 1.8 for the Liberty Buildpack in Bluemix

Pas Apicella - Fri, 2015-07-17 06:17
The latest Liberty buildpack update adds the ability to switch to the JRE 1.8 runtime. Given that I am now switching to JDK 1.8 for all my projects this has come in very handy, and of course Spring Boot favors JDK 1.8 as well. The default is still JRE 1.7, so to switch to 1.8 set an environment variable in your manifest.yml as shown below.

manifest.yml

applications:
- name: pas-myapp
  memory: 512M
  instances: 1
  host: pas-myapp
  domain: mybluemix.net
  path: ./target/myapp-1.0-SNAPSHOT.jar
  env:
   JBP_CONFIG_IBMJDK: "version: 1.8.+"
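
Alternatively (my own note, assuming the standard Cloud Foundry CLI), the same environment variable can be set without touching the manifest; a restage is then required so the buildpack picks up the change:

cf set-env pas-myapp JBP_CONFIG_IBMJDK "version: 1.8.+"
cf restage pas-myapp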

More Information

https://developer.ibm.com/bluemix/2015/05/05/liberty-buildpack-updates-java-8-java-ee-7-updates/
Categories: Fusion Middleware

SQL Server 2016: In-Memory OLTP enhancement

Yann Neuhaus - Fri, 2015-07-17 05:19

CTP 2.1 has been released for some weeks now, and it is time to check which interesting enhancements have already been delivered for the In-Memory OLTP feature.
I will first have a look at memory-optimized tables.

Memory Optimized table

I have a memory-optimized table named command_queue_HK and I will add a column to this table. This table is saved on my disk in an XTP folder with its corresponding files:

[Screenshot: the checkpoint files for command_queue_HK in the XTP folder]

I execute the following command:

[Screenshot: the ALTER TABLE ... ADD statement being executed]
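
As a rough sketch of the kind of statement involved (the column name and data type are my assumptions, not taken from the screenshot):

ALTER TABLE dbo.command_queue_HK
    ADD processed_date DATETIME2 NULL;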

If I have a look at my XTP folder I see that I have six new files:

[Screenshot: the XTP folder showing six new files]

Does it mean that each time I run an ALTER TABLE I get six new files? Let's try another alter:

[Screenshot: a second ALTER TABLE statement]

Go back to my XTP container:

[Screenshot: six more new files in the XTP container]

I have six new files for my last alter. Each time you run an ALTER TABLE you create six new files corresponding to the new structure of your table, which could be really disk-consuming if you alter your tables a lot; hopefully that does not happen too often.
For the time being I cannot find a process which cleans up obsolete files, but when you restart your SQL Server engine this process runs: it deletes all files corresponding to old versions of your tables and recreates just six for the current structure:

[Screenshot: the XTP folder after a restart, with only six files left]

This ALTER TABLE is an offline process and needs twice the size of the table in memory, so don't forget it.

It is also possible now to use Non-BIN2 collations in index key columns:

[Screenshot: an index key column using a non-BIN2 collation]
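
A hedged sketch of what this now allows (the table, column, and collation choices are mine, not from the screenshot):

CREATE TABLE dbo.collation_demo
(
    id   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    name NVARCHAR(50) COLLATE French_CI_AS NOT NULL,
    INDEX ix_name NONCLUSTERED (name)   -- non-BIN2 index key column
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);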

Some functionalities are not yet available (a non-exhaustive list follows).

Foreign Keys for Memory Optimized table:

[Screenshot: the error returned when adding a foreign key]

Check constraint:

[Screenshot: the error returned when adding a check constraint]

Natively Compiled Stored Procedure

For natively compiled stored procedures we also have some enhancements, and as for memory-optimized tables the first one is the possibility to alter them.
If I check my container folder I see the six files for my natively compiled stored procedure:

[Screenshot: the six files backing the natively compiled stored procedure]

If I right-click on my SP in Management Studio I see that I have two possibilities to update it:

  • select Modify
  • go through “Script Stored Procedure as”, “ALTER to…”, select where to script it

Once my Stored Procedure is scripted, I can make the modifications I need and execute the code:

[Screenshot: the ALTER PROCEDURE statement being executed]
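
A minimal sketch of such an ALTER (the procedure name and body are assumptions on my part):

ALTER PROCEDURE dbo.usp_touch_queue
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- hypothetical body: touch unprocessed rows in the memory-optimized table
    UPDATE dbo.command_queue_HK
    SET processed_date = SYSUTCDATETIME()
    WHERE processed_date IS NULL;
END;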


Once the code is executed, I don't get six additional files as for tables, but directly six new files, and just six:

[Screenshot: six new files replacing the previous ones]

Executing a function in a natively compiled stored procedure is also possible now, but only with natively compiled modules:

[Screenshot: EXECUTE of a natively compiled function inside the procedure]
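
A hypothetical sketch (names and body are my assumptions) of the kind of natively compiled scalar function such a procedure can call:

CREATE FUNCTION dbo.fn_add_one (@i INT)
RETURNS INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    RETURN @i + 1;
END;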

It does not work if we try to call a nested stored procedure.

As for memory-optimized tables, some functionalities are not yet available in this CTP 2.1.

Subqueries are also not available for the time being:

[Screenshot: the error returned when using a subquery]

There are interesting enhancements for In-Memory OLTP already available with this CTP 2.1, like ALTER TABLE or ALTER on natively compiled procedures, but some important ones, like foreign keys or check constraints for In-Memory tables, are not there yet. We will have to wait for the RTM version to see the remaining caveats fixed.

Descending Indexes

Jonathan Lewis - Fri, 2015-07-17 01:42

I’ve written about optimizer defects with descending indexes before now but a problem came up on the OTN database forum a few days ago that made me decide to look very closely at an example where the arithmetic was clearly defective. The problem revolves around a table with two indexes, one on a date column (TH_UPDATE_TIMESTAMP) and the other a compound index which starts with the same column in descending order (TH_UPDATE_TIMESTAMP DESC, TH_TXN_CODE). The optimizer was picking the “descending” index in cases where it was clearly the wrong index (even after the statistics had been refreshed and all the usual errors relating to date-based indexes had been discounted). Here’s an execution plan from the system which shows that there’s something wrong with the optimizer:


SELECT COUNT(*) FROM TXN_HEADER WHERE TH_UPDATE_TIMESTAMP BETWEEN SYSDATE-30 AND SYSDATE;

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE    |                  |     1 |     8 |            |          |
|*  2 |   FILTER           |                  |       |       |            |          |
|*  3 |    INDEX RANGE SCAN| TXN_HEADER_IDX17 |  1083K|  8462K|     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

There are two clues here: first, Oracle has used the (larger) two-column index when the single column is (almost guaranteed to be) the better choice simply because it will be smaller; secondly, we have a cost of 4 to acquire 1M rowids through an (implicitly b-tree) index range scan, which would require at least 250,000 index entries per leaf block to keep the cost that low (or 2,500 if you set the optimizer_index_cost_adj to 1; so it might just be possible if you had a 32KB block size).

The OP worked hard to supply information we asked for, and to follow up any suggestions we made; in the course of this we got the information that the table had about 90M rows and this “timestamp” column had about 45M distinct values ranging from 6th Sept 2012 to 2nd July 2015 with no nulls.

Based on this information I modelled the problem in an instance of 11.2.0.4 (the OP was using 11.2.0.3).


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                                          id,
        to_date('06-SEP-2012 15:13:00','dd-mon-yyyy hh24:mi:ss') +
                trunc((rownum - 1 )/4) / (24 * 60)                      upd_timestamp,
        mod(rownum - 1,10)                                              txn_code,
        rpad('x',50)                                                    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 4 * 1030  *  24 * 60
;

create index t1_asc on t1(upd_timestamp) nologging;
create index t1_desc on t1(upd_timestamp desc) nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1',
                cascade          => false
        );
end;
/

My data set has 4 rows per minute from 6th Sept 2012 to 3rd July 2015, with both an ascending and descending index on the upd_timestamp column. For reference, here are the statistics about the two indexes:


INDEX_NAME               SAMPLE     BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
-------------------- ---------- ---------- ----------- -----------------
T1_DESC                 5932800          2       17379            154559
T1_ASC                  5932800          2       15737             59825

The ascending index is smaller with a significantly better clustering_factor, so for queries where either index would be a viable option the ascending index is the one (we think) that the optimizer should choose. In passing, the 5.9M index entries is exactly the number of rows in the table – these stats were computed automatically as the indexes were created.

Here’s a simple query with execution plan (with rowsource execution stats):

select max(id) from t1 where upd_timestamp between
to_date('01-jun-2015','dd-mon-yyyy')                and
to_date('30-jun-2015','dd-mon-yyyy')

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |      1 |        |    29 (100)|      1 |00:00:01.29 |    4710 |
|   1 |  SORT AGGREGATE              |         |      1 |      1 |            |      1 |00:00:01.29 |    4710 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1      |      1 |    167K|    29   (0)|    167K|00:00:00.98 |    4710 |
|*  3 |    INDEX RANGE SCAN          | T1_DESC |      1 |    885 |     5   (0)|    167K|00:00:00.35 |     492 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."SYS_NC00005$">=HEXTORAW('878CF9E1FEF8FEFAFF')  AND
              "T1"."SYS_NC00005$"<=HEXTORAW('878CF9FEF8FEF8FF') )
       filter((SYS_OP_UNDESCEND("T1"."SYS_NC00005$")>=TO_DATE(' 2015-06-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND SYS_OP_UNDESCEND("T1"."SYS_NC00005$")<=TO_DATE(' 2015-06-30 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))

The optimizer’s index estimate of 885 rowids is a long way off the actual rowcount of 167,000 – even though I have perfect stats describing uniform data and the query is simple enough that the optimizer should be able to get a good estimate. Mind you, the predicate section looks a lot messier than you might expect, and the table estimate is correct (though not consistent with the index estimate, of course).

Here’s the plan I get when I make the descending index invisible:


--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |  2146 (100)|      1 |00:00:01.25 |    2134 |
|   1 |  SORT AGGREGATE              |        |      1 |      1 |            |      1 |00:00:01.25 |    2134 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    167K|  2146   (1)|    167K|00:00:00.94 |    2134 |
|*  3 |    INDEX RANGE SCAN          | T1_ASC |      1 |    167K|   453   (2)|    167K|00:00:00.31 |     446 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("UPD_TIMESTAMP">=TO_DATE(' 2015-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "UPD_TIMESTAMP"<=TO_DATE(' 2015-06-30 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

With the descending index invisible we can see that the cardinality estimate for the ascending index is correct, and notice how much higher this makes the cost. It's not surprising that the optimizer picked the wrong index with such a difference in cost. The question now is why the optimizer got the index cardinality (i.e. selectivity, hence cost) so badly wrong.

The answer is that the optimizer has made the same mistake that applications make by storing dates as character strings. It's using the normal range-based calculation for the sys_op_descend() values recorded against the virtual column and has lost all understanding of the fact that these values represent dates. I can demonstrate this most easily by creating one more table, inserting a couple of rows, gathering stats, and showing you what the internal storage of a couple of "descending" dates looks like.


drop table t2;

create table t2 (d1 date);
create index t2_i1 on t2(d1 desc);

insert into t2 values('01-Jun-2015');
insert into t2 values('30-Jun-2015');
commit;

select d1, sys_nc00002$ from t2;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T2',
                method_opt       => 'for all columns size 1',
                cascade          => false
        );
end;
/

column endpoint_value format 999,999,999,999,999,999,999,999,999,999,999,999
break on table_name skip 1

select
        table_name, column_name, endpoint_number, endpoint_value, to_char(endpoint_value,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX') end_hex
from
        user_tab_histograms
where
        table_name in ('T1','T2')
and     column_name like 'SYS%'
order by
        table_name, column_name, endpoint_number
;

I’ve put the dates 1st June 2015 and 30th June 2015 into the table because those were the values I used in the where clause of my query. Here are the results showing the internal representation of the (descending) index column and the stored low and high values for the virtual columns in both t1 and t2.


D1        SYS_NC00002$
--------- ------------------------
01-JUN-15 878CF9FEF8FEF8FF
30-JUN-15 878CF9E1FEF8FEFAFF

TABLE_NAME           COLUMN_NAME          ENDPOINT_NUMBER                                   ENDPOINT_VALUE END_HEX
-------------------- -------------------- --------------- ------------------------------------------------ ---------------------------------
T1                   SYS_NC00005$                       0  703,819,340,111,320,000,000,000,000,000,000,000    878CF8FCEFF30BCEBC8823F7000000
                     SYS_NC00005$                       1  703,880,027,955,346,000,000,000,000,000,000,000    878FF6F9EFF20256F3B404F7400000

T2                   SYS_NC00002$                       0  703,819,411,001,549,000,000,000,000,000,000,000    878CF9E1FEF8F24C1C7CED46200000
                     SYS_NC00002$                       1  703,819,419,969,389,000,000,000,000,000,000,000    878CF9FEF8FEEED28AC5B22A200000

If you check the t2 data (sys_nc00002$) values against the end_hex values in the histogram data you'll see they match up to the first 6 bytes (12 digits). Oracle has done its standard processing – take the first 6 bytes of the column, convert to decimal, round to the most significant 15 digits (technically round(value, -21)), convert and store the result as hex.

So let's do some arithmetic. The selectivity of a range scan that is in-bounds is (informally): "range we want" / "total range". I've set up t2 to show us the values we will need to calculate the range we want, and I've reported the t1 values to allow us to calculate the total range: we just have to subtract the lower value (endpoint number 0) from the higher value for the two sys_nc0000N$ columns. So (ignoring the 21 zeros everywhere) our selectivity is:

  • (703,819,419,969,389 – 703,819,411,001,549) / ( 703,880,027,955,346 – 703,819,340,111,320) = 0.00014777
  • We have 5.9M rows in the table, so the cardinality estimate should be about: 5,932,800 * 0.00014777 = 876.69

The actual cardinality estimate was 885 – but we haven’t allowed for the exact form of the range-based predicate: we should add 1/num_distinct at both ends because the range is a closed range (greater than or EQUAL to, less than or EQUAL to) – which takes the cardinality estimate up to 884.69 which rounds to the 885 that the optimizer produced.
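
As a quick sanity check (my own, not part of the original note), the same arithmetic can be reproduced from dual, using the endpoint values above and the 1,483,200 distinct timestamps implied by the model's 4 rows per value:

select
        round(
                5932800 * (
                        (703819419969389 - 703819411001549) /
                        (703880027955346 - 703819340111320)
                      + 2 / 1483200
                ),
                2
        )       index_cardinality
from
        dual;

-- returns 884.69, which rounds to the optimizer's estimate of 885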

Conclusion

This note shows that Oracle is (at least for dates) losing information about the underlying data when dealing with the virtual columns associated with descending columns in indexes. As demonstrated that can lead to extremely bad selectivity estimates for predicates based on the original columns, and these selectivity estimates make it possible for Oracle to choose the wrong index and introduce a spuriously low cost into the calculation of the full execution plan.

Footnote

This note seems to match bug note 11072246 "Wrong Cardinality estimations for columns with DESC indexes", but that bug is reported as fixed in 12.1, while this test case reproduces in 12.1.0.2


Error referenced ‘irman ioracle’ during binary installation

DBASolved - Thu, 2015-07-16 19:27

This week I decided to redo a few of my VirtualBox machines and build a few new ones for testing. Getting Oracle Linux 6.6 installed was a breeze; however, when I started to install the Oracle Database 12c binaries I was hitting errors during the linking phase. This had me puzzled for a bit, to say the least, as I kept getting the same error.

After driving myself nuts, I decided to look closer at what was going on. The file listed in the error message contained a reference to 'irman ioracle'.

So how did I get past this issue? The message refers to a linking issue with the Oracle binaries, due to the libjavavm12.a file not being placed in $ORACLE_HOME/lib. To fix this, I had to run:

cp /opt/app/oracle/product/12.1.0.2/dbhome_1/javavm/jdk/jdk6/lib/libjavavm12.a /opt/app/oracle/product/12.1.0.2/dbhome_1/lib/
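
Before retrying the failed link step, it doesn't hurt to confirm the archive is now in place (my own sanity check, reusing the paths from above):

ls -l /opt/app/oracle/product/12.1.0.2/dbhome_1/lib/libjavavm12.a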

Once this was run, the installation completed without error and the other configuration assistants within the binaries were able to complete successfully.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs