
DBA Blogs

Memory added based on buffer pool advisory did not give desired result

Bobby Durrett's DBA Blog - Wed, 2015-08-19 14:37

Development asked me to look at ways to cut the run time of a series of PeopleSoft payroll batch jobs, so I took an AWR report of the entire 4-hour period.  Based on the waits, the percentage of elapsed time spent on I/O, and the buffer pool advisory, I chose to double the size of the buffer cache. But the added memory did not improve the run time of the batch jobs. Maybe the affected blocks are only read into memory once, so they would not get cached no matter how big the buffer pool was.

Here is the original run on June 22 with the original memory settings:

Cache Sizes

                   Begin   End
Buffer Cache:      3,328M  3,424M   Std Block Size:  8K
Shared Pool Size:  1,600M  1,520M   Log Buffer:      7,208K

Top 5 Timed Foreground Events

Event                    Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read  1,910,393  10,251   5              72.03      User I/O
DB CPU                              2,812                   19.76
log file sync            35,308     398      11             2.80       Commit
resmgr:cpu quantum       31,551     62       2              0.43       Scheduler
db file scattered read   7,499      60       8              0.42       User I/O

Buffer Pool Advisory
  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P  Size for Est (M)  Size Factor  Buffers (thousands)  Est Phys Read Factor  Estimated Phys Reads (thousands)  Est Phys Read Time  Est %DBtime for Rds
D  336    0.10  41   2.71  6,513,502  1  9842530.00
D  672    0.20  83   2.42  5,831,130  1  8737799.00
D  1,008  0.29  124  2.18  5,241,763  1  7783636.00
D  1,344  0.39  166  1.96  4,720,053  1  6939010.00
D  1,680  0.49  207  1.77  4,250,981  1  6179603.00
D  2,016  0.59  248  1.59  3,825,904  1  5491420.00
D  2,352  0.69  290  1.43  3,438,372  1  4864023.00
D  2,688  0.79  331  1.28  3,083,734  1  4289879.00
D  3,024  0.88  373  1.15  2,758,459  1  3763273.00
D  3,360  0.98  414  1.02  2,459,644  1  3279504.00
D  3,424  1.00  422  1.00  2,405,118  1  3191229.00
D  3,696  1.08  455  0.91  2,184,668  1  2834329.00
D  4,032  1.18  497  0.80  1,931,082  1  2423784.00
D  4,368  1.28  538  0.71  1,696,756  1  2044421.00
D  4,704  1.37  579  0.62  1,479,805  1  1693185.00
D  5,040  1.47  621  0.53  1,278,370  1  1367070.00
D  5,376  1.57  662  0.45  1,090,505  1  1062925.00
D  5,712  1.67  704  0.38  914,112    1  777352.00
D  6,048  1.77  745  0.31  746,434    1  505888.00
D  6,384  1.86  786  0.24  580,310    1  236941.00
D  6,720  1.96  828  0.17  414,233    1  149325.00

In the SQL ordered by Elapsed Time report, the top batch job SQL was 99.14% I/O.

Based on this report it seems that the number of physical reads could be reduced to about 20% of what they were on June 22 by doubling the size of the buffer cache. But adding the memory did not cut the number of physical reads in any major way.
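
If you want to double-check the advisory outside of an AWR report, here is a minimal sketch against V$DB_CACHE_ADVICE (assuming the DEFAULT pool and the 8K block size shown above); the figures should line up with the Buffer Pool Advisory section:

-- current estimates behind the AWR Buffer Pool Advisory section
select size_for_estimate, size_factor, buffers_for_estimate,
       estd_physical_read_factor, estd_physical_reads
from v$db_cache_advice
where name = 'DEFAULT'
and block_size = 8192
and advice_status = 'ON'
order by size_for_estimate;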

Here is yesterday’s run:

Cache Sizes

                   Begin   End
Buffer Cache:      6,848M  6,816M   Std Block Size:  8K
Shared Pool Size:  3,136M  3,136M   Log Buffer:      16,572K

Top 5 Timed Foreground Events

Event                    Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read  1,789,852  10,173   6              72.15      User I/O
DB CPU                              2,970                   21.06
log file sync            37,562     200      5              1.42       Commit
resmgr:cpu quantum       24,996     59       2              0.42       Scheduler
db file scattered read   5,409      54       10             0.38       User I/O

Buffer Pool Advisory
  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P  Size for Est (M)  Size Factor  Buffers (thousands)  Est Phys Read Factor  Estimated Phys Reads (thousands)  Est Phys Read Time  Est %DBtime for Rds
D  672     0.10  83     11.25  516,440  1  1309098.00
D  1,344   0.20  166    5.98   274,660  1  683610.00
D  2,016   0.29  248    4.02   184,712  1  450915.00
D  2,688   0.39  331    2.90   133,104  1  317404.00
D  3,360   0.49  414    2.20   100,860  1  233990.00
D  4,032   0.59  497    1.80   82,768   1  187185.00
D  4,704   0.69  580    1.53   70,445   1  155305.00
D  5,376   0.79  663    1.31   60,345   1  129176.00
D  6,048   0.88  745    1.14   52,208   1  108127.00
D  6,720   0.98  828    1.01   46,477   1  93301.00
D  6,848   1.00  844    1.00   45,921   1  91862.00
D  7,392   1.08  911    0.95   43,572   1  85785.00
D  8,064   1.18  994    0.89   40,789   1  78585.00
D  8,736   1.28  1,077  0.85   38,889   1  73671.00
D  9,408   1.37  1,160  0.81   37,112   1  69073.00
D  10,080  1.47  1,242  0.77   35,490   1  64876.00
D  10,752  1.57  1,325  0.75   34,439   1  62158.00
D  11,424  1.67  1,408  0.73   33,353   1  59347.00
D  12,096  1.77  1,491  0.71   32,524   1  57204.00
D  12,768  1.86  1,574  0.69   31,909   1  55613.00
D  13,440  1.96  1,656  0.68   31,361   1  54194.00

After the memory was added, the same top batch job SQL was 98.80% I/O.  Some improvement, but not nearly as much as I expected based on the buffer pool advisory.

I guess the moral of the story is that the buffer pool advisory does not apply to every specific workload and is only a general guideline.  Maybe this is the same kind of fallacy you have with buffer cache hit ratios, where certain workloads make the ratio irrelevant.  Here were the hit ratios: before 98.59%, after 98.82%.  Basically these are the same.
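
For reference, one common way to compute a buffer cache hit ratio like the ones above is from V$BUFFER_POOL_STATISTICS; this is just a sketch of the usual formula (the view is cumulative since instance startup, so deltas over the batch window are what really matter):

select name,
       round(1 - (physical_reads / (db_block_gets + consistent_gets)), 4) hit_ratio
from v$buffer_pool_statistics;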

I just thought I would share this to document a real case of using the buffer pool advisory and having it not produce the expected results.

Bobby

Categories: DBA Blogs

Script to get previous month’s AWR report

Bobby Durrett's DBA Blog - Tue, 2015-08-18 11:58

We keep 6 weeks of history in the AWR on our databases, but I want to capture some information for long-term trending. What I really want to do is capture some metrics and put them in database tables to use to generate reports, but I have not had time to build the scripts to do that.  So, instead I built a simple set of scripts to capture an AWR report for the previous month. Since we have 6 weeks of history, if I run my report in the first week of a month, all the days of the previous month should still be in the AWR. I have just finished building this script, so I cannot promise that there is value in keeping monthly AWR reports, but I thought it was worth sharing.  Maybe something in the code will be useful to someone.  Here is the script:

-- Has to be run in the first week of the month so the entire
-- previous month is available. We keep 6 weeks of awr history.

-- setup columns for snapshots

column bsnap1 new_value bsnap1s noprint;
column esnap1 new_value esnap1s noprint;
column filenm new_value filenms noprint;

-- get snap id for first day of previous month

select min(snap_id) bsnap1
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
and
STARTUP_TIME=
(select max(STARTUP_TIME)
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD')))));

-- get snap id for last day of previous month

select max(snap_id) esnap1
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))));

-- get html file name

select 
name||
'_'|| 
to_char(extract(month from 
(sysdate-to_number(to_char(sysdate,'DD')))))||
'_'|| 
to_char(extract(year from 
(sysdate-to_number(to_char(sysdate,'DD')))))||
'.html' filenm
from v$database;

-- get awr report

define report_type='html';
define begin_snap = &bsnap1s;
define end_snap = &esnap1s;
define report_name = '&filenms';

define num_days = 0;

@@$ORACLE_HOME/rdbms/admin/awrrpt.sql

undefine report_type
undefine report_name
undefine begin_snap
undefine end_snap
undefine num_days

If the database bounced during the previous month, we get the set of snapshots after the last bounce.

I am not sure whether this approach will give us any benefits but I think it may help to show how to use queries to pick begin and end snapshots and then run an AWR report.

The tricky part of the code is this:

extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))

It just returns the previous month as a number.  It is August now so here is what it returns today:

SQL> select
 2 extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
 3 prev_month
 4 from dual;

PREV_MONTH
----------
 7

sysdate-to_number(to_char(sysdate,'DD')) is the last day of the previous month:

SQL> select sysdate-to_number(to_char(sysdate,'DD')) last_day
 2 from dual;

LAST_DAY
---------
31-JUL-15
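
As an aside (not used in the script above), TRUNC(SYSDATE,'MM') - 1 is an equivalent way to get the last day of the previous month:

-- alternative expression for the last day of the previous month
select trunc(sysdate,'MM') - 1 last_day,
       extract(month from (trunc(sysdate,'MM') - 1)) prev_month
from dual;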

– Bobby

Categories: DBA Blogs

John King in Cleveland CTOWN for September meeting

Grumpy old DBA - Mon, 2015-08-17 04:50
For our September 18 Friday afternoon quarterly meeting, the Northeast Ohio Oracle Users Group is lucky to have a great speaker, Oracle Ace Director John King, presenting.  It's the usual networking and free lunch beginning at noon at the Rockside Oracle offices, followed by a quick business meeting and presentations starting at 1 pm.

Full details on NEOOUG Sept 2015 presentations

For the November meeting we have Daniel Morgan, woo hoo!
Categories: DBA Blogs

Log Buffer #436: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-14 08:00

This Log Buffer Edition covers the top blog posts of the week from the Oracle, SQL Server and MySQL arenas.

Oracle:

  • Momentum and activity regarding the Data Act are gathering steam, and it is off to a great start too. The Data Act directs the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) to establish government-wide financial reporting data standards by May 2015.
  • RMS has a number of async queues for processing new item location, store add, warehouse add, item and po induction. We have seen rows stuck in the queues and needed to release the stuck AQ Jobs.
  • We have a number of updates to partitioned tables that are run from within pl/sql blocks which have either an execute immediate ‘alter session enable parallel dml’ or execute immediate ‘alter session force parallel dml’ in the same pl/sql block. It appears that the alter session is not having any effect as we are ending up with non-parallel plans.
  • Commerce Cloud, a new flexible and scalable SaaS solution built for the Oracle Public Cloud, adds a key new piece to the rich Oracle Customer Experience (CX) applications portfolio. Built with the latest commerce technology, Oracle Commerce Cloud is designed to ignite business innovation and rapid growth, while simplifying IT management and reducing costs.
  • Have you used R12: Master Data Fix Diagnostic to Validate Data Related to Purchase Orders and Requisitions?

SQL Server:

  • SQL Server 2016 Community Technology Preview 2.2 is available
  • What is Database Lifecycle Management (DLM)?
  • SSIS Catalog – Path to backup file could not be determined
  • SQL SERVER – Unable to Bring SQL Cluster Resource Online – Online Pending and then Failed
  • Snapshot Isolation Level and Concurrent Modification Collisions – On Disk and In Memory OLTP

MySQL:

  • A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving.
  • What is MySQL Package Verification? Package verification (Pkgver for short) refers to black box testing of MySQL packages across all supported platforms and across different MySQL versions. In Pkgver, packages are tested in order to ensure that the basic user experience is as it should be, focusing on installation, initial startup and rudimentary functionality.
  • With the rise of agile development methodologies, more and more systems and applications are built in series of iterations. This is true for the database schema as well, as it has to evolve together with the application. Unfortunately, schema changes and databases do not play well together.
  • MySQL replication is a process that allows you to easily maintain multiple copies of MySQL data by having them copied automatically from a master to a slave database.
  • In Case You Missed It – Breaking Databases – Keeping your Ruby on Rails ORM under Control.

The post Log Buffer #436: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Thoughts on Google Cloud Dataflow

Pythian Group - Thu, 2015-08-13 15:20

Google Cloud Dataflow is a data processing tool developed by Google that runs in the cloud. Dataflow is an easy-to-use, flexible tool that delivers completely automated scaling. It is deeply tied to the Google cloud infrastructure, making it very powerful for projects running in Google Cloud.

Dataflow is an attractive resource management and job monitoring tool because it automatically manages all of the Google Cloud resources, including creating and tearing down  Google Compute Engine resources, communicating with Google Cloud Storage, working with Google Cloud Pub/Sub, aggregating logs, etc.

Cloud Dataflow has the following major components:

SDK – The Dataflow SDK provides a programming model that simplifies/abstracts out the processing of large amounts of data. Dataflow only provides a Java SDK at the moment, which is a barrier for non-Java programmers. More on the programming model later.

Google Cloud Platform Managed Services – This is one of my favourite features in Dataflow. Dataflow manages and ties together components, such as Google Compute Engine, spins up and tears down VMs, manages BigQuery, aggregates logs, etc.

These two components can be used together to create jobs.

Being programmatic, Dataflow is extremely flexible. It works well for both batch and streaming jobs. Dataflow excels at high-volume computations and provides a unified programming model, which is very efficient and rather simple considering how powerful it is.

The Dataflow programming model simplifies the mechanics of large-scale data processing and abstracts out a lot of the lower level tasks, such as cluster management, adding more nodes, etc. It lets you focus on the logical aspect of your pipeline and not worry about how the job will run.

The Dataflow pipeline consists of four major abstractions:

  • Pipelines – A pipeline represents a complete process on a dataset or datasets. The data could be brought in from external data sources. It could then have a series of transformation operations, such as filter, joins, aggregation, etc., applied to the data to give it meaning and to achieve its desired form. This data could be then written to a sink. The sink could be within the Google Cloud platform or external. The sink could even be the same as the data source.
  • PCollections – PCollections are datasets in the pipeline. PCollections could represent datasets of any size. These datasets could be bounded (fixed size – such as national census data) or unbounded (such as a Twitter feed or data from weather sensors). PCollections are the input and output of every transform operation.
  • Transforms – Transforms are the data processing steps in the pipeline. Transforms take one or more PCollections, apply some transform operations to those collections, and then output to a PCollection.
  • I/O Sinks and Sources – The Source and Sink APIs provide functions to read data into and out of collections. The sources act as the roots of the pipeline and the sinks are the endpoints of the pipeline. Dataflow has a set of built-in sinks and sources, but it is also possible to write custom sinks and sources for other data sources.

Dataflow is also planning to add integration for Apache Flink and Apache Spark. Adding Spark and Flink integration would be a huge feature since it would open up the possibilities to use MLlib, Spark SQL, and Flink machine-learning capabilities.

One of the use cases we explored was to create a pipeline that ingests streaming data from several POS systems using Dataflow’s streaming APIs. This data can be then joined with customer profile data that is ingested incrementally on a daily basis from a relational database. We can then run some filtering and aggregation operations on this data. Using the sink for BigQuery, we can insert the data into BigQuery and then run queries. What makes this so attractive is that in this whole process of ingesting vast amounts of streaming data, there was no need to set up clusters or networks or install software, etc. We stayed focused on the data processing and the logic that went into it.

To summarize, Dataflow is the only data processing tool that completely manages the lower level infrastructure. This removes several API calls for monitoring the load and spinning up and tearing down VMs, aggregating logs, etc., and lets you focus on the logic of the task at hand.  The abstractions are very easy to understand and work with and the Dataflow API also provides a good set of built in transform operations for tasks such as filtering, joining, grouping, and aggregation. Dataflow integrates really well with all components in the Google Cloud Platform, however, Dataflow does not have SDKs in any language besides Java, which is somewhat restrictive.

The post Thoughts on Google Cloud Dataflow appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Applied July Patch Sets To Test Databases

Bobby Durrett's DBA Blog - Tue, 2015-08-11 16:09

I applied the current July patch sets to an 11.2 and a 12.1 test database.  Now I have an 11.2.0.4.7 and a 12.1.0.2.4 test database.  It is helpful to have test databases that are on the most current patch sets and releases.  If I see unexpected behavior on some other database I can try the same thing on the patched test databases to see if some patch changed the behavior to what I expect.  Also, our production databases are all on 11.2.0.4 or earlier releases, so I can check whether the new fully patched 12.1 release behaves differently from our older systems.

Here are the patch numbers:

6880880 – current version of opatch

20760982 – 11.2.0.4.7

20831110 – 12.1.0.2.4

My test environments are on x86-64 Linux.
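
As a quick sanity check after patching (my addition, not from the original post), the data dictionary records what was applied; DBA_REGISTRY_SQLPATCH is the 12.1 view and DBA_REGISTRY_HISTORY shows the 11.2 PSU entries:

-- on the 12.1.0.2 test database
select action_time, patch_id, description from dba_registry_sqlpatch order by action_time;

-- on the 11.2.0.4 test database
select action_time, id, comments from dba_registry_history order by action_time;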

– Bobby

Categories: DBA Blogs

Oracle 12.1.0.2c Standard Cluster: New Location / Name For Alert Log

Oracle in Action - Mon, 2015-08-10 23:46

RSS content

In my last post, I had presumed there is a bug since I discovered an empty clusterware alert log in its conventional location, i.e. $ORACLE_HOME/log/<hostname>, in a 12.1.0.2 standard cluster.

[grid@host01 ~]$ crsctl query crs activeversion;
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@host01 host01]# ls -l /u01/app/12.1.0/grid/log/host01/alerthost01.log

-rw-rw-r-- 1 grid oinstall 0 Jun 15 14:10 /u01/app/12.1.0/grid/log/host01/alerthost01.log

But as commented by Ricardo Portillo Proni, in 12c the location of the alert log has been changed to $ORACLE_BASE/diag/crs/<hostname>/crs/trace/

Hence, I could successfully locate the alert log on node host01 in the directory $ORACLE_BASE/diag/crs/host01/crs/trace/

[grid@host01 trace]$ ls -l $ORACLE_BASE/diag/crs/host01/crs/trace/alert*

-rw-rw---- 1 root oinstall 812316 Aug 11 10:22 /u01/app/grid/diag/crs/host01/crs/trace/alert.log

Another noticeable change is that the name of the clusterware alert log is now alert.log, as compared to alert<hostname>.log in 11g.

I would like to mention that I have verified the above only in 12.1.0.2 standard cluster.

In a 12.1.0.1 flex cluster though, the location and name of the alert log are the same as in 11g, i.e. $ORACLE_HOME/log/host01

[root@host01 host01]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]

[root@host01 host01]# ls -l $ORACLE_HOME/log/host01/alert*
-rw-rw-r-- 1 grid oinstall 497364 Aug 11 11:00 /u01/app/12.1.0/grid/log/host01/alerthost01.log

Conclusion:
12.1.0.2 standard cluster

  • Name of alert log: alert.log
  • Location of alert log: $ORACLE_BASE/diag/crs/host01/crs/trace

12.1.0.1 flex cluster

  • Name of alert log: alert<hostname>.log
  • Location of alert log: $ORACLE_HOME/log/host01

Hope it helps!

Please refer to the comments for further information.

References:
Oracle RAC 12c (12.1.0.2) Operational Best Practices (Oracle Presentation)

———————————————————————————————————————–

Related Links :

Home

12c RAC Index

Oracle 12.1.0.2 Standard Cluster: Empty Alert Log




The post Oracle 12.1.0.2c Standard Cluster: New Location / Name For Alert Log appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Registered for Oracle OpenWorld

Bobby Durrett's DBA Blog - Mon, 2015-08-10 13:41

I registered myself for Oracle OpenWorld and I have my hotel reserved and my flights ticketed.

I think it has been over 12 years – probably more like 15 years – since I went to OpenWorld. I went at least once between December 1994 and November 2003 when I still lived in Florida and was working on Oracle databases.  But since I moved from Florida I do not believe that I have been to the conference.  I have presented at Collaborate and ECOUG conferences since then.  I’m thinking that maybe next year I will try to present at the RMOUG conference.  I live in Arizona so RMOUG is close.  ECOUG was a nice distance when I still lived near the East Coast.  I like the smaller conferences and I have a better shot at getting a presentation accepted there.

But, this year it is OpenWorld and I am looking forward to it.  I may get a chance to interact with some Delphix employees and customers.  Also, I’m hoping to check out some technical presentations by the Oak Table members.  And it does not hurt to hear from Oracle itself on its technology.  No doubt there will be many of Oracle’s top technical leaders presenting.  And, any interaction I get with fellow DBA’s will be great.  It is always good to hear from people about their own experiences which may differ from mine.

Anyway, I’m all booked for OpenWorld.  Hope to see you there.

– Bobby

 

 

Categories: DBA Blogs

RMAN -- 7 : Recovery Through RESETLOGS -- how are the ArchiveLogs identified ?

Hemant K Chitale - Mon, 2015-08-10 03:21
When Oracle introduced "official" support for Recovery through RESETLOGS in 10g, it introduced the "%r" component of the log_archive_format parameter.  (This was not present in 9.2)

Therefore, in 10.2, we religiously included "%r" in the log_archive_format parameter.

However, if you use the FRA via USE_DB_RECOVERY_FILE_DEST, the resetlogs_id is not present in the filename and Oracle uses OMF naming rules for the archivelogs.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 154
Next log sequence to archive 156
Current log sequence 156
SQL> show parameter log_archive_format

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_format string %t_%s_%r.dbf
SQL> select name from v$archived_log where first_time > trunc(sysdate);

NAME
--------------------------------------------------------------------------------
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_152_bwjmnzt5_.arc
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_153_bwjmo4yp_.arc
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_154_bwjmoh29_.arc
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_155_bwjmxjyb_.arc

SQL>

Thus, as you can see in the above listing of archivelog filenames, the RESETLOGS_ID is *not* part of the archivelog filename.  (THREAD# and SEQUENCE# do still appear.)
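
As a side note (a sketch, not part of the original test), with an explicit LOCATION destination instead of the FRA the %r is used, so a name generated from %t_%s_%r.dbf would look like 1_156_884179148.dbf; the path below is hypothetical:

alter system set log_archive_dest_1='LOCATION=/NEW_FS/oracle/archdest' scope=both;
-- subsequent archivelogs in that destination carry thread#, sequence# and resetlogs_id in the name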

How, then, does Oracle (or do we ??) distinguish between archivelogs from one resetlogs and another ?  The archivelog sequence numbers do get reset to 1.
What matters  is that RESETLOGS_ID, RESETLOGS_CHANGE# and RESETLOGS_TIME are also incorporated into V$ARCHIVED_LOG.

SQL> select * from v$database_incarnation order by incarnation#;

INCARNATION# RESETLOGS_CHANGE# RESETLOGS PRIOR_RESETLOGS_CHANGE# PRIOR_RES
------------ ----------------- --------- ----------------------- ---------
STATUS RESETLOGS_ID PRIOR_INCARNATION# FLASHBACK_DATABASE_ALLOWED
------- ------------ ------------------ --------------------------
1 1 13-AUG-09 0
PARENT 694825248 0 NO

2 754488 30-OCT-09 1 13-AUG-09
PARENT 701609923 1 NO

3 14082620 04-JUL-15 754488 30-OCT-09
CURRENT 884179148 2 NO


SQL> select resetlogs_id, count(*) from v$archived_log group by resetlogs_id;

RESETLOGS_ID COUNT(*)
------------ ----------
884179148 153

SQL>

My database's current Incarnation is 3 and all the archivelogs are for this Incarnation.  What if I were to do a RESETLOGS and go to a new Incarnation ?

First, I generate some more archivelogs in the current incarnation.

SQL> create table hemant.test_recovery_thru (id number);

Table created.

SQL> insert into hemant.test_recovery_thru select rownum from dual connect by level < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 156
Next log sequence to archive 158
Current log sequence 158
SQL>

Then I shut down the database and do a RESETLOGS on OPEN.

SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 159
Next log sequence to archive 161
Current log sequence 161
SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> recover database until cancel;
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.

SQL>
SQL> select * from v$database_incarnation order by 1;

INCARNATION# RESETLOGS_CHANGE# RESETLOGS PRIOR_RESETLOGS_CHANGE# PRIOR_RES
------------ ----------------- --------- ----------------------- ---------
STATUS RESETLOGS_ID PRIOR_INCARNATION# FLASHBACK_DATABASE_ALLOWED
------- ------------ ------------------ --------------------------
1 1 13-AUG-09 0
PARENT 694825248 0 NO

2 754488 30-OCT-09 1 13-AUG-09
PARENT 701609923 1 NO

3 14082620 04-JUL-15 754488 30-OCT-09
PARENT 884179148 2 NO


INCARNATION# RESETLOGS_CHANGE# RESETLOGS PRIOR_RESETLOGS_CHANGE# PRIOR_RES
------------ ----------------- --------- ----------------------- ---------
STATUS RESETLOGS_ID PRIOR_INCARNATION# FLASHBACK_DATABASE_ALLOWED
------- ------------ ------------------ --------------------------
4 14184835 10-AUG-15 14082620 04-JUL-15
CURRENT 887387582 3 NO


SQL>

I now have a new incarnation (4) created today. Next, I generate a couple of archivelogs and check them:

SQL> alter system archive log current;

System altered.

SQL> alter system archivelog current;
alter system archivelog current
*
ERROR at line 1:
ORA-02065: illegal option for ALTER SYSTEM


SQL> alter system archive log current;

System altered.

SQL> select resetlogs_id, sequence#
2 from v$archived_log
3 where first_time > trunc(sysdate)
4 order by 1,2;

RESETLOGS_ID SEQUENCE#
------------ ----------
884179148 152
884179148 153
884179148 154
884179148 155
884179148 156
884179148 157
884179148 158
884179148 159
884179148 160
884179148 161
887387582 1
887387582 2

12 rows selected.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3
SQL>
SQL> select resetlogs_id, name from v$archived_log where first_time > trunc(sysdate) order by first_time;

RESETLOGS_ID
------------
NAME
--------------------------------------------------------------------------------
884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_152_bwjmnzt5_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_153_bwjmo4yp_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_154_bwjmoh29_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_155_bwjmxjyb_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_156_bwjr2nmj_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_157_bwjr2s1s_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_158_bwjr9pg5_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_159_bwjrb06z_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_160_bwjrb582_.arc

884179148
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_161_bwjrfz1j_.arc

887387582
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_1_bwjrhogp_.arc

887387582
/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_08_10/o1_mf_1_2_bwjrj0gf_.arc


12 rows selected.

SQL>

Notice that the first 10 archivelogs are from the RESETLOGS_ID corresponding to Incarnation 3. The THREAD# and SEQUENCE# do appear to be part of the file names, but NOT the RESETLOGS_ID.
The last two files now have SEQUENCE# 1 and 2 but no indication of the RESETLOGS_ID.  The RESETLOGS_ID is in V$ARCHIVED_LOG, not in the actual filename.
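
A sketch (my addition) of tying each archivelog back to its incarnation by joining the two views used above, since the filename itself carries no RESETLOGS_ID:

select i.incarnation#, a.resetlogs_id, a.thread#, a.sequence#, a.name
from v$archived_log a, v$database_incarnation i
where a.resetlogs_id = i.resetlogs_id
order by i.incarnation#, a.thread#, a.sequence#;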

.
.
.



Categories: DBA Blogs

Oracle 12.1.0.2c Standard cluster : Empty Alert Log

Oracle in Action - Sun, 2015-08-09 23:52

RSS content

I have set up an Oracle 12.1.0.2 standard two-node cluster called cluster01 with ASM storage, as shown:

[grid@host01 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled

[root@host01 ~]# olsnodes -c
cluster01

[root@host01 host01]# crsctl get cluster mode config
Cluster is configured as type "standard"

[grid@host01 ~]$ crsctl query crs activeversion;
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@host01 host01]# crsctl get cluster mode status
Cluster is running in "standard" mode

[root@host01 host01]# olsnodes -n
host01 1
host02 2

[root@host01 host01]# crsctl query css votedisk
##  STATE    File Universal Id                File Name         Disk group
--  -----    -----------------                ---------         ----------
 1. ONLINE   aa1ca556ae114f57bf44070be6a78656 (ORCL:ASMDISK01)  [DATA]
 2. ONLINE   ff91dd96594d4f3dbfcb9cff081e3438 (ORCL:ASMDISK02)  [DATA]
 3. ONLINE   815ddcab94d34f50bf318ba93e19951d (ORCL:ASMDISK03)  [DATA]
Located 3 voting disk(s).

[root@host01 host01]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA

[root@host01 host01]# crsctl check crs

CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

When I tried to check the contents of the cluster alert log, I was surprised to find an empty alert log.

[root@host01 host01]# ls -l /u01/app/12.1.0/grid/log/host01/alerthost01.log

-rw-rw-r-- 1 grid oinstall 0 Jun 15 14:10 /u01/app/12.1.0/grid/log/host01/alerthost01.log

It seems that this is a bug.




The post Oracle 12.1.0.2c Standard cluster : Empty Alert Log appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Mongostat – A Nifty Tool for Mongo DBAs

Pythian Group - Fri, 2015-08-07 12:08

One of the main tasks of a MongoDB DBA is to monitor the usage of the MongoDB system and its load distribution. This could be needed for proactive monitoring, troubleshooting during performance degradation, root cause analysis, or capacity planning.

Mongostat is a nifty tool which comes out of the box with MongoDB and provides a wealth of information in a nicely formatted, familiar way. If you have used vmstat, iostat, etc. on Linux, Mongostat should seem very familiar.

Mongostat dishes out statistics like counts of database operations by type (e.g. insert, query, update, delete, getmore). The vsize column  in Mongostat output shows the amount of virtual memory in megabytes used by the process. There are other very useful columns regarding network traffic, connections, queuing etc.

Following are some of the examples of running Mongostat.

[mongo@mongotest data]$ mongostat
insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:29
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:30
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:31
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:32
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:33
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:34
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:35
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:36
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:37
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:38

The following displays just 5 rows of output:

[mongo@mongotest data]$ mongostat -n 5
insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:45
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:46
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:47
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:48
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:49

In order to see full list of options:

[mongo@mongotest data]$ mongostat --help
Usage:
mongostat <options> <polling interval in seconds>

Monitor basic MongoDB server statistics.

See http://docs.mongodb.org/manual/reference/program/mongostat/ for more information.

general options:
--help                     print usage
--version                  print the tool version and exit

verbosity options:
-v, --verbose                  more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
--quiet                    hide all log output

connection options:
-h, --host=                    mongodb host to connect to (setname/host1,host2 for replica sets)
--port=                    server port (can also use --host hostname:port)

authentication options:
-u, --username=                username for authentication
-p, --password=                password for authentication
--authenticationDatabase=  database that holds the user's credentials
--authenticationMechanism= authentication mechanism to use

stat options:
--noheaders                don't output column names
-n, --rowcount=                number of stats lines to print (0 for indefinite)
--discover                 discover nodes and display stats for all
--http                     use HTTP instead of raw db connection
--all                      all optional fields
--json                     output as JSON rather than a formatted table

 

Discover more about our expertise in Big Data.

The post Mongostat – A Nifty Tool for Mongo DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #435: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-07 11:30

The sun of database technologies is shining through the cloud. Oracle, SQL Server, MySQL and various other databases are bringing forth some nifty offerings, and this Log Buffer Edition covers some of them.

Oracle:

  • How to create your own Oracle database merge patch.
  • Finally the work of a database designer will be recognized! Oracle has announced the Oracle Database Developer Choice Awards.
  • Oracle Documents Cloud Service R4: Why You Should Seriously Consider It for Your Enterprise.
  • Mixing Servers in a Server Pool.
  • Index compression–working out the compression number
  • My initial experience upgrading database from Oracle 11g to Oracle 12c (Part -1).

SQL Server:

  • The Evolution of SQL Server BI
  • Introduction to SQL Server 2016 Temporal Tables
  • Microsoft and Database Lifecycle Management (DLM): The DacPac
  • Display SSIS package version on the Control Flow design surface
  • SSAS DSV COM error from SSDT SSAS design Data Source View

MySQL:

  • If you run multiple MySQL instances on a Linux machine, chances are good that at one time or another, you’ve ended up connected to an instance other than what you had intended.
  • MySQL Group Replication: Plugin Version Access Control.
  • MySQL 5.7 comes with many changes. Some of them are better explained than others.
  • What Makes the MySQL Audit Plugin API Special?
  • Architecting for Failure – Disaster Recovery of MySQL/MariaDB Galera Cluster

 

Learn more about Pythian's expertise in Oracle, SQL Server and MySQL.

The post Log Buffer #435: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Partitioning in Hive Tables

Pythian Group - Fri, 2015-08-07 11:03

Partitioning a large table is a general practice for a few reasons:

  • Improving query efficiency by avoiding transferring and processing unnecessary data.
  • Improving data lineage by isolating ingestion batches, so if an ingestion batch fails for some reason and introduces corrupted data, it is safe to re-ingest that batch.

That being said, this practice often results in a table with a lot of partitions, which makes querying the full table or a large part of it a very slow operation. It also makes the Hive client executing the query "memory hungry". This is mainly caused by how Hive processes a query: before generating a query plan, the Hive client needs to read the metadata of all partitions. That means a lot of RPC round trips between the Hive client and the Hadoop namenode, as well as RDBMS transactions between the Hive client and the metastore. It's a slow process and it also consumes a lot of memory. A simple experiment using Hive 0.12 shows that it takes around 50 KB of heap space to store all the data structures for each partition. Below are two examples from a heap dump of a Hive client executing a query which touches 13k+ partitions.

 

[Screenshot: heap dump of a Hive client querying 13k+ partitions]

We can set HADOOP_HEAPSIZE in hive-env.sh to a larger number to keep ourselves out of trouble; HADOOP_HEAPSIZE is passed as the -Xmx argument to the JVM. But if we want to run multiple Hive queries at the same time on the same machine, we will run out of memory very quickly. Another thing to watch out for when increasing the heap size: if the parallel GC is used for the JVM, which is the default option for the Java server VM, and if the maximum GC pause time isn't set properly, a Hive client dealing with a lot of partitions will quickly grow its heap to the maximum and never shrink it back down.

Another potential problem of querying a large amount of partitions is that Hive uses CombineHiveInputFormat by default, which instructs Hadoop to combine all input files which are smaller than “split size” into splits. The algorithm used to do the combining is “greedy”. It bins larger files into splits first, then smaller ones. So the “last” couple of splits combined usually have a huge amount (depends on how unevenly the size of input files is distributed) of small files in them. As a result, those “unlucky” map tasks which get these splits will be very slow compared to other map tasks and consume a lot of memory to collect and process metadata of input files. Usually you can tell how bad the situation is by comparing SPLIT_RAW_BYTES counters of map tasks.
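
As an aside (a hedged sketch, not from the original post), these are the settings that typically control how CombineHiveInputFormat bins small files in that Hive/Hadoop generation; the byte values are only illustrative:

-- combine small files into splits of up to ~256 MB
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SET mapred.max.split.size=268435456;
SET mapred.min.split.size.per.node=134217728;
SET mapred.min.split.size.per.rack=134217728;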

A possible solution to this problem is creating two versions of that table: one partitioned, and one non-partitioned. The partitioned one is still populated the way it is today. The non-partitioned one can be populated in parallel with the partitioned one by using "INSERT INTO". One disadvantage of the non-partitioned version is that it is harder to revise if corrupted data is found in it, because in that case the whole table has to be rewritten. Though, starting with Hive 0.14, UPDATE and DELETE statements are allowed for tables stored in ORC format. Another possible problem with the non-partitioned version is that the table may contain a large number of small files on HDFS, because every "INSERT INTO" creates at least one file. As the number of files in the table increases, querying the table slows down. So periodic compaction is recommended to decrease the number of files in the table. It can be done by simply executing "INSERT OVERWRITE SELECT * FROM" periodically. You need to make sure no other inserts are being executed at the same time or data loss will occur.
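
To make the two-table pattern concrete, here is a minimal HiveQL sketch; the table names (sales_partitioned, sales_flat), the columns and the ingest_date partition key are all hypothetical:

-- non-partitioned copy of the partitioned table, stored as ORC
CREATE TABLE sales_flat (id BIGINT, product STRING, amount DOUBLE, ingest_date STRING)
STORED AS ORC;

-- populate it alongside each ingestion batch into the partitioned table
INSERT INTO TABLE sales_flat
SELECT id, product, amount, ingest_date
FROM sales_partitioned
WHERE ingest_date = '2015-08-07';

-- periodic compaction to keep the file count down
-- (make sure no other inserts run at the same time, or data will be lost)
INSERT OVERWRITE TABLE sales_flat
SELECT * FROM sales_flat;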

Learn more about Pythian’s expertise in Big Data.

The post Partitioning in Hive Tables appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Mongostat ; A Nifty Tool for Mongo DBA

Pakistan's First Oracle Blog - Thu, 2015-08-06 20:56
One of the main tasks of a MongoDB DBA is to monitor the usage of the MongoDB system and its load distribution. This could be needed for proactive monitoring, troubleshooting during performance degradation, root cause analysis, or capacity planning.

Mongostat is a nifty tool which comes out of the box with MongoDB and provides a wealth of information in a nicely formatted, familiar way. If you have used vmstat, iostat, etc. on Linux, Mongostat should seem very familiar.


Mongostat dishes out statistics like counts of database operations by type (e.g. insert, query, update, delete, getmore). The vsize column  in Mongostat output shows the amount of virtual memory in megabytes used by the process. There are other very useful columns regarding network traffic, connections, queuing etc.


Following are some of the examples of running mongostat.

[mongo@mongotest data]$ mongostat

insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:29
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:30
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:31
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:32
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:33

*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:34
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:35
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:36
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:37
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:38

The following displays just 5 rows of output:

[mongo@mongotest data]$ mongostat -n 5
insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time

*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:45
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:46
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:47
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:48
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:49

In order to see full list of options:

[mongo@mongotest data]$ mongostat --help

Usage:
mongostat <options> <polling interval in seconds>

Monitor basic MongoDB server statistics.

See http://docs.mongodb.org/manual/reference/program/mongostat/ for more information.

general options:

--help                     print usage
--version                  print the tool version and exit
verbosity options:
-v, --verbose                  more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
--quiet                    hide all log output
connection options:
-h, --host=                    mongodb host to connect to (setname/host1,host2 for replica sets)
--port=                    server port (can also use --host hostname:port)
authentication options:
-u, --username=                username for authentication
-p, --password=                password for authentication
--authenticationDatabase=  database that holds the user's credentials
--authenticationMechanism= authentication mechanism to use
stat options:
--noheaders                don't output column names
-n, --rowcount=                number of stats lines to print (0 for indefinite)
--discover                 discover nodes and display stats for all
--http                     use HTTP instead of raw db connection
--all                      all optional fields
--json                     output as JSON rather than a formatted table
Categories: DBA Blogs

Two great speakers coming up in fall for NEOOUG quarterly meetings

Grumpy old DBA - Sun, 2015-08-02 18:03
Full details out soon over at the NEOOUG website, but we have John King from Colorado speaking in September and Daniel Morgan speaking in November for our quarterly meetings.

These guys are both Oracle Ace Directors and great dynamic speakers; it should be a great time.

Complete information including bios and abstracts for the sessions should be out soon.

Thanks John
Categories: DBA Blogs

RMAN -- 6 : RETENTION POLICY and CONTROL_FILE_RECORD_KEEP_TIME

Hemant K Chitale - Sat, 2015-08-01 08:12
Most people read the documentation on CONTROL_FILE_RECORD_KEEP_TIME and believe that this parameter *guarantees* that Oracle will retain backup records for that long.  (Some do understand that backup records may be retained longer, depending on the availability of slots (or "records") for the various types of metadata in the controlfile).

However, .... as you should know from real-world experience ... there is always a "BUT".

Please read Oracle Support Note "How to ensure that backup metadata is retained in the controlfile when setting a retention policy and an RMAN catalog is NOT used. (Doc ID 461125.1)" and Bug 6458068

Oracle may need to "grow" the controlfile when adding information about ArchiveLogs or BackupSets / BackupPieces.
An example is this set of entries that occurred when I had created very many archivelogs and backuppieces for them :
Trying to expand controlfile section 13 for Oracle Managed Files
Expanded controlfile section 13 from 200 to 400 records
Requested to grow by 200 records; added 9 blocks of records


To understand the contents of the controlfile, see how this listing shows that I have space for 400 Backup Piece records and am currently using 232:

SQL> select * from v$controlfile_record_section where type like '%BACKUP%' order by 1;

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID
---------------------------- ----------- ------------- ------------ ----------- ---------- ----------
BACKUP CORRUPTION 44 371 0 0 0 0
BACKUP DATAFILE 200 245 159 1 159 159
BACKUP PIECE 736 400 232 230 61 261
BACKUP REDOLOG 76 215 173 1 173 173
BACKUP SET 40 409 249 1 249 249
BACKUP SPFILE 124 131 36 1 36 36

6 rows selected.

SQL>


However, if I start creating new Backup Pieces without deleting older ones (without Oracle auto-deleting older ones) and Oracle hits the allocation of 400 records, it may try to add new records.  Oracle then prints a message (as shown above) into the alert.log.  Oracle may overwrite records older than control_file_record_keep_time.  If necessary, it tries to expand the controlfile. If, however, there is not enough filesystem space (or space in the raw device or ASM DiskGroup) to expand the controlfile, it may have to overwrite some records in the controlfile.  If it has to overwrite records that are older than control_file_record_keep_time, it provides no warning.  However, if it has to overwrite records that are not older than control_file_record_keep_time, it *does* write a warning to the alert.log.
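
A quick check you could run (my addition, a sketch): compare the keep time with how full the relevant controlfile record sections are, so an imminent expansion or overwrite does not catch you by surprise:

show parameter control_file_record_keep_time

select type, records_total, records_used
from v$controlfile_record_section
where type in ('ARCHIVED LOG','BACKUP PIECE','BACKUP SET')
order by type;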

I don't want to violate the Oracle Support policy by quoting from the Note and the Bug, but I urge you to read both very carefully.  The Note has a particular line about whether there is a relationship between the setting of control_file_record_keep_time and the Retention Policy.  In the Bug, there is one particular line about whether the algorithm to extend / reuse / purge records in the controlfile is or is not related to the Retention Policy.  So it IS important to ensure that you have enough space for the controlfile to grow in case it needs to expand space for these records.

Also, remember that not all Retention Policies are defined in terms of days.  Some may be defined in terms of REDUNDANCY (the *number* of Full / L0 backups that are not to be obsoleted).  This does NOT relate to the number of days because Oracle can't predict how many backups you run in a day / in a week / in a month.  Take an organisation with a small database that runs 3 Full / L0 backups per day versus another with a very large database that runs a Full / L0 backup only once a fortnight !  How many days of Full / L0 backups would each have to retain if REDUNDANCY is set to, say, 3 ?

.
.
.




Categories: DBA Blogs

FASTSYNC Redo Transport for Data Guard in #Oracle 12c

The Oracle Instructor - Sat, 2015-08-01 04:11

FASTSYNC is a new LogXptMode for Data Guard in 12c. It enables Maximum Availability protection mode at larger distances with less performance impact than LogXptMode SYNC has had before. The old SYNC behavior looks like this:

[Figure: LogXptMode=SYNC]

The point is that we need to wait for two acknowledgements by RFS (got it & wrote it) before we can write the redo entry locally and get the transaction committed. This may slow down the speed of transactions on the Primary, especially with long distances. Now to the new feature:

[Figure: LogXptMode=FASTSYNC]

Here, we wait only for the first acknowledgement (got it) by RFS before we can write locally. There is still a possible performance impact with large distances here, but it is less than before. This is how it looks implemented:

DGMGRL> show configuration;   

Configuration - myconf

  Protection Mode: MaxAvailability
  Members:
  prima - Primary database
    physt - (*) Physical standby database 

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> show database physt logxptmode
  LogXptMode = 'fastsync'
DGMGRL> exit
[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Aug 1 10:41:27 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", SYNC NOAFFIRM
						  delay=0 optional compression=
						 disable max_failure=0 max_conn
						 ections=1 reopen=300 db_unique
						 _name="physt" net_timeout=30,
						 valid_for=(online_logfile,all_
						 roles)
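
As the SHOW PARAMETER output above indicates, the Broker's FASTSYNC simply translates into SYNC NOAFFIRM on the destination. A minimal sketch of setting that directly, for a configuration that is not Broker-managed (attribute list trimmed compared to the Broker-generated one above):

alter system set log_archive_dest_2='service="physt" SYNC NOAFFIRM net_timeout=30 valid_for=(online_logfile,all_roles) db_unique_name="physt"' scope=both;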

My configuration uses Fast-Start Failover, just to show that this is no restriction. Possible but not required is the usage of FASTSYNC together with Far Sync Instances. You can’t have Maximum Protection with FASTSYNC, though:

DGMGRL> disable fast_start failover;
Disabled.
DGMGRL> edit configuration set protection mode as maxprotection;
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode

Failed.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.

Addendum: As my dear colleague Joel Goodman pointed out, the name of the process that does the Redo Transport from Primary to Standby has changed from LNS to NSS (for synchronous Redo Transport):

SQL> select name,description from v$bgprocess where paddr<>'00';

NAME  DESCRIPTION
----- ----------------------------------------------------------------
PMON  process cleanup
VKTM  Virtual Keeper of TiMe process
GEN0  generic0
DIAG  diagnosibility process
DBRM  DataBase Resource Manager
VKRM  Virtual sKeduler for Resource Manager
PSP0  process spawner 0
DIA0  diagnosibility process 0
MMAN  Memory Manager
DBW0  db writer process 0
MRP0  Managed Standby Recovery
TMON  Transport Monitor
ARC0  Archival Process 0
ARC1  Archival Process 1
ARC2  Archival Process 2
ARC3  Archival Process 3
ARC4  Archival Process 4
NSS2  Redo transport NSS2
LGWR  Redo etc.
CKPT  checkpoint
RVWR  Recovery Writer
SMON  System Monitor Process
SMCO  Space Manager Process
RECO  distributed recovery
LREG  Listener Registration
CJQ0  Job Queue Coordinator
PXMN  PX Monitor
AQPC  AQ Process Coord
DMON  DG Broker Monitor Process
RSM0  Data Guard Broker Resource Guard Process 0
NSV1  Data Guard Broker NetSlave Process 1
INSV  Data Guard Broker INstance SlaVe Process
FSFP  Data Guard Broker FSFO Pinger
MMON  Manageability Monitor Process
MMNL  Manageability Monitor Process 2

35 rows selected.

I'm not quite sure, but I think that was already the case in 11gR2. I just kept the old name in the sketches out of habit :-)


Tagged: 12c New Features, Data Guard
Categories: DBA Blogs

Log Buffer #434: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-31 09:55

This Log Buffer Edition throws spotlight on some of the salient blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • STANDARD date considerations in Oracle SQL and PL/SQL
  • My good friend, Oracle icon Karen Morton passed away.
  • Multiple invisible indexes on the same column in #Oracle 12c
  • Little things worth knowing: Data Guard Broker Setup changes in 12c
  • Things that are there but you cannot use

SQL Server:

  • Dynamic Grouping in SSRS Reports
  • SQL 2014 Clustered Columnstore index rebuild and maintenance considerations
  • SQL Server 2016 CTP2
  • Azure SQL Database Security Features
  • Visualize the timeline of your SQL jobs using Google graph and email

MySQL:

  • Shinguz: Max_used_connections per user/account
  • Generally in MySQL we send queries massaged to a point where optimizer doesn’t have to think about anything.
  • Replication is the process that transfers data from an active master to a slave server, which reproduces the data stream to achieve, as best as possible, a faithful copy of the data in the master.
  • Unknown column ‘smth’ in ‘field list’ -> Oldie but goodie error
  • Why base64-output=DECODE-ROWS does not print row events in MySQL binary logs

Learn more about Pythian's expertise in Oracle, SQL Server and MySQL.

The post Log Buffer #434: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Less Performance Impact with Unified Auditing in #Oracle 12c

The Oracle Instructor - Fri, 2015-07-31 04:56

There is a new auditing architecture in place with Oracle Database 12c, called Unified Auditing. Why would you want to use it? Because it has significantly less performance impact than the old approach. We now buffer audit records in the SGA and write them asynchronously to disk; that's the trick.
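
Before the demo, a hedged sketch of what the unified syntax looks like for auditing SELECTs (this is my illustration, not the author's scripts; the ADAM.SALES table is the one used in the tests below):

create audit policy sales_select_pol actions select on adam.sales;
audit policy sales_select_pol;

-- the records then show up in the unified trail
select sql_text, sql_binds from unified_audit_trail where object_name = 'SALES';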

Other benefits of the new approach are that we now have one centralized way (and one syntax) to deal with all the various auditing features that have been introduced over time, like Fine-Grained Auditing etc. But the key improvement in my opinion is the reduced performance impact, because that was often hurting customers in the past. Let's see it in action! First, I will record a baseline without any auditing:

 

[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jul 31 08:54:32 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select value from v$option where parameter='Unified Auditing';

VALUE
----------------------------------------------------------------
FALSE

SQL> @audit_baseline
Connected.

Table truncated.


Noaudit succeeded.


PL/SQL procedure successfully completed.

Connected.

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.07
Connected.

PL/SQL procedure successfully completed.
SQL> host cat audit_baseline.sql
connect / as sysdba
truncate table aud$;
noaudit select on adam.sales;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

So that is just 100k single-row SELECTs against a roughly 600 MB table with an index on ID, and no auditing so far (a sketch of a comparable test setup follows).
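
The adam.sales table itself is not shown in the post, so purely as an illustration, a comparable test table could be built roughly like this (table and column names, row count and padding are my assumptions, sized so the segment lands in the few-hundred-MB range):

create table adam.sales as
select rownum as id,
       'Product '||mod(rownum,100) as product,
       rpad('x',500,'x') as padding   -- pad each row so one million rows reach roughly 600 MB
from dual
connect by level <= 1000000;

create index adam.sales_id_ix on adam.sales(id);

exec dbms_stats.gather_table_stats('ADAM','SALES')

Key sections of the AWR report for the baseline: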

(AWR report screenshots: unified_auditing1, unified_auditing2)

The most resource-consuming SQL in that period was the AWR snapshot itself. Now let’s see how the old way of auditing impacts performance here:

SQL>  show parameter audit_trail

NAME_COL_PLUS_SHOW_PARAM                 TYPE        VALUE_COL_PLUS_SHOW_PARAM
---------------------------------------- ----------- ----------------------------------------
audit_trail                              string      DB, EXTENDED
SQL> @oldaudit
Connected.

Table truncated.


Audit succeeded.


PL/SQL procedure successfully completed.

Connected.

PL/SQL procedure successfully completed.

Elapsed: 00:00:56.42
Connected.

PL/SQL procedure successfully completed.
SQL> host cat oldaudit.sql
connect / as sysdba
truncate table aud$;
audit select on adam.sales by access;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

That was almost 10 times slower! The AWR report confirms that and shows why it is so much slower now:

(AWR report screenshots: unified_auditing3, unified_auditing4)

It’s because of the 100k inserts into the audit trail, done synchronously with the SELECTs. The audit trail shows them here:

 

SQL> select sql_text,sql_bind from dba_audit_trail where rownum<=10; 
SQL_TEXT                                           SQL_BIND 
-------------------------------------------------- ---------- 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):1 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):2 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):3 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):4 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):5 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):6 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):7 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):8 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(1):9 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1         #1(2):10 
10 rows selected. 
SQL> select count(*) from dba_audit_trail where sql_text like '%SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1%';

  COUNT(*)
----------
    100000

Now I will turn on Unified Auditing – that requires relinking the Oracle binaries while the database is down.
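
For reference, a minimal sketch of how that relink looks on Linux/Unix for a single-instance home (shut the database down first, and check the platform-specific steps in the documentation before running it):

SQL> shutdown immediate

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk uniaud_on ioracle    # relink the oracle binary with Unified Auditing enabled

SQL> startup

Afterwards: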

SQL> select value from v$option where parameter='Unified Auditing';

VALUE
----------------------------------------------------------------
TRUE

SQL> @newaudit
Connected.

Audit policy created.


Audit succeeded.


PL/SQL procedure successfully completed.

Connected.

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.90
Connected.

PL/SQL procedure successfully completed.
SQL> host cat newaudit.sql
connect / as sysdba
create audit policy audsales actions select on adam.sales;
audit policy audsales;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

That was still slower than the baseline, but much better than with the old method! Let’s see the AWR report for the last run:

(AWR report screenshots: unified_auditing5, unified_auditing6)

Similar to the first (baseline) run, the snapshot is the most resource-consuming SQL during the period. Both DB time and elapsed time are far shorter than with the old audit architecture. The 100k SELECTs, together with their bind variables, have been captured here as well:

SQL> select sql_text,sql_binds from unified_audit_trail where rownum<=10; 
SQL_TEXT                                                     SQL_BINDS 
------------------------------------------------------------ ---------- 
ALTER DATABASE OPEN 
create audit policy audsales actions select on adam.sales 
audit policy audsales 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):1 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):2 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):3 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):4 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):5 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):6 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):7 
10 rows selected. 
SQL> select count(*) from unified_audit_trail where sql_text like '%SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1%';

  COUNT(*)
----------
    100000
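
In case you want to double-check what is actually switched on, and make sure queued records have reached disk before running ad-hoc queries, here is a quick sketch (view and procedure names as they exist in 12.1; verify them on your release):

SQL> select user_name, policy_name, enabled_opt from audit_unified_enabled_policies;

SQL> exec dbms_audit_mgmt.flush_unified_audit_trail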

The first three lines above show that SYS operations are also recorded in the same (Unified!) Audit Trail, by the way. There is much more to say and to learn about Unified Auditing, of course, but this may give you some motivation to evaluate it, especially if auditing-related performance issues have hurt you in the past. As always: Don’t believe it, test it! :-)


Tagged: 12c New Features, Performance Tuning, security
Categories: DBA Blogs

Why Move to Cassandra?

Pythian Group - Thu, 2015-07-30 12:05

Nowadays Cassandra is getting a lot of attention, and we’re seeing more and more examples of companies moving to Cassandra. Why is this happening? Why are companies with solid IT structures and internal knowledge shifting, not only to a different paradigm (Read: NoSQL vs SQL), but also to completely different software? Companies don’t simply move to Cassandra because they feel like it. A drive or need must exist. In this post, I’m going to review a few use cases and highlight some of the interesting parts to explain why these particular companies adopted Cassandra. I will also try to address concerns about Cassandra in enterprise environments that have critical SLAs and other requirements. And at the end of this post, I will go over our experience with Cassandra.

Cassandra Use Cases

Instagram

Cutting costs. How? Instagram was using an in-memory database before moving to Cassandra. Memory is expensive compared to disk. So if you do not need the advanced performance of an in-memory datastore, Cassandra can deliver the performance you need and help you save money on storage costs. Plus, as mentioned in the use case, Cassandra allows Instagram to continually add data to the cluster.  They also loved Cassandra’s reliability and availability features.

Ebay

Cassandra proved to be the best technology, among the ones they tested, for their scaling needs. With Cassandra, Ebay can look up historical behavioral data quickly and update their recommendation models with low latency. Ebay has deployed Cassandra across multiple data centers.

Spotify

Spotify moved to Cassandra because it’s a highly reliable and easily scalable datastore. Their old datastore was not able to keep up with the volume of writes and reads they had. Cassandra’s scalability with its multi-datacenter replication, plus its reliability, proved to be a hit for them.

Comcast

They were looking for three things: scale, availability, and active-active operation. Only Cassandra provided all of them. Their transition to Cassandra went smoothly, and they enjoy the ease of development Cassandra offers.

Cassandra brings something new to the game

NoSQL existed before Cassandra. There were also other mature technologies when Cassandra was released. So why didn’t companies move to those technologies?

Like the subtitle says, Cassandra brings something new to the game. In my experience, and as discussed in some of the use cases above, one of its strongest points is ease of use. Once you know how to configure Cassandra, it’s almost “fire-and-forget”: it just works. In an era like ours, where new technologies appear every day, on different stacks and with different dependencies, Cassandra’s straightforward installation and basic configuration are refreshing, which leads us to…

Scalability!! Yes, it scales linearly. This level of scalability, combined with its ease of deployment, takes your infrastructure to another level.

Last but not least, Cassandra is highly flexible. You can tune consistency per operation: each read or write can use its own consistency level. Need more speed? Pick a lower consistency level. Want stronger data integrity? Push the consistency level up. It is up to you, your project, and your requirements, and you can change it easily.

Also, don’t forget its other benefits: open source, free, geo-replication, low latency, and more.

Pythian’s Experience with Cassandra

Cassandra is not without its challenges. Like I said earlier, it is a new technology that makes you think differently about databases. And because it is easy to deploy and work with, it can lead to mistakes that seriously impact scalability and application/service performance once things start to scale.

And that is where we come in. We ensure that companies just starting out with Cassandra have well-built and well-designed deployments, so they don’t run into these problems. Starting with a solid architecture plan for a Cassandra deployment and the correct data model can make a whole lot of difference!

We’ve seen some deployments that started out well but, without proper maintenance, fell into some of the pitfalls or edge cases mentioned above. We help out by fixing the problem and/or recommending changes to the original deployment, so it keeps performing well without issues! And because Cassandra delivers high resilience, many of these problems can be solved without downtime.

Thinking about moving to Cassandra? Not sure if open source or enterprise is right for you? Need project support? Schedule a free assessment so we can help you with next steps!

The post Why Move to Cassandra? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs