
DBA Blogs

Hive (HiveQL) SQL for Hadoop Big Data

Kubilay Çilkara - Thu, 2015-06-25 13:30


In this post I will share my experience with an Apache Hadoop component called Hive, which enables you to run SQL on an Apache Hadoop Big Data cluster.

Being a great fan of SQL and relational databases, this was my opportunity to set up a mechanism for transferring some (a lot of) data from a relational database into Hadoop and querying it with SQL. Not a very difficult thing to do these days; in fact it is very easy with Apache Hive!

All you need is access to a Hadoop cluster with the Hive module installed. You can provision a Hadoop cluster yourself by downloading and installing it in pseudo-distributed mode on your own PC, or you can run one in the cloud with Amazon AWS EMR in a pay-as-you-go fashion.

There are many ways of doing this; just Google it and you will be surprised how easy it is, easier than it sounds. To install it on your own PC (Linux), just download and install Apache Hadoop and Hive from Apache Hadoop Downloads.

You will need to download and install three things from the above link:

  • Hadoop (HDFS and Big Data Framework, the cluster)
  • Hive (data warehouse module)
  • Sqoop (data importer)
You will also need to put the JDBC connector for the database you want to extract data from (Oracle, MySQL...) in the */lib folder of your Sqoop installation; for example, the MySQL JDBC connector can be downloaded from here. Don't expect loads of tinkering when installing Apache Hadoop, Hive or Sqoop: it is just unzipping binary extracts and making a few line changes to some config files, that's all. It is not a big deal, and it is free. There are tons of tutorials on the internet about this; here is one I used from another blogger, bogotobogo.
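To give a feel for how little is involved, here is a minimal sketch of the steps; the version numbers, paths and archive names below are illustrative placeholders, not the exact ones you will download:

$ tar xzf hadoop-2.6.0.tar.gz -C /opt                    # unpack the binary extracts
$ tar xzf apache-hive-1.2.0-bin.tar.gz -C /opt
$ tar xzf sqoop-1.4.6.tar.gz -C /opt
$ export HADOOP_HOME=/opt/hadoop-2.6.0                   # a few environment variables
$ export HIVE_HOME=/opt/apache-hive-1.2.0-bin
$ export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin:/opt/sqoop-1.4.6/bin
$ cp mysql-connector-java-5.1.35-bin.jar /opt/sqoop-1.4.6/lib/   # the JDBC connector for Sqoop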


What is Hive?

Hive is Big Data SQL, the data warehouse in Hadoop. You can create tables, indexes, partitioned tables, external tables and views, just like in a relational database data warehouse. You can run SQL to do joins and to query the Hive tables in parallel using the MapReduce framework. It is actually quite fun to see your SQL queries translate into MapReduce jobs and run in parallel, like the parallel SQL queries we run on Oracle EE data warehouses and other databases. :0) The syntax looks very much like MySQL's SQL syntax.
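To give a hypothetical flavour of that syntax (the table and column names here are made up for illustration), a partitioned table and a join in HiveQL could look like this:

hive> CREATE TABLE orders (id INT, customer_id INT, total DECIMAL(10,2))
    > PARTITIONED BY (order_date STRING)
    > STORED AS ORC;

hive> SELECT c.name, SUM(o.total)
    > FROM orders o JOIN customers c ON (o.customer_id = c.id)
    > WHERE o.order_date = '2015-06-01'
    > GROUP BY c.name;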

Hive is NOT an OLTP transactional database: it does not support the INSERT, UPDATE and DELETE transactions of an OLTP system, and it does not conform to ANSI SQL or to the ACID properties of transactions.


Direct insert into Hive with Apache Sqoop:
After you have installed Hadoop, have Hive set up and are able to log in to it, you can use Sqoop, the data importer of Hadoop, as in the following command to directly import a table from MySQL via JDBC into Hive using MapReduce.
$ sqoop import --connect jdbc:mysql://mydatabasename --username kubilay -P --table mytablename --hive-import --hive-drop-import-delims --hive-database dbadb --num-mappers 16 --split-by id
Sqoop import options explained:
  • -P prompts for the password
  • --hive-import makes Sqoop import the data straight into a Hive table, which it creates for you
  • --hive-drop-import-delims drops \n, \r and \01 from string fields when importing to Hive
  • --hive-database tells it which Hive database to import into; otherwise the data goes to the default database
  • --num-mappers sets the number of parallel map tasks to run, like parallel processes/threads in SQL
  • --split-by names the column of the table used to split work units, like a partitioning key in database partitioning
The above command will import whichever MySQL table you give in place of mytablename, from the MySQL database you specify, into Hive using MapReduce.

Once you have imported the table you can log in to Hive and run SQL against it as in any relational database. On a properly configured system you can log in to Hive just by calling hive from the command line, like this:

$ hive
hive> 
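From there the usual statements work against the imported table; for example, using the dbadb database from the Sqoop command above (a minimal sketch):

hive> USE dbadb;
hive> SHOW TABLES;
hive> SELECT COUNT(*) FROM mytablename;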


More commands to list jobs:

A couple of other commands I found useful while experimenting with this:

List running Hadoop jobs

hadoop job -list

Kill running Hadoop jobs

hadoop job -kill job_1234567891011_1234

List particular table directories in HDFS

hadoop fs -ls mytablename
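Note that a bare name like the above is resolved relative to your HDFS home directory. Hive-managed tables normally live under the Hive warehouse directory, so assuming the default warehouse location and the dbadb database used earlier, the full path would be:

hadoop fs -ls /user/hive/warehouse/dbadb.db/mytablename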


Categories: DBA Blogs

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Up The Hill Backwards)

Richard Foote - Thu, 2015-06-25 01:02
OK, time for a little quiz. One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance. Or so the theory goes …   :) In many cases, this drives DBAs to […]
Categories: DBA Blogs

Building Simple Java EE REST Service Using Oracle JDeveloper 12c

REST (Representational State Transfer) – an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Empty Leaf Blocks After Rollback Part II (Editions of You)

Richard Foote - Wed, 2015-06-24 01:35
In my last post, I discussed how both 1/2 empty and totally empty leaf blocks can be generated by rolling back a bulk update operation. An important point I made within the comments of the previous post is that almost the exact scenario would have taken place had the transaction committed rather than rolled back. A […]
Categories: DBA Blogs

RMAN - 3 : The DB_UNIQUE_NAME in Backups to the FRA

Hemant K Chitale - Tue, 2015-06-23 03:14
When you run RMAN Backups to the FRA without using the FORMAT clause, Oracle automatically generates filenames for the BackupPieces.  The folder name is derived from the system date.  But what is the parent folder for backups?  Is it simply the DB_RECOVERY_FILE_DEST?  Actually, the DB_UNIQUE_NAME comes into play as well.

For example :

[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Jun 23 16:57:19 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup of datafile 1;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 831.23M DISK 00:03:32 07-JUN-15
BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20150607T165914
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_nnndf_TAG20150607T165914_bq81z2y6_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14068320 07-JUN-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3 Full 366.89M DISK 00:01:56 07-JUN-15
BP Key: 3 Status: AVAILABLE Compressed: YES Tag: TAG20150607T170754
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_nnndf_TAG20150607T170754_bq82hc5f_.bkp
List of Datafiles in backup set 3
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14068721 07-JUN-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Jun 23 16:58:34 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /NEW_FS/oracle/FRA
db_recovery_file_dest_size big integer 8G
SQL>

We can see that the DB_RECOVERY_FILE_DEST is defined as "/NEW_FS/oracle/FRA". However, the backups go into a "backupset" folder under "/NEW_FS/oracle/FRA/ORCL/". The "ORCL" is part of the path to the folder holding the backups. How is this "ORCL" derived?

SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string orcl
SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string orcl
SQL>

By default, the DB_UNIQUE_NAME is the same as DB_NAME. Let's see what happens after I change the DB_UNIQUE_NAME.

SQL> 
SQL> !ls -l /NEW_FS/oracle/FRA/
total 4
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>
SQL> alter system set db_unique_name='HEMANTDB' scope=SPFILE;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 385878712 bytes
Database Buffers 62914560 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.
SQL> !ls -l /NEW_FS/oracle/FRA/
total 4
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>

After resetting the DB_UNIQUE_NAME, Oracle doesn't create the folder for the new DB_UNIQUE_NAME until I run an RMAN Backup.

RMAN> exit

RMAN-06900: WARNING: unable to generate V$RMAN_STATUS or V$RMAN_OUTPUT row
RMAN-06901: WARNING: disabling update of the V$RMAN_STATUS and V$RMAN_OUTPUT rows
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3344
Session ID: 67 Serial number: 13


Recovery Manager complete.
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Jun 23 17:07:14 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup datafile 1;

Starting backup at 23-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=27 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=38 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: starting piece 1 at 23-JUN-15
channel ORA_DISK_1: finished piece 1 at 23-JUN-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_06_23/o1_mf_nnndf_TAG20150623T170721_brl8g9od_.bkp tag=TAG20150623T170721 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:25
Finished backup at 23-JUN-15

Starting Control File and SPFILE Autobackup at 23-JUN-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_06_23/o1_mf_s_883156126_brl8k0w4_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 23-JUN-15

RMAN>

SQL> !ls -l /NEW_FS/oracle/FRA/
total 8
drwxrwx--- 3 oracle oracle 4096 Jun 23 17:07 HEMANTDB
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>
SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string orcl
SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string HEMANTDB
SQL>

Notice how Oracle created the "HEMANTDB" folder under the designated DB_RECOVERY_FILE_DEST, and then created the "backupset" and "autobackup" folders as subfolders under it.  BackupSet BackupPieces and Controlfile Autobackups are now going to the new path.  The backups go to folders under {DB_RECOVERY_FILE_DEST}/{DB_UNIQUE_NAME}
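So, if you ever need to locate the on-disk backups for a database, checking the two parameters together tells you where to look; for example:

SQL> select name, value from v$parameter where name in ('db_recovery_file_dest','db_unique_name');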
.
.
.



Categories: DBA Blogs

Empty Leaf Blocks After Rollback Part I (Empty Spaces)

Richard Foote - Tue, 2015-06-23 00:09
There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after large update that was rolled back“. Setting aside the odd scenario of updating a column that previously had 20 million distinct values to the same value on a 2 billion row table, the key questions raised are why the blank index leaf blocks […]
Categories: DBA Blogs

Log Buffer #428: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-06-22 11:45

The Log Buffer Edition once again is sparkling with some gems, hand-picked from Oracle, SQL Server and MySQL.

Oracle:

  • Oracle GoldenGate 12.1.2.1.1  is now certified with Unity 14.10.  With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity which can then automate the distribution of data to multiple Teradata databases.
  • How do I change DNS servers on Exadata storage servers.
  • Flushing Shared Pool Does Not Slow Its Growth.
  • Code completion is the key feature you need when adding support for your own JavaScript framework to NetBeans IDE.
  • Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync.

SQL Server:

  • Trigger an Email of an SSRS Report from an SSIS Package.
  • Script All Server Level Objects to Recreate SQL Server.
  • A Syntax Mystery in a Previously Working Procedure.
  • Using R to Explore Data by Analysis – for SQL Professionals.
  • Converting Rows to Columns (PIVOT) and Columns to Rows (UNPIVOT) in SQL Server.

MySQL:

  • Some applications, particularly those written with a single-node database server in mind, attempt to immediately read a value they have just inserted into the database, without making those operations part of a single transaction. A read/write splitting proxy or a connection pool combined with a load-balancer can direct each operation to a different database node.
  • Q&A: High availability when using MySQL in the cloud.
  • MariaDB 10.0.20 now available.
  • Removal and Deprecation in MySQL 5.7.
  • Getting EXPLAIN information from already running queries in MySQL 5.7.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Index Tree Dumps in Oracle 12c Database (New Age)

Richard Foote - Sun, 2015-06-21 23:56
I’ve previously discussed Index Tree Dumps but I’ve recently found a nice little improvement that’s been introduced in Oracle Database 12c. Let’s begin by creating a little table and index: To generate an Index Tree Dump, we first need the OBJECT_ID of the index: And then use it to generate the Index Tree Dump: Previously, an […]
Categories: DBA Blogs

Creating Oracle Service Bus 12c Proxy Service to Decouple JCA Database Adapter Business Services

In an earlier blog I showed how to service enable a database on OSB 12c . The business service created as we saw is strongly coupled to the database. A change on the database (table name, data type...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Flushing Shared Pool Does Not Slow Its Growth

Bobby Durrett's DBA Blog - Thu, 2015-06-18 17:14

I’m still working on resolving the issues caused by bug 13914613.

Oracle support recommended that we apply a parameter change to resolve the issue, but that change requires us to bounce the database, and I was looking for a resolution that does not need a bounce.  The bug caused very bad shared pool latch waits when the automatic memory management feature of our 11.2.0.3 database expanded the shared pool.  Oracle support recommended setting _enable_shared_pool_durations=false, and I verified that changing this parameter requires a bounce.  It is a big hassle to bounce this database because of the application, so I thought that I might try flushing the shared pool on a regular basis so that automatic memory management would not need to keep increasing the size of the shared pool.  The shared pool was growing in size because we have a lot of SQL statements without bind variables.  So, I did a test, and in my test flushing the shared pool did not slow the growth of the shared pool.
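For reference, the recommended change itself would be applied like this; note that underscore parameters have to be quoted, and SCOPE=SPFILE is required since, as verified, the parameter cannot be changed in memory:

SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile;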

Here is a zip of the scripts I used for this test and their outputs: zip

I set the shared pool to a small value so it was more likely to grow and I created a script to run many different sql statements that don’t use bind variables:

spool runselects.sql

select 'select * from dual where dummy=''s'
||to_char(sysdate,'HHMISS')||rownum||''';'
from dba_objects;

spool off

@runselects

So, the queries looked like this:

select * from dual where dummy='s0818111';
select * from dual where dummy='s0818112';
select * from dual where dummy='s0818113';
select * from dual where dummy='s0818114';
select * from dual where dummy='s0818115';
select * from dual where dummy='s0818116';
select * from dual where dummy='s0818117';

I ran these for an hour and tested three different configurations.  The first two did not use the _enable_shared_pool_durations=false setting and the last did.  The first test was a baseline that showed the growth of the shared pool without flushing the shared pool.  The second test included a flush of the shared pool every minute.  The last run included the parameter change and no flush of the shared pool.  I queried V$SGA_RESIZE_OPS after each test to see how many times the shared pool grew.  Here is the query:

SELECT OPER_TYPE,FINAL_SIZE Final,
to_char(start_time,'dd-mon hh24:mi:ss') Started, 
to_char(end_time,'dd-mon hh24:mi:ss') Ended 
FROM V$SGA_RESIZE_OPS
where component='shared pool'
order by start_time,end_time;

Here are the results.

Baseline – no flush, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      150,994,944 18-jun 05:03:54 18-jun 05:03:54
GROW      134,217,728 18-jun 05:03:54 18-jun 05:03:54
STATIC    117,440,512 18-jun 05:03:54 18-jun 05:03:54
GROW      167,772,160 18-jun 05:04:36 18-jun 05:04:36
GROW      184,549,376 18-jun 05:47:38 18-jun 05:47:38

Flush every minute, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      134,217,728 18-jun 06:09:15 18-jun 06:09:15
GROW      150,994,944 18-jun 06:09:15 18-jun 06:09:15
STATIC    117,440,512 18-jun 06:09:15 18-jun 06:09:15
GROW      167,772,160 18-jun 06:09:59 18-jun 06:09:59
GROW      184,549,376 18-jun 06:22:26 18-jun 06:22:26
GROW      201,326,592 18-jun 06:42:29 18-jun 06:42:29
GROW      218,103,808 18-jun 06:47:29 18-jun 06:47:29

Parameter change, no flush:

OPER_TYPE        FINAL STARTED         ENDED
--------- ------------ --------------- ---------------
STATIC     117,440,512 18-jun 07:16:09 18-jun 07:16:09
GROW       134,217,728 18-jun 07:16:18 18-jun 07:16:18

So, at least in this test, which I have run only twice, flushing the shared pool, if anything, makes the growth of the shared pool worse.  But changing the parameter seems to lock the size in.

– Bobby

Categories: DBA Blogs

Partner Webcast – Next Generation IoT real time applications with Oracle Stream Explorer

The Internet of Things (IoT), or rather "The Internet of Everything", is one of the key drivers transforming the world from analogue to digital. Cheap sensors, cheap bandwidth, cheap processing,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Create #em12c users fast and easy!

DBASolved - Thu, 2015-06-18 11:57

Over the last few months, I’ve been working on a project where I’ve started to dive into EM CLI and the value it brings to cutting down on tasks like creating Enterprise Manager users. Hence the reason for this post.

Note: If you haven’t looked into EM CLI yet, I encourage you to do so. A good starting point is here. Plus there is a whole book written on the topic by some friends and gurus of mine, here.

Creating users in Enterprise Manager 12c is pretty simple as it is. Simply go to Setup -> Security -> Administrators. When you get to this screen, click on either the Create or Create Like button.

After clicking Create or Create Like, Enterprise Manager takes you to a five-step wizard for creating a user. This wizard allows you to provide details about the user, assign roles, assign target privileges, assign resource privileges and then review what you have done.

Depending on how many users you have to create, this wizard is either a great way or a slow way of creating users. Using EM CLI, users can be created from the command line very quickly and easily, with no need for the GUI wizard. :)

The syntax to create a user from the command line is as follows:

emcli create_user
-name="name"
-password="password"
[-type="user_type"]
[-roles="role1;role2;..."]
[-email="email1;email2;..."]
[-privilege="name[;secure-resource-details]]"
[-separator=privilege="sep_string"]
[-subseparator=privilege="subsep_string"]
[-profile="profile_name"]
[-desc="user_description"]
[-expired="true|false"]
[-prevent_change_password="true|false"]
[-department="department_name"]
[-cost_center="cost_center"]
[-line_of_business="line_of_business"]
[-contact="contact"]
[-location="location"]
[-input_file="arg_name:file_path"]
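For example, a single user could be created with a one-liner like this (the name, password, email address and role here are made up):

emcli create_user -name="jdoe" -password="Welcome123" -email="jdoe@example.com" -roles="EM_USER"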

The beautiful part of EM CLI is that it can be used with any scripting language. Since I like to use Perl, I decided to write a simple script that can be used to create a user from the command line using EM CLI.

#!/usr/bin/perl -w
use strict;
use warnings;

#Parameters
my $oem_home_bin = "$ENV{OMS_HOME}/bin";   # requires OMS_HOME to be set in the environment
my ($username, $passwd, $email) = @ARGV;
my $pwdchange = 'false';

#Program
if (not defined $username or not defined $passwd or not defined $email)
{
    print "\nUsage: perl ./emcli_create_em_user.pl username password email_address\n\n";
    exit;
}

system($oem_home_bin.'/emcli login -username=sysman');   # prompts for the SYSMAN password
system($oem_home_bin.'/emcli sync');
my $cmd = 'emcli create_user -name='.$username.' -password='.$passwd.' -email='.$email.' -prevent_change_password='.$pwdchange;
#print $cmd."\n";
system($oem_home_bin.'/'.$cmd);
system($oem_home_bin.'/emcli logout');

Now using this bit of code, I’m able to create users very rapidly using EM CLI with a command like this:

perl ./emcli_create_em_user.pl <username> <password for user> <email address>

Well, I hope this helps others look at and start using EM CLI when managing their EM environments.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs

Oracle Cloud Platform Launch Event on Monday June 22 - Integrate, Accelerate and Lead with the Oracle Cloud Platform

Watch live as Larry Ellison, Executive Chairman of the Board and Chief Technology Officer, Oracle, unveils new Oracle Cloud Platform services. Discover how you can drive innovation with the leading...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Overall I/O Query

Bobby Durrett's DBA Blog - Tue, 2015-06-16 14:57

I hacked together a query today that shows the overall I/O performance that a database is experiencing.

The output looks like this:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:59        359254               20              711636
2015-06-15 16:00:59        805884               16              793033
2015-06-15 17:00:13        516576               13              472478
2015-06-15 18:00:27        471098                6              123565
2015-06-15 19:00:41        201820                9              294858
2015-06-15 20:00:55        117887                5              158778
2015-06-15 21:00:09         85629                1               79129
2015-06-15 22:00:23        226617                2               10744
2015-06-15 23:00:40        399745               10              185236
2015-06-16 00:00:54       1522650                0               43099
2015-06-16 01:00:08       2142484                0               19729
2015-06-16 02:00:21        931349                0                9270

I’ve combined reads and writes and focused on three metrics: number of IOs, average IO time in milliseconds, and average IO size in bytes.  I think it is a helpful way to compare how two systems perform.  Here is the output from another, better system:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:25        331931                1              223025
2015-06-15 16:00:40        657571                2               36152
2015-06-15 17:00:56       1066818                1               24599
2015-06-15 18:00:11        107364                1              125390
2015-06-15 19:00:26         38565                1               11023
2015-06-15 20:00:41         42204                2              100026
2015-06-15 21:00:56         42084                1               64439
2015-06-15 22:00:15       3247633                3              334956
2015-06-15 23:00:32       3267219                0               49896
2015-06-16 00:00:50       4723396                0               32004
2015-06-16 01:00:06       2367526                1               18472
2015-06-16 02:00:21       1988211                0                8818

Here is the query:

select 
to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time",
sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs",
trunc(10*sum(after.READTIM+after.WRITETIM-before.WRITETIM-before.READTIM)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO time (ms)",
trunc((select value from v$parameter where name='db_block_size')*
sum(after.PHYBLKRD+after.PHYBLKWRT-before.PHYBLKRD-before.PHYBLKWRT)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO size (bytes)"
from DBA_HIST_FILESTATXS before, DBA_HIST_FILESTATXS after,DBA_HIST_SNAPSHOT sn
where 
after.file#=before.file# and
after.snap_id=before.snap_id+1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
group by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS')
order by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS');
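If you only want a recent window rather than all retained snapshots, one small tweak (my sketch, not part of the original query) is an extra predicate in the where clause, e.g. for the last 24 hours:

and sn.END_INTERVAL_TIME > sysdate - 1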

I hope this is helpful.

– Bobby

Categories: DBA Blogs

Indexing and Transparent Data Encryption Part III (You Can’t Do That)

Richard Foote - Tue, 2015-06-16 00:28
In Part II of this series, we looked at how we can create a B-Tree index on an encrypted column, provided we do not apply salt during encryption. However, this is not the only restriction with regard to indexing an encrypted column using column-based encryption. If we attempt to create an index that is not a […]
Categories: DBA Blogs

CRS-4995: The command ‘Modify resource’ is invalid in crsctl. Use srvctl for this command.

Oracle in Action - Mon, 2015-06-15 09:40


Today, in my 12.1.0.2 cluster, I encountered the above error message when I was trying to modify the ACL of an ASM cluster file system created on volume VOL1 in the DATA diskgroup, as follows:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'"

CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

I resolved the above problem by using the -unsupported flag as follows:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'" -unsupported
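To confirm that the attribute change took effect, the resource profile can be inspected; a quick check:

[root@host01 ~]# crsctl stat res ora.data.vol1.acfs -p | grep ACL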

 

Hope it helps!!

References:
Oracle Issue running 12.1.0.2 clusterware with 11.2.0.2 database



Categories: DBA Blogs

RMAN -- 2 : ArchiveLog Deletion Policy

Hemant K Chitale - Sat, 2015-06-13 08:54
Most Internet references about defining the ArchiveLog Deletion Policy relate to the necessity to preserve ArchiveLogs for Standby databases.

For example, the configuration here prevents deletion unless an ArchiveLog has been applied on a Standby :

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN>

But it is possible to configure it differently. For example, for a database without a Standby, I can configure it to prevent deletion unless a backup of the ArchiveLog has been made (to disk in this case):

RMAN> configure archivelog deletion policy to backed up 1 times to device type disk;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN>

Let's see how this plays out.

RMAN> sql 'alter system archive log current ';

sql statement: alter system archive log current

RMAN> delete archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=52 device type=DISK
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc thread=1 sequence=623
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc thread=1 sequence=624
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc thread=1 sequence=625
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc thread=1 sequence=626

RMAN>

RMAN raised a WARNING that indicates that deletion of the ArchiveLog is not permitted until a Backup has been taken.  Thus, you can protect your ArchiveLogs from deletion by RMAN commands if they have not been backed up.
NOTE: This does NOT prevent non-RMAN commands (e.g. cron jobs with shell scripts) from deleting ArchiveLogs!
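Also note that, even within RMAN, the policy can be overridden explicitly: the DELETE command's FORCE option is documented to delete the specified files regardless of the configured deletion policy. For example:

RMAN> delete force archivelog all;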

Let me back up and then delete the ArchiveLogs.

RMAN> backup as compressed backupset archivelog all;

Starting backup at 13-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=623 RECID=9 STAMP=882312517
channel ORA_DISK_1: starting piece 1 at 13-JUN-15
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=624 RECID=10 STAMP=882312537
input archived log thread=1 sequence=625 RECID=11 STAMP=882312552
input archived log thread=1 sequence=626 RECID=12 STAMP=882312557
channel ORA_DISK_2: starting piece 1 at 13-JUN-15
channel ORA_DISK_1: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwtfd_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=627 RECID=13 STAMP=882312730
channel ORA_DISK_1: starting piece 1 at 13-JUN-15
channel ORA_DISK_2: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwtg3_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwvp1_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JUN-15

Starting Control File and SPFILE Autobackup at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_13/o1_mf_s_882312732_bqrjwwsc_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13-JUN-15

RMAN> delete archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=52 device type=DISK
List of Archived Log Copies for database with db_unique_name ORCL
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
9 1 623 A 07-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc

10 1 624 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc

11 1 625 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc

12 1 626 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc

13 1 627 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_627_bqrjwt3k_.arc


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc RECID=9 STAMP=882312517
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc RECID=10 STAMP=882312537
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc RECID=11 STAMP=882312552
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc RECID=12 STAMP=882312557
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_627_bqrjwt3k_.arc RECID=13 STAMP=882312730
Deleted 5 objects


RMAN>

Now, I am able to delete the ArchiveLogs as I have at least 1 backup (on disk) of each.

.
.
.

Categories: DBA Blogs

Log Buffer #427: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-06-12 06:54

This Log Buffer Edition covers various blog posts from the last week regarding Oracle, SQL Server and MySQL.

Oracle:

  • Merging Overlapping Date Ranges with MATCH_RECOGNIZE
  • The latest version of Enterprise Manager, EM 12.1.0.5, has been announced!
  • Kdump is the Linux kernel crash-dump mechanism. In the event of a server crash, Kdump creates a memory image (vmcore) that can help in determining the cause of the crash.
  • APEX Connect Presentation and Download of the sample application
  • One of my favorite feature of ZFS is the I/O aggregation done in the final stage of issuing I/Os to devices.

SQL Server:

  • Reusing T-SQL Code is catching on.
  • SELECT INTO vs INSERT INTO on Columnstore
  • SQL Monitor Custom Metric: WriteLog wait time
  • Query Performance Tuning – A Methodical Approach
  • How to Get SQL Server Dates and Times Horribly Wrong

MySQL:

  • Hash-based workarounds for MySQL unique constraint limitations
  • Replicate MySQL to Amazon Redshift with Tungsten: The good, the bad & the ugly
  • Indexing MySQL JSON Data
  • Improving the Performance of MySQL on Windows
  • Auditing MySQL with McAfee and MongoDB

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Partner Webcast – Oracle Big Data & Business Analytics: NEOS SNA Solution for Telcos

Behind the hype of big data there's a simple story, as for decades companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data,...

We share our skills to maximize your revenue!
Categories: DBA Blogs