DBA Blogs

Oracle Direct NFS and Infiniband: A Less-Than-Perfect Match

Don Seiler - Wed, 2014-03-05 19:54
Readers of an earlier post on this blog will know about my latest forays into the world of Direct NFS. Part of that means stumbling over configuration hiccups or slamming into brick walls when you find new bugs.

To quickly re-set the table, my organization purchased the Oracle ZFS Storage Appliance (ZFSSA) 7420. Oracle sold us on the Infiniband connectivity as a way to make a possible future transition to Exadata easier. However the pre-sales POC testing was done over 10gb Ethernet (10gigE). So it was that everything (including their Infiniband switches and cables) arrived at the datacenter and was installed and connected by the Oracle technicians. There were a few initial hiccups and frustrating inconsistencies with their installation and configuration, but those are outside the scope of this post.

We decided to put a copy of our standby database on the ZFSSA and have it run as a second standby. The performance problems were quick to appear, and they weren't pretty.



Configuring Shares
We configured the ZFS project shares according to the common Oracle best practices in terms of ZFS recordsize and write bias. For example, datafile shares were set to an 8k recordsize (to match the db_block_size) and throughput write bias, whereas redo log shares were set to a 128k recordsize and latency bias. Note that with Oracle Database 12c, Direct NFS over NFSv4, and the more recent ZFSSA firmware, you gain the benefit of Oracle Intelligent Storage Protocol (OISP), which will determine the recordsize and write bias automatically based on the type of file it recognizes.
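
For reference, on a plain ZFS system the equivalent share properties would be set roughly like this (pool and share names here are made up; on the ZFSSA itself the same recordsize and logbias settings are exposed through the BUI/CLI share properties):

zfs set recordsize=8k mypool/mydb/datafile
zfs set logbias=throughput mypool/mydb/datafile
zfs set recordsize=128k mypool/mydb/redo
zfs set logbias=latency mypool/mydb/redo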

Copying the Database
To start out we needed to get a copy of the database onto the ZFSSA shares. This was easily done with RMAN's backup as copy database command, specifying the ZFSSA mount as the format destination. We were fairly impressed with the Direct NFS transfer speed during the copy, and so we were optimistic about how it would stand up to our production load.
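
For the curious, the copy itself boils down to a one-liner along these lines (the mount point and tag here are placeholders, not our actual paths):

RMAN> BACKUP AS COPY DATABASE FORMAT '/zfssa/mydb/datafile/%U' TAG 'ZFSSA_COPY';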

Starting Recovery!
Once everything was set, we started managed recovery on the standby. Our earlier excitement gave way to a sort of soul-crushing disappointment as the recovery performance basically ground to a standstill and traffic to the ZFSSA went from hundreds of Mbps to barely a trickle. We could stop recovery and copy a big file with great speed, but something in managed recovery was not playing nicely.
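
For reference, starting managed recovery is the usual one-liner, give or take the real-time apply and parallelism options for your setup:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;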

We found that we could disable Direct NFS (requires a database restart and software relinking), and managed recovery would actually perform better over the kernel NFS, although still not nearly as well as we would need.
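
Toggling Direct NFS is done by relinking the Oracle binary, roughly as follows (run as the database software owner, with the instances on that home shut down):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off   # relink with Direct NFS disabled
make -f ins_rdbms.mk dnfs_on    # relink with Direct NFS enabled again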

This started a blizzard of SR creations, including SRs being spawned from other SRs. We had SRs open for the ZFSSA team, the Direct NFS team, the Data Guard team, and even the Oracle Linux and Solaris teams, even though we were not on Oracle Linux or Solaris (we use RHEL). It came to a point where I had to tell our account manager to have support stop creating new SRs, since every new SR meant I had to explain the situation to a new technician all over again.

At this point we were having twice-daily conference calls with our account manager and technical leads from the various departments. Their minions were working hard on their end to replicate the problem and find a solution, but we were running into a 4th week of this craziness.

The Infiniband Bandit
After many frustrating weeks of changing configurations, cables, cards, and just generally grasping at straws, it was finally narrowed down to the Infiniband. Or rather, to a bug in the OpenFabrics (OFA) Linux kernel module that dealt with Infiniband, triggered when Direct NFS would fire off a whole lot of connections, like when Data Guard managed recovery would fire up 80 parallel slaves. We tested out the 10gigE channel we had for the management UI and performance was like night and day with just the one channel.

Oracle Support suggested it might be related to bug 15824316, which also deals with dramatic performance loss with Direct NFS over Infiniband. The bug in the OFA kernel module was fixed in recent versions of Oracle Enterprise Linux (OEL) (specifically the UEK kernel), but Oracle is not sharing this fix with Red Hat (or anyone else, presumably). Since we're on RHEL, we had little choice but to send all the Infiniband networking hardware back and order up some 10gigE replacements.

We're still in the process of getting the 10gigE switches and cables all in place for the final production setup. If you're curious, it's four 10gigE cards per server, bonded to a single IP, connected through a 10gigE switch into the ZFSSA heads. This 10gigE network is dedicated exclusively to ZFSSA traffic.

So, in the end, if you're on (a recent version of) OEL/UEK, you should have nothing to worry about. But if you're on RHEL and planning to use Direct NFS, you're going to want to use 10gigE and NOT Infiniband.

Update - 6 Mar 2014
Some have asked, and I want to re-iterate: Oracle have claimed that the OFA module was entirely re-written, and that their fix is specific to OEL and is not covered by GPL or any similar license. We were told that they have no plans to share their code with RHEL. Also, there is no MOS bug number for the OFA issue; it was apparently re-written from scratch with no bug to track the issue. If this all sounds rather dubious to you, join the club. But it's what our account manager told us at the end of last year.

Another Update - 6 Mar 2014
Bjoern Rost and I discussed this privately, and after quite a bit of research and analysis he shared this conclusion:

Oracle Support suggested that this issue would not occur with the OFA module used in Oracle Linux with the UEK kernel. RedHat changed their support in RHEL6 from shipping the whole openfabrics stack to just including the drivers that were also present in the upstream mainline kernel. This is RedHat’s policy to ensure stability in the version of the kernel they ship. Oracle offers an OFA package with some additional patches (all GPL/BSD license) for the UEKr1 and UEKr2 kernels. Unfortunately, these two different approaches make it very hard to pinpoint specific patches or create backports for the RedHat kernel version.
Categories: DBA Blogs

The Danger of Moving Incrementally Updated Datafile Copies

Don Seiler - Fri, 2014-02-28 11:20
When I sat down at my desk yesterday morning I was greeted with some disturbing email alerts notifying me that one of the NFS mounts on my standby database host was full. This was the NFS mount that held an image copy of my database that is updated daily from an incremental backup. The concept and an example can be found in the documentation. With a 25 TB database, waiting to restore from backups is not as appealing as simply switching to the copies and getting back to business.
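
For context, the incrementally updated copy strategy from the documentation boils down to a daily script along these lines (the tag name is just a placeholder):

RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_update';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update' DATABASE;
}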

We quickly saw that the reason that this mount was full was that RMAN had tried to make another set of image copies in the latest backup run rather than take an incremental backup for recovery. It does this when it finds no valid copy of the datafiles to increment, and the logs confirmed this to be the reason:

Starting backup at 2014/02/26 13:30:16
no parent backup or copy of datafile 792 found
no parent backup or copy of datafile 513 found
no parent backup or copy of datafile 490 found
no parent backup or copy of datafile 399 found

... and so on, for every datafile. However I knew that the copies that had been there (and been updated) every day were still there. So what was different?

It was then that I remembered my work from the day before. Doing a bit of re-organization, I renamed the directory where the datafile copies lived. However I made sure to re-catalog them and double-checked to make sure the backup tag was still there, which it was. I also crosschecked the copies to mark the old entries as expired and then deleted them from the catalog. This turned out to be the cause of the problem.

When the original datafilecopy entries were removed from the catalog, RMAN didn't want to recognize the new entries as the right copies, even though they were literally the same file, with the same backup tag. And so RMAN printed the message you see above and dutifully began making new image copies until it filled up the mountpoint, which didn't have another spare 25 TB handy.

Today I was able to duplicate the scenario on a (much smaller) sandbox with various sequences. Every time, once I crosschecked the original copies and deleted them as expired, RMAN would create a new copy on the next run. The sequence was basically this:
  1. Run backup-for-recovery and recover commands. First time will create datafile copies as expected.
  2. Run it again, this time it will create an incremental backup and then apply it to the copies made in the previous step.
  3. Move or rename the directory holding the copies.
  4. CROSSCHECK COPY; & DELETE EXPIRED COPY;
  5. CATALOG START WITH '/path/to/new/location/';
  6. LIST DATAFILECOPY ALL; to verify that the copies are registered under the new location and the TAG is right.
  7. Run backup-for-recovery and recover commands (be sure to update the location). I would expect the same results as step 2, but instead new copies are created.
One thing that was very interesting was that if I just cataloged the new location, but did not crosscheck or delete the old entries (i.e. skipped step 4), then I could run the script and it would take an incremental backup as planned and recover the copies in the new location. But then if I later did the crosscheck and delete, it would not accept those copies and would create new copies. And all this time I could run "list datafilecopy all;" and see both copies with the same tags. Changing the order of steps 4 and 5 made no difference.

I'd be interested to know what anyone else thinks about it. Personally it seems like a bug to me, so I've opened an SR. So far Oracle Support have confirmed what I've experienced, although they have said there is no bug on file. They suggested I use Doc ID 1578701.1 to make another copy of the datafile with a new tag and use that new tag. However if I wanted to do that I would just create a new database copy and keep using the original tag, which is exactly what I've done.

I will be sure to update this post with anything I find. Until then, I wanted to share this experience for anyone else that might need or want to move their datafile copies if they are part of an incrementally-updated-backup strategy.
Categories: DBA Blogs

SQL Developer’s PL/SQL Debugger and Oracle 12c

Galo Balda's Blog - Mon, 2014-02-17 17:54

The PL/SQL Debugger works pretty much out of the box when used with a previous Oracle version. These are the things we needed in place before we could start debugging PL/SQL:

  1. A grant of the DEBUG CONNECT SESSION privilege.
  2. EXECUTE privilege on DBMS_DEBUG_JDWP.
  3. EXECUTE privilege on the stored procedure you want to debug.
  4. Make sure the stored procedure is “Compiled for Debug”.
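
As a rough sketch, those prerequisites translate into statements like these (the user and procedure names are just placeholders):

GRANT DEBUG CONNECT SESSION TO scott;
GRANT EXECUTE ON SYS.DBMS_DEBUG_JDWP TO scott;
GRANT EXECUTE ON demo_schema.demo_proc TO scott;
ALTER PROCEDURE demo_schema.demo_proc COMPILE DEBUG;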

Jeff Smith talks about it in this post.

But what happens when you use Oracle 12c? You still need all the stuff that I mentioned but that’s not enough. See the error I got when I was trying to debug a procedure.

Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP( '192.168.0.10', '49428' )
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.DBMS_DEBUG_JDWP", line 68
ORA-06512: at line 1
Process exited.
Disconnecting from the database SCOTT – ORA12CPDB1

Starting with Oracle 12c, if you want to debug PL/SQL stored procedures in the database through a Java Debug Wire Protocol (JDWP)-based debugger, such as SQL Developer or JDeveloper, then you must be granted the jdwp ACL privilege to connect your database session to the debugger at a particular host.

This is one way you can configure network access for JDWP operations:

BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE
  (
    host       => '192.168.0.10',
    lower_port => null,
    upper_port => null,
    ace        => xs$ace_type(privilege_list => xs$name_list('jdwp'),
                              principal_name => 'scott',
                              principal_type => xs_acl.ptype_db)
  );
END;
/

Host can be a host name, domain name, IP address, or subnet.

Principal name in the access control entry (ACE) section is the schema that holds the stored procedures you want to debug.

Now the error goes away and you can start your debugging task.

Connecting to the database SCOTT – ORA12CPDB1.
Executing PL/SQL: ALTER SESSION SET PLSQL_DEBUG=TRUE
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP( '192.168.0.10', '49428' )
Debugger accepted connection from database on port 49428.
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.DISCONNECT()
Inside the Procedure
Process exited.
Disconnecting from the database SCOTT – ORA12CPDB1.
Debugger disconnected from database.

I hope this post saves you some time when you migrate to 12c.


Filed under: 12C, SQL Developer Tagged: 12C, SQL Developer
Categories: DBA Blogs

Kafka or Flume?

Chen Shapira - Tue, 2014-02-11 16:25

A question that keeps popping up is “Should we use Kafka or Flume to load data to Hadoop clusters?”

This question implies that Kafka and Flume are interchangeable components. It makes as much sense to me as “Should we use cars or umbrellas?”. Sure, you can hide from the rain in your car and you can use your umbrella when moving from place to place. But in general, these are different tools intended for different use-cases.

Flume’s main use-case is to ingest data into Hadoop. It is tightly integrated with Hadoop’s monitoring system, file system, file formats, and utilities such as Morphlines. A lot of the Flume development effort goes into maintaining compatibility with Hadoop. Sure, Flume’s design of sources, sinks and channels means that it can be used to move data between other systems flexibly, but the important feature is its Hadoop integration.
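
To make the source/channel/sink idea concrete, here is a minimal, hypothetical Flume agent configuration that tails a log file into HDFS (the host names and paths are made up):

agent.sources = logsrc
agent.channels = memch
agent.sinks = hdfssink

agent.sources.logsrc.type = exec
agent.sources.logsrc.command = tail -F /var/log/app/app.log
agent.sources.logsrc.channels = memch

agent.channels.memch.type = memory

agent.sinks.hdfssink.type = hdfs
agent.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/data/logs
agent.sinks.hdfssink.channel = memch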

Kafka’s main use-case is a distributed publish-subscribe messaging system. Most of the development effort is involved with allowing subscribers to read exactly the messages they are interested in, and in making sure the distributed system is scalable and reliable under many different conditions. It was not written to stream data specifically for Hadoop, and using it to read and write data to Hadoop is significantly more challenging than it is in Flume.

To summarize:
Use Flume if you have non-relational data sources, such as log files, that you want to stream into Hadoop.
Use Kafka if you need a highly reliable and scalable enterprise messaging system to connect multiple systems, one of which is Hadoop.


Categories: DBA Blogs

RMOUG Training Days 2014

Galo Balda's Blog - Sun, 2014-02-02 11:29

I’ll be presenting on February 6th. Here are the details of my session:

SQL Pattern Matching in Oracle 12c

Room 501, 11:15 – 12:15

Last year I had a great time, so I can’t wait to be back in Denver.

I hope to see you there!

Updated on 02/17/2014: The presentation is available on Slideshare


Filed under: 12C, RMOUG, SQL Tagged: 12C, RMOUG, SQL
Categories: DBA Blogs

My Latin American Tour in March

Oracle NZ - Thu, 2014-01-23 19:00

In March I will be presenting my very successful seminar “Mastering Backup and Recovery” in several countries of Latin America for the very first time. Thank you Ecuador, Panama, Chile and Brazil OUGs for inviting me to your amazing countries!

Please, use the following links for registration and also to find more information about the seminar:

  • Quito, Ecuador – March 13-15 (2014) Click Here
  • Panama City, Panama – March 17-18 (2014) Click Here
  • Porto Alegre, Brazil – March 20-21 (2014) Click Here
  • Sao Paulo, Brazil – March 24-25 (2014) Click Here
  • Santiago, Chile – March 27-28 (2014) Click Here

 

 

Note: the next 10 people in each country to register before February 15 will receive a free signed copy of my book “Oracle Database 12c Backup and Recovery Survival Guide”.

Regards,

Francisco Munoz Alvarez



Categories: DBA Blogs

Join me at the next OTN Virtual Developer Day

Oracle NZ - Thu, 2014-01-23 18:18

Hi All,

On February 4, 2014 at 9:30 am PT I will be talking at the next OTN Virtual Developer Day about Oracle VM and Oracle Database. Come and discover the answers to the following questions:

  • Does an Oracle Database perform well on a virtualized environment?
  • Which virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between using a bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

Come and join me at this fantastic event. Registration and more information here.

Regards,

 

Francisco Munoz Alvarez



Categories: DBA Blogs

Problems when doing a V2V or P2V of a windows server to Oracle VM

Oracle NZ - Thu, 2014-01-23 18:06

Some time ago, I received a request to migrate some Windows 2008 Servers to my Oracle VM farm, and after completing the P2V migration the newly created VM would not start. It would crash on boot with a blue screen and the error:
STOP: 0x0000007B (0xXXXXXXXX,0xXXXXXXXX,0x00000000,0x00000000)

This issue is related to the storage drivers needed for the guest VM. To solve it, do the following on your Windows Server before starting the P2V or V2V migration:

  1. Extract the Atapi.sys, Intelide.sys, Pciide.sys, and Pciidex.sys files from the %SystemRoot%\Driver Cache\I386\Driver.cab file, and copy them to the %SystemRoot%\System32\Drivers folder.
  2. Merge this registry file (NOTE: you will need access to My Oracle Support to be able to download it).
  3. Turn off the virtual machine, and then try to import it into Oracle VM again.

You can also find more information regarding this problem on the following Microsoft link:

http://support.microsoft.com/kb/314082

Hope this helps if you are having the same issue I had.

Regards,

Francisco Munoz Alvarez



Categories: DBA Blogs

Quick Tip on Using Sqoop Action in Oozie

Chen Shapira - Wed, 2014-01-08 18:59

Another Oozie tip blog post.

If you try to use Sqoop action in Oozie, you know you can use the “command” format, with the entire Sqoop configuration in a single line:

<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
    ...
    <action name="myfirsthivejob">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <command>import  --connect jdbc:hsqldb:file:db.hsqldb --table TT --target-dir hdfs://localhost:8020/user/tucu/foo -m 1</command>
        </sqoop>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>
    ...
</workflow-app>

This is convenient, but can be difficult to read and maintain. I prefer using the “arg” syntax, with each argument in its own line:

<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
    ...
    <action name="myfirsthivejob">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <arg>import</arg>
            <arg>--connect</arg>
            <arg>jdbc:hsqldb:file:db.hsqldb</arg>
            <arg>--table</arg>
            <arg>TT</arg>
            <arg>--target-dir</arg>
            <arg>hdfs://localhost:8020/user/tucu/foo</arg>
            <arg>-m</arg>
            <arg>1</arg>
        </sqoop>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>
    ...
</workflow-app>

As you can see, each argument here is in its own “arg” tag. Even two arguments that belong together like “--table” and “TT” go in two separate tags.
If you try to put them together for readability, as I did, Sqoop will throw its entire user manual at you. It took me a while to figure out why this is an issue.

When you call Sqoop from the command line, all the arguments you pass are sent as a String[] array, and the spaces separate the arguments into array elements. So if you call Sqoop with “--table TT” it will be two elements, “--table” and “TT”.
When using “arg” tags in Oozie, you are basically generating the same array in XML. Oozie will turn the XML argument list into an array and pass it to Sqoop just the way the command line would. Then Sqoop parses it in exactly the same way.
So every item separated by a space on the command line must be in a separate tag in Oozie.

It’s simple and logical once you figure out why. :)
If you want to dig a bit more into how Sqoop parses its arguments, it is using Apache Commons CLI with GnuParser. You can read all about it.


Categories: DBA Blogs

Why Oozie?

Chen Shapira - Mon, 2013-12-09 13:04

That’s a really frequently asked question. Oozie is a workflow manager and scheduler. Most companies already have a workflow scheduler – Activebatch, Autosys, UC4, HP Orchestration. These workflow schedulers run jobs on all their existing databases – Oracle, Netezza, MySQL. Why does Hadoop need its own special workflow scheduler?

As usual, it depends. In general, you can keep using any workflow scheduler that works for you. No need to change, really.
However, Oozie does have some benefits that are worth considering:

  1. Oozie is designed to scale in a Hadoop cluster. Each job will be launched from a different datanode. This means that the workflow load will be balanced and no single machine will become overburdened by launching workflows. This also means that the capacity to launch workflows will grow as the cluster grows.
  2. Oozie is well integrated with Hadoop security. This is especially important in a kerberized cluster. Oozie knows which user submitted the job and will launch all actions as that user, with the proper privileges. It will handle all the authentication details for the user as well.
  3. Oozie is the only workflow manager with built-in Hadoop actions, making workflow development, maintenance and troubleshooting easier.
  4. Oozie UI makes it easier to drill down to specific errors in the data nodes. Other systems would require significantly more work to correlate jobtracker jobs with the workflow actions.
  5. Oozie is proven to scale in some of the world’s largest clusters. The white paper discusses a deployment at Yahoo! that can handle 1250 job submissions a minute.
  6. Oozie gets callbacks from MapReduce jobs so it knows when they finish and whether they hang without expensive polling. No other workflow manager can do this.
  7. Oozie Coordinator allows triggering actions when files arrive at HDFS. This will be challenging to implement anywhere else.
  8. Oozie is supported by Hadoop vendors. If there is ever an issue with how the workflow manager integrates with Hadoop – you can turn to the people who wrote the code for answers.

So, should you use Oozie? If you find these benefits compelling, then yes. Step out of your comfort zone and learn another new tool. It will be worth it.


Categories: DBA Blogs

Parameterizing Hive Actions in Oozie Workflows

Chen Shapira - Fri, 2013-12-06 12:30

A very common request I get from my customers is to parameterize the query executed by a Hive action in their Oozie workflow.
For example, the dates used in the query depend on a result of a previous action. Or maybe they depend on something completely external to the system – the operator just decides to run the workflow on specific dates.

There are many ways to do this, including using EL expressions, or capturing output from a shell action or a java action.
Here’s an example of how to pass the parameters through the command line. This assumes that whoever triggers the workflow (a human or an external system) has the correct value and just needs to pass it to the workflow so it will be used by the query.

Here’s what the query looks like:

insert into test select * from test2 where dt=${MYDATE}

MYDATE is the parameter that allows me to run the query on a different date each time. When running this query in Hive, I’d use something like “set MYDATE='2011-10-10'” before running the query. But when I run it from Oozie, I need to pass the parameter to the Hive action.

Let’s assume I saved the query in a file hive1.hql. Here’s what the Oozie workflow would look like:

<workflow-app name="cmd-param-demo" xmlns="uri:oozie:workflow:0.4">
	<start to="hive-demo"/>
	<action name="hive-demo">
		<hive xmlns="uri:oozie:hive-action:0.2">
			<job-tracker>${jobTracker}</job-tracker>
			<name-node>${nameNode}</name-node>
			<job-xml>${hiveSiteXML}</job-xml>
			<script>${dbScripts}/hive1.hql</script>
			<param>MYDATE=${MYDATE}</param>
		</hive>
		<ok to="end"/>
		<error to="kill"/>
	</action>
	<kill name="kill">
		<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
	</kill>
	<end name="end"/>
</workflow-app>

The important line is “MYDATE=${MYDATE}”. Here I translate an Oozie parameter to a parameter that will be used by the Hive script. Don’t forget to copy hive-site.xml and hive1.hql to HDFS! Oozie actions can run on any datanode and will not read files on the local file system.

And here’s how you call Oozie with the command-line parameter:
oozie job -oozie http://myserver:11000/oozie -config ~/workflow/job.properties -run -verbose -DMYDATE='2013-11-15'
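
For completeness, a job.properties along these lines would supply the other variables referenced in the workflow (all host names and paths here are made up):

nameNode=hdfs://namenode:8020
jobTracker=jobtracker:8021
hiveSiteXML=${nameNode}/user/me/workflow/conf/hive-site.xml
dbScripts=${nameNode}/user/me/workflow/scripts
oozie.wf.application.path=${nameNode}/user/me/workflow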

That’s it!


Categories: DBA Blogs

Joint Webinar with Oracle – Top 5 Key Reasons Why Oracle VM is Best for Oracle Database

Oracle NZ - Wed, 2013-12-04 14:50

Top 5 Key Reasons Why Oracle VM is Best for Oracle Database

When: Tuesday, December 17th, 10am PST

Register Here:

http://event.on24.com/r.htm?e=725350&s=1&k=1512E332202610FE9518AB8B01354C6A&partnerref=Francisco

Oracle VM's application-driven architecture is designed for rapid deployment and ease of lifecycle management, providing IT with a highly scalable and cost-effective virtualization solution for Oracle Database and business-critical applications.

Join us in this webcast featuring Francisco Munoz Alvarez, a seasoned expert and Oracle ACE Director, as he shares his performance benchmark findings highlighting the key reasons why Oracle VM is the best virtualization technology for Oracle Databases.



Categories: DBA Blogs

Using Oozie in Kerberized Cluster

Chen Shapira - Fri, 2013-11-22 09:14

In general, most Hadoop ecosystem tools work rather transparently in a kerberized cluster. Most of the time things “just work”. This includes Oozie. Still, when things don’t “just work”, they tend to fail with slightly alarming and highly ambiguous error messages. Here are a few tips for using Oozie when your Hadoop cluster is kerberized. Note that this is a client/user guide. I assume you already followed the documentation on how to configure the Oozie server in the kerberized cluster (or you are using Cloudera Manager, which magically configures it for you).

    1. As always, use “kinit” to authenticate with Kerberos and get your tgt before trying to run oozie commands. Verify with klist. Failure to do this will result in “No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)”
    2. I always enable security debug messages. This helps me troubleshoot, and also helps when I need to ask support/mailing list for help.
      export HADOOP_ROOT_LOGGER=TRACE,console;
      export HADOOP_JAAS_DEBUG=true;
      export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
    3. Your Oozie command typically contains a URL. Something like “oozie -url http://myserver:11000/oozie -conf job.properties -run” The server name in the URL must match an existing principal name in Kerberos. If your principals are actually “myserver.mydomain.com” make sure you use that in the URL.
    4. If you decide to use CURL to connect to your Oozie server, either for troubleshooting or for using the REST API, don’t forget to use “–negotiate -u foo:bar”. The actual username and password don’t matter (you are authenticating with your Kerberos ticket), but CURL throws a fit if they don’t exist.
    5. If you have Hive action in your Oozie workflow, you need to define and use credentials. Here’s an example:
      <workflow-app xmlns="uri:oozie:workflow:0.2.5" name="example-wf">
              <credentials>
                      <credential name='hive_credentials' type='hcat'>
                              <property>
                                  <name>hcat.metastore.uri</name>
                                  <value>thrift://metastore_server:9083</value>
                              </property>
                              <property>
                                  <name>hcat.metastore.principal</name>
                                  <value>hive/_HOST@KERBDOM.COM</value>
                              </property>
                      </credential>
              </credentials>
      <start to="hive-example"/>
      <action name="hive-example" cred="hive_credentials">
              <hive xmlns="uri:oozie:hive-action:0.2">
                      <job-tracker>${jobTracker}</job-tracker>
                      <name-node>${nameNode}</name-node>
                      <job-xml>${hiveSiteXML}</job-xml>
                      <script>${dbScripts}/hive-example.hql</script>
              </hive>
              <ok to="end"/>
              <error to="fail"/>
      </action>
      <kill name="fail">
              <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
      </kill>
      <end name="end"/>
      </workflow-app>
      
    6. To make step #5 actually work (i.e. allow Oozie to run Hive actions), you will also need to do the following in Cloudera Manager (CM):

      – go to “HDFS Service->Configuration->Service-Wide->Advanced->Cluster-wide Configuration Safety Valve for core-site.xml” and add:

      <property>
      <name>hadoop.proxyuser.oozie.hosts</name>
      <value>*</value>
      </property>
      <property>
      <name>hadoop.proxyuser.oozie.groups</name>
      <value>*</value>
      </property>
      

      – go to “Oozie service->Configuration->Oozie Server(default)->Advanced-> Oozie Server Configuration Safety Valve for oozie-site.xml” and add:

      <property>
      <name>oozie.credentials.credentialclasses</name>
      <value>hcat=org.apache.oozie.action.hadoop.HCatCredentials</value>
      </property>
      

      – Deploy client configuration and restart Hive service and Oozie service.

    7. Oozie doesn’t kinit a user for you on the node it’s launching the action on, and it doesn’t move principals and tickets around. Instead it uses delegation tokens. If you want to authenticate to Hadoop inside a shell or java action, you’ll need to use the same tokens.

      In a shell action, it will be something like:

      if [ -z ${HADOOP_TOKEN_FILE_LOCATION} ]
      then
      	hive -e "select x from test" -S
      else
      	hive -e "SET mapreduce.job.credentials.binary=$HADOOP_TOKEN_FILE_LOCATION; select x from test" -S
      fi
      

      In Java it will be:

      if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
                  jobConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
      }
      

    Hope this helps! Feel free to comment with questions, especially if you ran into errors that I did not address. I’ll be happy to add more tips to the list.


Categories: DBA Blogs

When error say nothing about real issue

Virag Sharma - Thu, 2013-10-24 22:18
We wanted to restore a backup from the source system, so we took a backup on the source and copied it to /dba/share/MYDB on the target. My colleague sent me an email saying the restore failed with the following error:


rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Sep 18 20:47:38 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area    8551575552 bytes

Fixed Size                     2245480 bytes
Variable Size                503319704 bytes
Database Buffers            8036286464 bytes
Redo Buffers                   9723904 bytes

RMAN> set DBID=2387922???;

executing command: SET DBID

RMAN> restore controlfile from '/dba/share/MYDB/c-2387922???-20130910-00.ctl';

Starting restore at 18-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=391 device type=DISK

channel ORA_DISK_1: no AUTOBACKUP in 7 days found
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/18/2013 20:48:04
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece




We searched MOS and the forums. Most of the results pointed to the backup being too old (more than 7 days), and some documents suggested the backup was corrupted, and so on. Instead of going through the whole story of how we reached the conclusion, let me just tell you the finding.

We found that the mount options for /dba/share/MYDB (where we copied the backup from the source) were not correct, which is why we were getting the error. We changed them to:

           rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768,nfsvers=3,tcp,noacl 0 0

After this change we were able to restore the control file and the rest of the backup successfully.
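
In other words, the /etc/fstab entry ended up looking something like this (the NFS server name and export path here are placeholders):

nfsserver:/export/backup/MYDB  /dba/share/MYDB  nfs  rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768,nfsvers=3,tcp,noacl  0 0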

Categories: DBA Blogs

Three Things To Do Before Starting Hadoop Project

Chen Shapira - Sun, 2013-10-20 11:45

I spent the last 6 months helping customers design, implement and deploy successful Hadoop systems. Over time, you start seeing patterns. Certain things that, if the customer gets them right before the project even starts, increase the probability that the project will finish successfully and on time.

Let’s assume you already took the most important step – you have an actual use-case or a problem to solve, and you think Hadoop could be the right technology to use in this case. What now?

  1. Learn how to automate production deployments: 
    Yes, Cloudera Manager would do a lot for you. But occasionally you’ll need to run the same command on 20+ servers. That’s what large clusters are all about. So get used to the idea from the beginning. Learn how to write loops in bash, how to ssh into remote machines to run commands, how to distribute files and restart services (a minimal sketch follows this list).
    You can build your own tools (I had for years), but these days you can just pick your favorite automation tool and use that instead. My favorite is Ansible. It works exactly the way I would have written a cluster automation tool, so learning it never felt like an effort and its usage is never surprising or unintuitive to me. Others prefer Puppet, Chef or Cfengine. It doesn’t matter what you use, but when I show up at your office as your Cloudera Solutions Architect and ask you to update sysctl.conf on your 50 node cluster, I don’t want you to look surprised or alarmed, or tell me it will take a few hours.
  2. Don’t try to boil the ocean:
    Hadoop implementation is often the first chance the dev/ops teams get to do something completely new. There is a blank slate, a white sheet of paper, and you can design the perfect system. Fixing every problem the old system had and building functionality you always dreamed of. Better security! Machine learning! Open source!
    I say – Using Hadoop successfully is a large and challenging project. Changing organizational processes and culture toward better security and processes is a large and challenging project. Creating a data-driven organization is a huge project. Mixing them doesn’t give you three projects for the price of one. It isn’t even just three times more challenging than one project; I’d say the risk is an order of magnitude higher, and the risk of just implementing Hadoop is high enough. Especially in the early stages. Which brings us to…
  3. Do a POC:
     Pay Cloudera or do it yourself. Either way, you need a POC. 
    If you start with a 12 month project, you will have to do a lot of design upfront. At a time when you have too little information. At the beginning of a large project, you won’t know for sure how the system will be used and you probably won’t know enough about Hadoop. Sure, you can call Cloudera Services and discuss the design with us, but even we can sometimes (rarely!) get things wrong. With 12 month projects you will be very deep in the project before you’ll find that limitation we completely forgot to mention.
    Be agile (really agile, a 6 month project with daily scrum doesn’t count): Solve the smallest useful problem first. Implement just a single workflow, single statistical analysis, parse and search data from one source. Whatever is useful for your users – do it first. Learn in the process and build from there. This will allow you to build experience and iron out issues as the system’s usefulness, load and importance grow.
  4. (Bonus tip) Get the most out of the POC: 
    Not all POCs are created equal. Sometimes the customer hires us to “prove that Hadoop can do X”. We get very specific requirements and very short time-frame, and we need to build a system that does X. I can see why customers need proof that their vendor can deliver. But this approach is of limited value. Because at the end of the POC you are left with a non-production system and you don’t know more Hadoop than you knew before. I love teaching, but when the attitude is “prove us this works” and the requirements are inflexible, there isn’t much time for discussions and casual learning. Most of the knowledge transfer will only happen in the delivery document, which is not the same as lively discussions.
    A better POC happens when a customer brings in Cloudera to help them build their first Hadoop project. There are still time and scope constraints, but now the POC is not about “Prove to us this works” but rather “Help us make it work”. We work together as a team. We will brainstorm design possibilities with you and share best practices. We will teach you how to build the system, how to configure it and troubleshoot it. You get a chance to learn all our little tricks of development and deployment that make life easier. At the end of the POC, your team will have real Hadoop expertise, relevant to your specific system, problems, culture, data and requirements. I see this as the best investment you can make in Hadoop for your organization.
    But I may be a bit biased.
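
As a trivial illustration of the kind of ad-hoc automation mentioned in point 1 (the host list file and the sysctl change are hypothetical), even plain bash gets you a long way before you settle on Ansible or Puppet:

# run the same change on every node in the cluster
for host in $(cat cluster_hosts.txt); do
  ssh "$host" "echo vm.swappiness=1 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p"
done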

 


Categories: DBA Blogs

Control Charts

Chen Shapira - Sun, 2013-10-13 22:43

Last week, while working on a customer engagement, I learned a new method of quantifying the behavior of time-series data. The method is called “Control Chart” and credit goes to Josh Wills, our director of data science, for pointing it out. I thought I’d share it with my readers as it’s easy to understand, easy to implement, flexible and very useful in many situations.

The problem is ages old – you collect measurements over time and want to know when your measurements indicate abnormal behavior. “Abnormal” is not well defined, and that’s on purpose – we want our method to be flexible enough to match what you define as an issue.

For example, let’s say Facebook is interested in tracking the usage trend for each user, catching those with decreasing use.

There are a few steps to the Control Chart method:

  1. Collect all relevant data points. In our case, number of minutes of Facebook use per day for each user.
  2. Calculate a baseline – this can be average use for each user or average use for similar demographics. You could even use an adaptive average of the type calculated by Oracle Enterprise Manager, to take into account decreased Facebook use over the weekend.
  3. Calculate “zones” one, two and three standard deviations around the baseline

Those zones can be used to define rules for normal and abnormal behaviors of the system. These rules are what makes the system valuable.
Examples of rules that define abnormal behavior can be:

  1. Any point 3 standard deviations above the baseline. This will indicate extreme sudden increase.
  2. 7 consecutive measurements more than one standard deviation over the baseline. This indicates a sustained increase.
  3. 9 consecutive measurement each higher than previous one. This indicates steady upward trend.
  4. 6 consecutive measurements each more than two standard deviations away from the baseline, each one on a different side of the baseline than the previous measurement. This indicates instability of the system.
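
As a rough illustration of how little code this takes, here is a minimal Java sketch that uses a simple mean as the baseline and checks only the first two rules above:

import java.util.ArrayList;
import java.util.List;

public class ControlChart {
    public static void main(String[] args) {
        double[] values = {10, 11, 9, 10, 30, 12, 13, 13, 14, 13, 12, 13, 14};

        // Baseline (mean) and standard deviation over the series
        double sum = 0;
        for (double v : values) sum += v;
        double baseline = sum / values.length;

        double sqSum = 0;
        for (double v : values) sqSum += (v - baseline) * (v - baseline);
        double sd = Math.sqrt(sqSum / (values.length - 1));

        List<String> flags = new ArrayList<>();
        for (int i = 0; i < values.length; i++) {
            // Rule 1: a single point more than 3 standard deviations above the baseline
            if (values[i] > baseline + 3 * sd) flags.add("spike at " + i);

            // Rule 2: 7 consecutive points more than 1 standard deviation above the baseline
            if (i >= 6) {
                boolean sustained = true;
                for (int j = i - 6; j <= i; j++) {
                    if (values[j] <= baseline + sd) { sustained = false; break; }
                }
                if (sustained) flags.add("sustained increase ending at " + i);
            }
        }
        System.out.println(flags);
    }
}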

There are even sets of standard rules used in various industries, best practices of sorts. Western Electric rules and Nelson rules are particularly well known.

Note how flexible the method is – you can use any combination of rules that will highlight abnormalities you are interested in highlighting.
Also note that while the traditional use indeed involves charts, the values and rules are very easy to calculate programmatically, and visualization can be useful but not mandatory.
If you measure CPU utilization on a few servers, visualizing the chart and actually seeing the server behavior can be useful. If you are Facebook and monitor user behavior, visualizing a time series for every one of your millions of users is hopeless. Calculating baselines, standard deviations and rules for each user is trivial.

Also note how this problem is “embarrassingly parallel”. To calculate behavior for each user, you only need to look at data for that particular user. A parallel, share-nothing platform like Hadoop can be used to scale the calculation indefinitely simply by throwing an increasing number of servers at the problem. The only limit is the time it takes to calculate the rules for a single user.

Naturally, I didn’t dive into some of the complexities of using Control Charts, such as how to select a good baseline, how to calculate the standard deviation (or whether to use another statistic to define zones) and how many measurements should be examined before a behavior signals a trend. If you think this tool is useful for you, I encourage you to investigate more deeply.

 


Categories: DBA Blogs

Big Data News from Oracle OpenWorld 2013

Chen Shapira - Sun, 2013-10-06 20:00

Only a week after Oracle OpenWorld concluded, I already feel like I’m hopelessly behind on posting news and impressions. Behind or not, I have news to share!

The most prominent feature announced at OpenWorld is the “In-Memory Option”  for Oracle Database 12c.  This option is essentially a new part of the SGA that caches tables in column formats. This is expected to make data warehouse queries significantly faster and more efficient. I would have described the feature in more details, but Jonathan Lewis gave a better overview in this forum discussion, so just go read his post.

Why am I excited about a feature that has nothing to do with Hadoop?

First, because I have a lot of experience with large data warehouses. So I know that big data often means large tables, but only a few columns used in each query. And I know that in order to optimize these queries and to avoid expensive disk reads every time each query runs, we build indexes on those columns, which makes data loading slow. The In-Memory option will allow us to drop those indexes and just store the columns we need in memory.

Second, because I’m a huge fan of in-memory data warehouses, and am happy that Oracle is now making these feasible. A few TB of memory in a large server is no longer science fiction, which means that most of your data warehouse will soon fit in memory. Fast analytics for all! And what do you do with the data that won’t fit in memory? Perhaps store it in your Hadoop cluster.

Now that I’m done being excited about the big news, let’s talk about the small news that you probably didn’t notice but should have.

Oracle announced two cool new features for the Big Data Appliance. “Announced” may be a big word – Larry Ellison did not stand up on stage and talk about them. Instead the features sneaked quietly into the last upgrade and appeared in the documentation.

Perfect Balance – If you use Hadoop as often as I do, you know how data skew can mess with query performance. You run a job with several reducers, each aggregates data for a subset of keys. Unless you took great care in partitioning your data, the data will not be evenly distributed between the reducers, usually because it wasn’t evenly distributed between the keys. As a result, you will spend 50% of the time waiting for that one last reducer to finish already.

Oracle’s Perfect Balance makes the “took great care in partitioning your data” part much, much easier. This blog post is just a quick overview, not an in-depth post, so I won’t go into details of how this works (wait for my next post on this topic!). I’ll just mention that Perfect Balance can be used without any change to the application code, so if you are using the BDA, there is no excuse not to use it.

And no excuse to play solitaire while waiting for the last reducer to finish.

Oracle XQuery for Hadoop – Announced but not officially released yet, which is why I’m pointing you at an Amis blog post. For now that’s the best source of information about this feature. This feature, combined with the existing Oracle Loader for Hadoop, will allow running XQuery operations on XMLs stored in Hadoop, pushing down the entire data processing bit to Map Reduce on the Hadoop cluster. Anyone who knows how slow, painful and CPU intensive XML processing can be on an Oracle database server will appreciate this feature. I wish I had it a year ago when I had to ingest XMLs at a very high rate. It is also so cool that I’m a bit sorry that we never developed more awesome XQuery capabilities for Hive and Impala. Can’t wait for the release so I can try that!

During OpenWorld there was also additional exposure for existing, but perhaps not very well known Oracle Big Data features – Hadoop for ODI, Hadoop for OBIEE and using GoldenGate with Hadoop. I’ll try to write more about those soon.

Meanwhile, let me know what you think of In-Memory, Perfect Balance and OXH.


Categories: DBA Blogs

Install APEX’s Sample Packaged Application by Importing its SQL Script

Ittichai Chammavanijakul - Wed, 2013-10-02 15:11

If for some reason, you could not install the APEX sample packaged application via Application Builder > Packaged Applications interface, you have an option of installing it by importing the SQL script (fxxxx.sql).

In my case, when installing via the Packaged Applications interface, I got the following error:

[screenshot of the error from the Packaged Applications installer]

While working with the DBA and Oracle Support to resolve the root cause of this issue, I’ve found that the installation scripts (like export files) of the packaged applications come with the APEX installation files under apex/core/packaged_apps.

ICHAMMA1:packaged_apps$ pwd
/Users/ichamma1/Downloads/apex/core/packaged_apps

ICHAMMA1:packaged_apps$ grep "prompt  APPLICATION" *.sql
f7000.sql:prompt  APPLICATION 7000 - Online Marketing Campaign Calendar
f7010.sql:prompt  APPLICATION 7010 - Decision Manager***
f7020.sql:prompt  APPLICATION 7020 - Asset Manager*
f7050.sql:prompt  APPLICATION 7050 - Opportunity Tracker ***
f7060.sql:prompt  APPLICATION 7060 - Bug Tracking***
f7090.sql:prompt  APPLICATION 7090 - Group Calendar ***
f7100.sql:prompt  APPLICATION 7100 - Artwork Catalog***
f7120.sql:prompt  APPLICATION 7120 - Expertise Tracker***
f7130.sql:prompt  APPLICATION 7130 - Community Requests ***
f7140.sql:prompt  APPLICATION 7140 - Incident Tracking***
f7150.sql:prompt  APPLICATION 7150 - Systems Catalog***
f7170.sql:prompt  APPLICATION 7170 - Customer Tracker***
f7190.sql:prompt  APPLICATION 7190 - Issue Tracker***
f7220.sql:prompt  APPLICATION 7220 - P-Track***
f7230.sql:prompt  APPLICATION 7230 - Data Model Repository Viewer*
f7240.sql:prompt  APPLICATION 7240 - Checklist Manager***
f7250.sql:prompt  APPLICATION 7250 - Data Reporter***
f7270.sql:prompt  APPLICATION 7270 - APEX Application Archive***
f7280.sql:prompt  APPLICATION 7280 - Survey Builder ***
f7290.sql:prompt  APPLICATION 7290 - Meeting Minutes***
f7300.sql:prompt  APPLICATION 7300 - Use Case Status***
f7600.sql:prompt  APPLICATION 7600 - Sample Access Control*
f7610.sql:prompt  APPLICATION 7610 - Sample Build Options*
f7650.sql:prompt  APPLICATION 7650 - Go Live Check List***
f7800.sql:prompt  APPLICATION 7800 - Brookstrut Sample Application ***
f7810.sql:prompt  APPLICATION 7810 - Sample Reporting***
f7820.sql:prompt  APPLICATION 7820 - Sample Calendar***
f7830.sql:prompt  APPLICATION 7830 - Sample Charts***
f7840.sql:prompt  APPLICATION 7840 - Sample Dynamic Actions***
f7850.sql:prompt  APPLICATION 7850 - Sample Data Loading***
f7860.sql:prompt  APPLICATION 7860 - Sample Master Detail***
f7870.sql:prompt  APPLICATION 7870 - Sample Forms and Grid Layout***
f7880.sql:prompt  APPLICATION 7880 - Sample Search***
f7890.sql:prompt  APPLICATION 7890 - Feedback ***
f7900.sql:prompt  APPLICATION 7900 - Sample Dialog***
f7910.sql:prompt  APPLICATION 7910 - Sample Trees***
f7920.sql:prompt  APPLICATION 7920 - Sample Lists***
f7930.sql:prompt  APPLICATION 7930 - Sample Wizards***
f7940.sql:prompt  APPLICATION 7940 - Sample Collections***
f7950.sql:prompt  APPLICATION 7950 - Sample Time Zones*
f7960.sql:prompt  APPLICATION 7960 - Sample File Upload and Download***
f7980.sql:prompt  APPLICATION 7980 - Sample RESTful Services***
f8950.sql:prompt  APPLICATION 8950 - Sample Database Application

The installation is then as simple as importing this script file (Application Builder > Import) or running it from SQL*Plus (with the proper security setup).
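
If you go the SQL*Plus route, the security setup amounts to pointing the APEX_APPLICATION_INSTALL API at a target workspace and schema before running the script; a rough sketch (the workspace, schema and application ID are placeholders) could look like this:

begin
  apex_application_install.set_workspace_id( apex_util.find_security_group_id('MY_WORKSPACE') );
  apex_application_install.set_application_id( 7800 );
  apex_application_install.generate_offset;
  apex_application_install.set_schema('MY_SCHEMA');
end;
/

@f7800.sql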

Categories: DBA Blogs

My Oracle OpenWorld 2013 Presentations

Chen Shapira - Mon, 2013-09-30 16:47

Oracle OpenWorld was fantastic, as usual. The best show in San Francisco. This is the seventh year in a row that I’m attending – 3 times as an HP employee, 3 times as a Pythian employee, and now as a Clouderan. My life changes, but the event and people are always fantastic.

There will be a separate blog post about what I learned at the event, exciting new products and my thoughts on them. But first, let me follow up on what I taught.

On Sunday afternoon, and then again on Thursday afternoon, I presented “Data Wrangling with Oracle Connectors for Hadoop”. I presented it twice because both Oracle and IOUG liked my abstract. I was surprised to discover that both audiences had no idea what “Data Wrangling” is! I appreciate the attendees, they trusted me enough to attend without even being sure what I’m planning to talk about. In both sessions I had people come up with excellent questions, mentioning that they are current or future Cloudera customers. I absolutely loved it, what a great opportunity to connect with Hadoopers from all industries.

You can find the slides here: Data Wrangling with Oracle Connectors for Hadoop

On Monday, at OakTable World, I presented ETL on Hadoop. I presented it at Surge earlier this year, but this time I think I misjudged the fit of the content to the audience – I gave pretty technical tips of how to implement ETL on Hadoop to an audience with very little experience with Hadoop. They were smart people and mostly followed along, but I should have kept my content to more introductory level.

You can find the slides here: Scaling ETL with Hadoop

On Wednesday, I was fortunate to present with my former colleague Marc Fielding on SSDs and their use in Exadata. The topic is not very Hadoop related, but I love SSDs regardless and presenting with Marc was fun and the audience was highly engaged. I did get a lot of questions on SSDs and Hadoop, so I’ll consider writing about the topic in the future.

Marc has the latest version of the slides, but you can find an approximation here: Databases in a Solid State World.

Thanks again to everyone who attended, to all the customers who stopped to say hello and to everyone who was friendly and made the event fun. I hope to see you again next year.


Categories: DBA Blogs
