DBA Blogs

OCP 12C – SQL Enhancements

DBA Scripts and Articles - Thu, 2014-10-23 09:20

Extended Character Data Type Columns: In this release Oracle changed the maximum size of three data types. In Oracle 12c, if you set a VARCHAR2 to 4000 bytes or less it is stored inline; if you set it to more than 4000 bytes it is transformed into an extended character data type and stored out … Continue reading OCP 12C – SQL Enhancements
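
The excerpt stops before the setup, so here is a minimal sketch (not from the post) of enabling extended data types on a simple non-CDB 12c database. The parameter change is one-way, so test it first and check the documentation for PDB/RAC specifics:

sqlplus / as sysdba <<'EOF'
shutdown immediate
startup upgrade
alter system set max_string_size=extended;
@?/rdbms/admin/utl32k.sql
shutdown immediate
startup
create table t (big_col varchar2(32767));  -- over 4000 bytes, stored out of line
EOF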

The post OCP 12C – SQL Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Configure Oracle Exadata Write Back Flash Cache

VitalSoftTech - Thu, 2014-10-23 00:37
In addition to improving read I/Os, Oracle Exadata Write back flash cache also provides the ability to cache write I/Os directly to PCI flash. Exadata storage software version 11.2.3.2.1 is the minimum version required to use write back flash cache. Grid infrastructure and database homes must run 11.2.0.3.9 or later to use with Write-back Smart […]
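
The excerpt stops short of the how-to, so here is a rough sketch of the commonly documented per-cell sequence (run on a storage cell; verify against the current Exadata documentation and MOS before touching a real system):

cellcli -e "drop flashcache"
cellcli -e "alter cell shutdown services cellsrv"
cellcli -e "alter cell flashCacheMode=writeback"
cellcli -e "alter cell startup services cellsrv"
cellcli -e "create flashcache all"
cellcli -e "list cell attributes name,flashCacheMode"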
Categories: DBA Blogs

ORA-16534 When Converting to/from Snapshot Standby with DataGuard Broker

Don Seiler - Wed, 2014-10-22 23:00
We here at Seilerwerks Industries (not really) have been using snapshot standby databases to refresh an array of unit test databases from a common primary. During the business day these are converted to snapshot standby databases for testing; overnight they are converted back to physical standby and recovered up to the master again.

However we ran into one problem the other week. I noticed that the test3 database was still in physical standby mode well into the business day. Trying to manually convert returned this error:

DGMGRL> convert database test3 to snapshot standby
Converting database "test3" to a Snapshot Standby database, please wait...
Error:
ORA-16534: switchover, failover or convert operation in progress
ORA-06512: at "SYS.DBMS_DRS", line 157
ORA-06512: at line 1

A quick search of MOS yielded bug 13716797 (ORA-16534 from the broker when setting apply-off), which simply suggested restarting the problem database when encountering that error. However, doing so did not get me any further. That's when I checked the Data Guard Broker configuration:

DGMGRL> show configuration;

Configuration - testdb

  Protection Mode: MaxPerformance
  Databases:
    test1 - Primary database
    test5 - Physical standby database
    test6 - Snapshot standby database
    test3 - Physical standby database
    test4 - Snapshot standby database

Fast-Start Failover: DISABLED

Configuration Status:
ORA-16610: command "CONVERT DATABASE test6" in progress
DGM-17017: unable to determine configuration status

Looks like I have two databases stuck in physical standby mode, test3 and also test6. And the configuration is specifically complaining about test6. So I restarted that database and, sure enough, I was then able to convert both back to snapshots:

DGMGRL> show configuration;

Configuration - testdb

  Protection Mode: MaxPerformance
  Databases:
    test1 - Primary database
    test5 - Snapshot standby database
    test6 - Snapshot standby database
    test3 - Snapshot standby database
    test4 - Snapshot standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
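
For reference, the fix amounted to bouncing the stuck member and re-running the conversions. Roughly (a sketch only; the database management commands and credentials are placeholders for whatever fits your environment):

srvctl stop database -d test6
srvctl start database -d test6 -o mount

dgmgrl sys/yourpassword@test1 <<'EOF'
convert database test6 to snapshot standby;
convert database test3 to snapshot standby;
show configuration;
EOF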

It was very interesting to me to see one member of the Data Guard configuration prevent me from performing an operation on a different member. Hopefully this helps one of you in the future.

Categories: DBA Blogs

OCP 12C – DataPump, SQL*Loader, External Tables Enhancements

DBA Scripts and Articles - Wed, 2014-10-22 14:57

Oracle DataPump Enhancements: Full Transportable Export and Import of Data. In Oracle 12c you now have the possibility to create full transportable exports and imports. A full transportable export contains all objects and data needed to create a copy of the database. To create a fully transportable export of your database you need to specify … Continue reading OCP 12C – DataPump, SQL*Loader, External Tables Enhancements
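
The excerpt is cut off before the syntax, but the export side looks roughly like this (a sketch; the directory object and file names are assumptions, the user-created tablespaces must be read-only first, and VERSION=12 matters only when exporting from an 11.2.0.3/11.2.0.4 source):

expdp system/yourpassword full=y transportable=always version=12 \
      directory=DATA_PUMP_DIR dumpfile=full_tts.dmp logfile=full_tts_exp.log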

The post OCP 12C – DataPump, SQL*Loader, External Tables Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Learning Spark Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia; O'Reilly Media

Surachart Opun - Sat, 2014-10-18 13:45
Apache Spark started as a research project at UC Berkeley in the AMPLab, which focuses on big data analytics. Spark is an open source cluster computing platform designed to be fast and general-purpose for data analytics - it's both fast to run and fast to write. Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, much quicker than with disk-based systems like Hadoop MapReduce. Users can write applications quickly in Java, Scala or Python. In addition, it's easy to run standalone or on EC2 or Mesos. It can read data from HDFS, HBase, Cassandra, and any Hadoop data source.
If you would like a book about Spark, have a look at Learning Spark Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, and Matei Zaharia. It's a great book for anyone who is interested in Spark development and getting started with it. Readers will learn how to express MapReduce jobs with just a few simple lines of Spark code, and more...
  • Quickly dive into Spark capabilities such as collect, count, reduce, and save
  • Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
  • Learn how to run interactive, iterative, and incremental analyses
  • Integrate with Scala to manipulate distributed datasets like local collections
  • Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization
  • Use other languages by means of pipe() to achieve the equivalent of Hadoop streaming
The Early Release has 7 chapters. It covers an Apache Spark overview, downloading Spark and the commands you should know, programming with RDDs (plus more advanced material), and working with key-value pairs, etc. It is easy to read, with good examples. For people who want to learn Apache Spark or use Spark for data analytics, it's a book worth keeping on the shelf.

Book: Learning Spark Lightning-Fast Big Data Analytics
Authors: Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia
Categories: DBA Blogs

Advanced Sessions for Oracle Open World Conference 2015

VitalSoftTech - Fri, 2014-10-10 14:44
As database gurus and fanatics of the world, as well as the hotels and cabbies of San Francisco, know, one of the largest events is the Oracle Open World conference ..
Categories: DBA Blogs

Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan; O'Reilly Media

Surachart Opun - Thu, 2014-10-09 03:37
Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. But how do you deliver logs to Hadoop HDFS? Apache Flume is open source, integrates with HDFS and HBase, and is a good choice for real-time collection of log data from front-end or logging systems.
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It uses a simple data model: Source => Channel => Sink
It's a good time to introduce a good book about Flume - Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan (@harisr1234). It has 8 chapters: the basics of Apache Hadoop and Apache HBase, the idea of streaming data using Apache Flume, the Flume model (sources, channels, sinks), more on interceptors, channel selectors, sink groups, and sink processors, plus getting data into Flume and planning, deploying, and monitoring Flume.

This book explains how to use Flume. It gives a good grounding in Apache Hadoop and Apache HBase before starting on the Flume data flow model. Readers should know some Java, because they will find Java code examples in the book that are easy to understand. It's a good book for anyone who wants to deploy Apache Flume and custom components.
The author devotes separate chapters to the parts of the Flume data flow model, so readers can pick the chapter for the part they care about: a reader who wants to know about sinks can read only Chapter 5 and get the idea. In addition, Flume has a lot of features, and readers will find examples of them in the book. Each chapter has a references section that readers can use to find out more, which is especially quick and easy to follow in the ebook.
The illustrations in the book help readers see the big picture of using Flume and give ideas for developing it further in each system or project.
Readers will learn about operations: how to configure, deploy, and monitor a Flume cluster, and how to customize the examples to develop Flume plugins and custom components for their specific use cases.
  • Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
  • Dive into key Flume components, including sources that accept data and sinks that write and deliver it
  • Write custom plugins to customize the way Flume receives, modifies, formats, and writes data
  • Explore APIs for sending data to Flume agents from your own applications
  • Plan and deploy Flume in a scalable and flexible way—and monitor your cluster once it’s running
Book: Using Flume - Flexible, Scalable, and Reliable Data Streaming
Author: Hari Shreedharan
Categories: DBA Blogs

11 Tips To Get Your Conference Abstract Accepted

Ready for some fun!? It's that time of year again and the competition will be intense. The "call for abstracts" for a number of Oracle Database conferences is about to close.

The focus of this posting is how you can get a conference abstract accepted.

As a mentor, Track Manager and active conference speaker I've been helping DBAs get their abstracts accepted for many years. If you follow my 11 tips below, I'm willing to bet you will get a free pass to any conference you wish in any part of the world.

1. No Surprises! 
The Track Manager wants no surprises, great content and a great presentation. Believe me when I say, they are looking for ways to reduce the risk of a botched presentation, a cancellation or a no-show. Your abstract submission is your first way to show you are serious and will help make the track incredibly awesome.

Tip: In all your conference communications, demonstrate a commitment to follow through.

2. Creative Title.
The first thing everyone sees is the title. I can personally tell you, if the title does not pique my curiosity without sounding stupid, then unless I know the speaker is popular I will not read the abstract. Why do I do this? Because as a Track Manager, I know conference attendees will do the same thing! And as a Track Manager, I want attendees to want to attend sessions in my track.

Tip: Find two people, read the title to them and ask what they think. If they say something like, "What are you going to talk about?" that's bad. Rework the title.

3. Tell A Story
The abstract must tell a compelling story. Oracle conferences are not academic conferences! There needs to be some problem along with a solution complete with drama woven into the story.

Tip: People forget bullet points, but they never forget a good story.

4. Easy To Read
The abstract must be easy to review. The abstract reviewers may have over a hundred abstracts to review. Make it a good quick read for the reviewers and your chances increase.

Tip: Have your computer read your abstract back to you. If you don't say, "Wow!" rework the abstract. 

5. Be A Grown-Up
You can increase the perception that you will physically show up and put on a great show at the conference by NOT putting emoji, bullet points, or your name and title into your abstract, and by not pushing a product or service. NEVER copy/paste from a PowerPoint outline into the abstract or outline. (I've seen people do this!)

Tip: Track Managers do not want to babysit you. They want an adult who will help make their track great.

6. Submit Introductory Level Abstracts
I finally figured this out a couple years ago. Not everyone is ready for a detailed understanding of cache buffer chain architecture, diagnosis, and solution development. Think of it from a business perspective. Your market (audience) will be larger if your presentation is less technical. If this bothers you, read my next point.

Tip: Submit both an introductory level version and advanced level version of your topic.

7. Topics Must Be Filled
Not even the Track Manager knows what people will submit. And you do not know what the Track Manager is looking for. And you do not know what other people are submitting. Mash this together and it means you must submit more than one abstract. I know you really, really want to present on topic X. But would you rather not have an abstract accepted?

Tip: Submit abstracts on multiple topics. It increases your chances of being accepted.

8. Submit Abstract To Multiple Tracks
This is similar to submitting both an introductory and an advanced version of your abstract. Here's an example: if there is a DBA Bootcamp track and a Performance & Internals track, craft the Bootcamp version of your abstract to have a more foundational/core feel to it, and craft your Performance & Internals version to feel more technical and advanced.

Do not simply change the title; the two abstracts cannot be the same. If the conference managers or the Track Manager feel you are trying to game the conference, you present a risk to the conference and their track, and your abstracts will be rejected. So be careful and thoughtful.

Tip: Look for ways to adjust your topic to fit into multiple tracks.

9. Great Outline Shows Commitment
If the reviewers have read your title and abstract, they are taking your abstract seriously. Now is the time to close the deal by demonstrating you will put on a great show. And this means you already have in mind an organized and well thought out delivery. You convey this with a fantastic outline. I know it is difficult to create an outline BUT the reviewers also know this AND having a solid outline demonstrates to them you are serious, you will show up, and put on a great show.

Tip: Develop your abstract and outline together. This strengthens both and develops a kind of package the reviewers like to see.

10. Learning Objectives Show Value
You show the obvious value of your topic through the learning objectives. Personally, I use these to help keep me focused on my listener, not just on what I'm interested in at the moment. Because I love my work, I tend to think everyone else does too... not so. I must force myself to answer the question, "Why would a DBA care about this topic?"

Tip: Develop your learning objectives by asking yourself, "When my presentation is over, what do I want the attendees to remember?"

11. Submit About Problems You Solved
Submit on the topics you have personally explored and found fascinating. Every year, every DBA has had to drill deep into at least one problem. This concentrated effort means you know the topic very well. And this means you are qualified to tell others about it! People love to hear from people who are fascinated about something. Spread the good news resulting from a "bad" experience.

Tip: Submit on topics you have explored and are fascinated with.

How Many Abstracts Should I Submit?
It depends on the conference, but for big North American conferences like ODTUG, RMOUG and IOUG I suggest at least four.

Based on what I wrote above, pick three topics, perhaps create both an introductory and an advanced version, and look to see if it makes sense to submit to multiple tracks. That means you'll probably submit at least four abstracts. It's not as bad as it sounds, because you will only have perhaps three core abstracts. All the others are modifications to fit a specific need. Believe me, when you receive the acceptance email it will all be worth it!

See you at the conference!

Craig.


Categories: DBA Blogs

Comparing SQL Execution Times From Different Systems

Suppose it's your job to identify SQL that may run slower in the about-to-be-upgraded Oracle Database. It's tricky because no two systems are alike. Just because the SQL run time is faster in the test environment doesn't mean the decision to upgrade is a good one. In fact, it could be disastrous.

For example: if a SQL statement runs 10 seconds in production and 20 seconds in QAT, but the production system is twice as fast as QAT, is that a problem? It's difficult to compare SQL run times when the same SQL resides in different environments.

In this posting, I present a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

And, there is a cool, free, downloadable tool involved!

Why SQL Can Run Slower In Different Environments
There are a number of reasons why a SQL's run time is different in different systems. An obvious reason is a different execution plan. A less obvious and much more complex reason is a workload intensity or type difference. In this posting, I will focus on CPU speed differences. Actually, what I'll show you is how to remove the CPU speed differences so you can appropriately compare two SQL statements. It's pretty cool.

The Mental Gymnastics
If a SQL statement's elapsed time in production is 10 seconds and 20 seconds in QAT, that’s NOT an issue IF the production system is twice as fast.

If this makes sense to you, then what you did was mentally adjust one of the systems so it could be appropriately compared. This is how I did it:

10 seconds in production * production is 2 times as fast as QA  = 20 seconds 

And in QA the SQL ran in 20 seconds… so really they ran "the same" in both environments. If I am considering placing the SQL from the test environment into the production environment, then this scenario does not raise any risk flags. The "trick" is determining "production is 2 times as fast as QA" and then creatively using that information.

Determining The "Speed Value"

Fortunately, there are many ways to determine a system's "speed value." Basing the speed value on Oracle's ability to process buffers in memory has many advantages: a real load is not required or even desired, real Oracle code is being run at a particular version, real operating systems are being run and the processing of an Oracle buffer highly correlates with CPU consumption.

Keep in mind, this type of CPU speed test is not an indicator of scalability (the benefit of adding additional CPUs) in any way, shape or form. It is simply a measure of brute-force Oracle buffer cache logical IO processing speed, based on a number of factors. If you are architecting a system, other tests will be required.

As you might expect, I have a free tool you can download to determine the "true speed" rating. I recently updated it to be more accurate, require fewer Oracle privileges, and also show the execution plan of the speed test tool SQL. (A special thanks to Steve for the execution plan enhancement!) If the execution plan used in the speed tool is different on the various systems, then obviously we can't expect the "true speeds" to be comparable.

You can download the tool HERE.

How To Analyze The Risk

Before we can analyze the risk, we need the "speed value" for both systems. Suppose a faster system means its speed rating is larger. If the production system speed rating is 600 and the QAT system speed rating is 300, then production is deemed "twice as fast."

Now let's put this all together and quickly go through three examples.

This is the core math:

standardized elapsed time = sql elapsed time * system speed value

So if the SQL elapsed time is 25 seconds and the system speed value is 200, then the standardized "apples-to-apples" elapsed time is 5000 which is 25*200. The "standardized elapsed time" is simply a way to compare SQL elapsed times, not what users will feel and not the true SQL elapsed time.
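
Restated as a tiny script (a toy sketch only; the numbers are the scenario-1 figures used below):

std_elapsed() { echo $(( $1 * $2 )); }   # elapsed_seconds * system_speed_value

qat=$(std_elapsed 20 300)   # 6000
prd=$(std_elapsed 10 600)   # 6000
if [ "$qat" -gt "$prd" ]; then
  echo "QAT SQL is truly slower than PRD - flag it as an upgrade risk"
else
  echo "The QAT slowness is explained by the slower system - no extra risk flagged"
fi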

To make this a little more interesting, I'll quickly go through three scenarios focusing on identifying risk.

1. The SQL truly runs the same in both systems.

Here is the math:

QAT standardized elapsed time = 20 seconds X 300 = 6000 seconds

PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds

In this scenario, the true speed situation is, QAT = PRD. This means, the SQL effectively runs just as fast in QAT as in production. If someone says the SQL is running slower in QAT and therefore this presents a risk to the upgrade, you can confidently say it's because the PRD system is twice as fast! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.

2. The SQL runs faster in production.

Now suppose the SQL runs for 30 seconds in QAT and for 10 seconds in PRD. Someone might say, "Well of course it runs slower in QAT, because QAT is slower than the PRD system." Really? Everything is OK? Again, to make a fair comparison, we must compare the systems using a standardizing metric, which I have been calling the "standardized elapsed time."

Here are the scenario numbers:

QAT standardized elapsed time = 30 seconds X 300 = 9000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds

In this scenario, the QAT standardized elapsed time is greater than the PRD standardized elapsed time. This means the QAT SQL is truly running slower in QAT compared to PRD. Specifically, the slower SQL in QAT cannot be fully explained by the slower QAT system. Said another way, while we expect the SQL in QAT to run slower than in the PRD system, we didn't expect it to be quite so slow in QAT. There must be another reason for this slowness, which we are not accounting for. In this scenario, the QAT SQL should be flagged as presenting a significant risk when upgrading from QAT to PRD.

3. The SQL runs faster in QAT.

In this final scenario, the SQL runs for 15 seconds in QAT and for 10 seconds in PRD. Suppose someone was to say, "Well of course the SQL runs slower in QAT. So everything is OK." Really? Everything is OK? To get a better understanding of the true situation, we need to look at their standardized elapsed times.

QAT standardized elapsed time = 15 seconds X 300 = 4500 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds 

In this scenario, the QAT standardized elapsed time is less than the PRD standardized elapsed time. This means the QAT SQL is actually running faster in QAT, even though the QAT wall time is 15 seconds and the PRD wall time is only 10 seconds. So while most people would flag this QAT SQL as "high risk," we know better! We know the QAT SQL is actually running faster in QAT than in production! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.

In Summary...

Identifying risk is extremely important when planning for an upgrade. It is unlikely the QAT and production systems will be identical in every way. This mismatch makes identifying risk more difficult. One of the common differences between systems is their CPU processing speed. What I demonstrated was a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

What's Next?

Looking at the "standardized elapsed time" based on Oracle LIO processing is important, but it's just one reason why a SQL may have a different elapsed time in a different environment. One of the big "gotchas" in load testing is comparing production performance to a QAT environment with a different workload. Creating an equivalent workload on different systems is extremely difficult to do. But with some very cool math and a clear understanding of performance analysis, we can also create a more "apples-to-apples" comparison, just like we have done with CPU speeds. But I'll save that for another posting.

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs

rsyslog: Send logs to Flume

Surachart Opun - Mon, 2014-10-06 05:12
A good day for learning something new. After reading the Flume book, something popped into my head: I wanted to test "rsyslog" => Flume => HDFS. As we know, rsyslog can forward logs to other systems. The generic syntax is:
*.* @YOURSERVERADDRESS:YOURSERVERPORT ## for UDP
*.* @@YOURSERVERADDRESS:YOURSERVERPORT ## for TCP
On my rsyslog host:
[root@centos01 ~]# grep centos /etc/rsyslog.conf
*.* @centos01:7777
Coming back to Flume, I used the Simple Example for reference and changed it a bit, because I wanted it to write to HDFS.
[root@centos01 ~]# grep "^FLUME_AGENT_NAME\="  /etc/default/flume-agent
FLUME_AGENT_NAME=a1
[root@centos01 ~]# cat /etc/flume/conf/flume.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
#a1.sources.r1.type = netcat
a1.sources.r1.type = syslogudp
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 7777
# Describe the sink
#a1.sinks.k1.type = logger
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/syslog/%Y/%m/%d/%H/
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.filePrefix = syslog
a1.sinks.k1.hdfs.round = true


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@centos01 ~]# /etc/init.d/flume-agent start
Flume NG agent is not running                              [FAILED]
Starting Flume NG agent daemon (flume-agent):              [  OK  ]
Then I tested by logging in over ssh and watched the Flume log:
[root@centos01 ~]#  tail -0f  /var/log/flume/flume.log
06 Oct 2014 16:35:40,601 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://localhost:8020/user/flume/syslog/2014/10/06/16//syslog.1412588139067.tmp
06 Oct 2014 16:36:10,957 INFO  [hdfs-k1-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067.tmp to hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]# hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 flume supergroup        299 2014-10-06 16:36 hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]#
[root@centos01 ~]#
[root@centos01 ~]# hadoop fs -cat hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sshd[20235]: Accepted password for surachart from 192.168.111.16 port 65068 ssh2
sshd[20235]: pam_unix(sshd:session): session opened for user surachart by (uid=0)
su: pam_unix(su-l:session): session opened for user root by surachart(uid=500)
su: pam_unix(su-l:session): session closed for user root
Looks good... Anyway, it still needs some adaptation...
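
One adaptation worth considering (a sketch; the directories and owner are assumptions): swapping the memory channel for Flume's durable file channel so buffered events survive an agent restart.

mkdir -p /var/lib/flume/checkpoint /var/lib/flume/data
chown -R flume:flume /var/lib/flume      # adjust to whatever user runs the agent
# then replace the c1 memory-channel lines in /etc/flume/conf/flume.conf with:
#   a1.channels.c1.type = file
#   a1.channels.c1.checkpointDir = /var/lib/flume/checkpoint
#   a1.channels.c1.dataDirs = /var/lib/flume/data
/etc/init.d/flume-agent restart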



Categories: DBA Blogs

How to setup passwordless ssh in Exadata using dcli

Alejandro Vargas - Sun, 2014-10-05 03:57

Setting up a passwordless ssh root connection using dcli is fast and simple, and makes it easy to execute commands on all servers later using this utility.

In order to do that you should have either:

DNS resolution to all Database and Storage nodes OR have them registered in /etc/hosts

1) Create a parameter file that contains all the server names you want to reach via dcli. Typically we have a cell_group for the storage cells, a dbs_group for the database servers, and an all_group for both of them.

The parameter files contain only the server names, in short format.

For example, on an Exadata quarter rack, all_group will contain:

dbnode1
dbnode2
cell1
cell2
cell3

2) As root user create ssh equivalence:

ssh-keygen -t rsa

3) Distribute the key to all servers

dcli -g ./all_group -l root -k -s '-o StrictHostKeyChecking=no'

4) Check

dcli -g all_group -l root hostname 
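
Once equivalence is in place, the same group files let you run any command fleet-wide. A couple of illustrative (hypothetical) examples:

dcli -g cell_group -l root "cellcli -e list cell attributes name,status"
dcli -g dbs_group -l root "uptime"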

Categories: DBA Blogs

Bash security fix made available for Exadata

Alejandro Vargas - Sun, 2014-10-05 03:29

Complete information about the security fix availability should be reviewed in the following MOS document before applying the fix:

 Responses to common Exadata security scan findings (Doc ID 1405320.1)

The security fix is available for download from:

http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/x86_64/getPackage/bash-3.2-33.el5_11.4.x86_64.rpm

The summary installation instructions are as follows:

1) Download getPackage/bash-3.2-33.el5_11.4.x86_64.rpm

2) Copy bash-3.2-33.el5_11.4.x86_64.rpm into /tmp at both database and storage nodes.

3) Remove the rpm exadata-sun-computenode-exact:

rpm -e exadata-sun-computenode-exact

4) On compute nodes install bash-3.2-33.el5_11.4.x86_64.rpm using this command:

 rpm -Uvh /tmp/bash-3.2-33.el5_11.4.x86_64.rpm

5) On storage nodes  install bash-3.2-33.el5_11.4.x86_64.rpm using this command:

rpm -Uvh --nodeps /tmp/bash-3.2-33.el5_11.4.x86_64.rpm

6) Remove /tmp/bash-3.2-33.el5_11.4.x86_64.rpm from all nodes

As a side effect of applying this fix, during future upgrades on the database nodes a warning will appear stating:

The "exact package" was not found and it will use minimal instead.

That's a normal and expected message and will not interfere with the upgrade. 
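
A hypothetical shortcut, assuming root ssh equivalence and the dcli group files described in the earlier dcli post: push the RPM to every node and install it in one pass (review the per-node steps above first, since the compute and storage nodes differ).

dcli -g dbs_group -l root -f /tmp/bash-3.2-33.el5_11.4.x86_64.rpm -d /tmp
dcli -g dbs_group -l root "rpm -e exadata-sun-computenode-exact"
dcli -g dbs_group -l root "rpm -Uvh /tmp/bash-3.2-33.el5_11.4.x86_64.rpm"
dcli -g cell_group -l root -f /tmp/bash-3.2-33.el5_11.4.x86_64.rpm -d /tmp
dcli -g cell_group -l root "rpm -Uvh --nodeps /tmp/bash-3.2-33.el5_11.4.x86_64.rpm"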


Categories: DBA Blogs

Advanced Queue Quickie: Errors and Privileges

Don Seiler - Wed, 2014-10-01 23:07
File this one under the misleading-errors department. One of my developers was working with a new queue. He pinged me when he got this error trying to create a job that used the queue:

ERROR at line 1:
ORA-27373: unknown or illegal event source queue
ORA-06512: at "SYS.DBMS_ISCHED", line 124
ORA-06512: at "SYS.DBMS_SCHEDULER", line 314
ORA-06512: at line 2

The CREATE_JOB statement was:

BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'foo.bar_q_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin foo.bar_pkg.consume_bar_queue(); end;',
queue_spec => 'BAR.BAR_Q, FOO_BAR_AGENT',
enabled => true,
comments => 'This is a job to consume the bar.bar_q entries that affect foo.');
END;
/

After a few minutes of banging our heads, it became obvious that this was a permissions problem. The queue was owned by BAR; the job was being created as FOO. The ORA error message could/should have made this more obvious, in my opinion.

Anyway, the fix was simply to grant access to FOO:

BEGIN
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
        privilege  => 'ALL',
        queue_name => 'bar.bar_q',
        grantee    => 'foo' );
END;
/

Hope this saves some banged heads for others.
Categories: DBA Blogs

Packt Publishing - ALL eBooks and Videos are just $10 each or less until the 2nd of October

Surachart Opun - Tue, 2014-09-30 11:36
Just spreading the word about a good campaign from Packt Publishing - it's good news for people who love to learn something new. ALL eBooks and Videos are just $10 each or less -- the more you choose to learn, the more you save:
  • Any 1 or 2 eBooks/Videos -- $10 each
  • Any 3-5 eBooks/Videos -- $8 each
  • Any 6 or more eBooks/Videos -- $6 each


Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator - DBA Blogs