
DBA Blogs

4 Things Every CMO Should Do Before Approaching IT About Big Data

Pythian Group - Tue, 2014-04-08 10:42

Read the full article, 4 Things Every CMO Should Do Before Approaching IT About Big Data.

“You’ve likely heard the whispers (or shouts) about Big Data’s potential, how it’s the holy grail of marketing—and it can be. But to uncover this information and take action on it, marketing needs to partner closely with all departments, especially with IT,” says Samer Forzley, VP of Marketing at Pythian.

“IT can go off and develop as many Big Data initiatives as it wants, but without the necessary insights from the marketing team, those projects will never translate into sales. But if each plays to its strengths, with CMOs entering the Big Data conversation with logistics upfront, then IT’s structural knowhow can bring that solution to fruition.”

Categories: DBA Blogs

RMAN Infatuation?

Pythian Group - Tue, 2014-04-08 09:59

Lately, I am becoming infatuated with RMAN again.

Have you ever run “restore database preview”?

Are you curious about how the recovery SCNs are determined?

Media recovery start SCN is 1515046
Recovery must be done beyond SCN 1515051 to clear datafile fuzziness

If you are, then I will demonstrate this for you.

RMAN LEVEL0 backup and restore database preview summary:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
366     B  A  A DISK        07-APR-2014 14:11:32 1       1       YES        ARCHIVELOG
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:35 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
371     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
372     B  0  A DISK        07-APR-2014 14:11:52 1       1       YES        LEVEL0
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
374     B  F  A DISK        07-APR-2014 14:11:58 1       1       NO         TAG20140407T141156

RMAN> restore database preview summary;

Starting restore at 07-APR-2014 14:13:04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=107 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=23 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=108 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
371     B  0  A DISK        07-APR-2014 14:11:38 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:39 1       1       YES        LEVEL0
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
Media recovery start SCN is 1524017
Recovery must be done beyond SCN 1524025 to clear datafile fuzziness
Finished restore at 07-APR-2014 14:13:05

RMAN>

Query the database to determine the recovery SCNs:

ARROW:(SYS@db01):PRIMARY> r
  1  SELECT
  2    MIN(checkpoint_change#) start_scn,
  3    GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn
  4  FROM v$backup_datafile
  5  WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0)
  6*

 START_SCN BEYOND_SCN
---------- ----------
   1524017    1524025

ARROW:(SYS@db01):PRIMARY>

RMAN LEVEL0 and LEVEL1 backup and restore database preview summary:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
366     B  A  A DISK        07-APR-2014 14:11:32 1       1       YES        ARCHIVELOG
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:35 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
371     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
372     B  0  A DISK        07-APR-2014 14:11:52 1       1       YES        LEVEL0
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
374     B  F  A DISK        07-APR-2014 14:11:58 1       1       NO         TAG20140407T141156
375     B  A  A DISK        07-APR-2014 14:14:37 1       1       YES        ARCHIVELOG
376     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
377     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
378     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
379     B  1  A DISK        07-APR-2014 14:14:42 1       1       YES        LEVEL1
380     B  1  A DISK        07-APR-2014 14:14:42 1       1       YES        LEVEL1
381     B  1  A DISK        07-APR-2014 14:14:45 1       1       YES        LEVEL1
382     B  A  A DISK        07-APR-2014 14:14:47 1       1       YES        ARCHIVELOG
383     B  F  A DISK        07-APR-2014 14:14:51 1       1       NO         TAG20140407T141448

RMAN> restore database preview summary;

Starting restore at 07-APR-2014 14:15:59
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=107 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=23 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=108 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
371     B  0  A DISK        07-APR-2014 14:11:38 1       1       YES        LEVEL0
376     B  1  A DISK        07-APR-2014 14:14:39 1       1       YES        LEVEL1
370     B  0  A DISK        07-APR-2014 14:11:39 1       1       YES        LEVEL0
377     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
378     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
368     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
380     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
379     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
382     B  A  A DISK        07-APR-2014 14:14:47 1       1       YES        ARCHIVELOG
Media recovery start SCN is 1524335
Recovery must be done beyond SCN 1524339 to clear datafile fuzziness
Finished restore at 07-APR-2014 14:16:00

RMAN>

Query the database to determine the recovery SCNs:

ARROW:(SYS@db01):PRIMARY> r
  1  SELECT
  2    MIN(checkpoint_change#) start_scn,
  3    GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn
  4  FROM v$backup_datafile
  5  WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0)
  6*

 START_SCN BEYOND_SCN
---------- ----------
   1524335    1524339

ARROW:(SYS@db01):PRIMARY>

Why is all of this important?

It allows you to automate backup validation without having to actually run “restore database preview”.
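
To automate that check, here is a minimal SQL sketch (my own, not from the original demo) that verifies archived logs exist beyond the reported “beyond” SCN, so recovery would be able to clear datafile fuzziness; it assumes the archived logs are still listed in the control file:

SELECT COUNT(*) AS logs_past_fuzzy_scn
FROM   v$archived_log
WHERE  next_change# > 1524025;  -- substitute the BEYOND_SCN value returned above

A count of zero would mean the available archived logs do not yet cover the fuzzy range, so the backup could not be recovered to a consistent state from them alone.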

Tested on an 11.2.0.4 database.

Categories: DBA Blogs

Partner Webcast – Oracle SuperCluster Product Family: Technology Overview

When you’re under pressure to deliver more—more performance, more capacity, and more business value—you need systems that offer seamless integration. Oracle SuperCluster T5-8 and the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Monitor Oracle Golden Gate from SQL

DBASolved - Mon, 2014-04-07 08:41

One of my presentations at Collaborate 14 this year revolves around the many different ways there are to monitor Oracle Golden Gate. As I was putting the presentation together, I listed out the different ways of monitoring, and I have already covered a few of them in earlier posts. What I want to show you here is how to execute a simple “info all” command and see the results from SQL*Plus or SQL Developer using SQL.

First, a script (shell, Perl, etc.) needs to be written that captures the output of the “info all” command to a text file. In this case, I’m going to write the text file to /tmp since I’m on Linux.


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#Title: gg_monitor_sqldev.pl
#
use strict;
use warnings;

#Static Variables

my $gghome = "/oracle/app/product/12.1.2/oggcore_1";
my $outfile = "/tmp/gg_process_sqldev.txt";

#Program
my @buf = `$gghome/ggsci << EOF
info all
EOF`;

open (GGPROC, ">$outfile") or die "Unable to open file";
foreach (@buf)
{
    if (/MANAGER/ || /JAGENT/ || /EXTRACT/ || /REPLICAT/)
    {
        no warnings 'uninitialized';
        chomp;
        my ($program, $status, $group, $lagatchkpt, $timesincechkpt) = split(" ");

        if ($group eq "")
        {
            $group = $program;
        }
        if ($lagatchkpt eq "" || $timesincechkpt eq "")
        {
            $lagatchkpt = "00:00:00";
            $timesincechkpt = "00:00:00";
        }
        print GGPROC "$program|$status|$group|$lagatchkpt|$timesincechkpt\n";
    }
}
close (GGPROC);

Next, the text file needs to be exposed as a table that can be read by SQL*Plus or SQL Developer. External tables are great for this.


create directory TEMP as '/tmp';
grant read on directory TEMP to PUBLIC;

drop table ggate.os_process_mon;

create table ggate.os_process_mon
(
process char(15),
status char(15),
ggroup char(15),   -- "group" is a reserved word, so the column is named ggroup
lagatchk char(15),
timelastchk char(15)
)
organization external
(type oracle_loader
default directory TEMP
access parameters
(
RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
(
            process char(15),
            status char(15),
            ggroup char(15),
            lagatchk char(15),
            timelastchk char(15)
         )
    )
    location ('gg_process_sqldev.txt')
);

select * from ggate.os_process_mon;

Lastly, with these two pieces in place, I can now select the status from SQL*Plus or SQL Developer using SQL. Image 1 shows a sample from the testing environment I’m building.

Image 1: sample output of the query against ggate.os_process_mon
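
With the external table in place, a simple alerting-style query becomes possible. This is a hedged sketch of mine (not from the original post); it assumes the status values match what ggsci prints for “info all”, such as RUNNING, STOPPED, or ABENDED:

-- Flag any Golden Gate process that is not currently running.
SELECT process, ggroup, status
FROM   ggate.os_process_mon
WHERE  TRIM(status) <> 'RUNNING';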

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

COLLABORATE minus 3

Pythian Group - Mon, 2014-04-07 07:50

I always like to get to the location for a conference a day in advance so I can

  • Get accustomed to the time change
  • Get a feel for my way around the venue
  • Figure out where my room is
  • Establish a few landmarks so I do not wander aimlessly around the facility and hotel as though every voyage is a new life experience

COLLABORATE officially starts on Tuesday, though there are education sessions all day Monday facilitated by the three main groups responsible for the show – the IOUG, OAUG, and Quest International Users Group. So where did this animal called COLLABORATE come from, one may wonder?

Rewind to about 2004. The three above-mentioned user groups each had their own show. Each reached out to Oracle for logistics and education support, something that the vendor was (and still is) happy to give. It was starting to become obvious that the marketplace upheaval was having a dramatic effect on user group conference attendance. At the same time, Oracle expressed a desire to support fewer shows. You do the math – it only made sense. Why not have a 4-5 day mega conference and work with Oracle on many facets of support? Not only were the attendees of each show being asked to pick one or the other; Oracle was investing a massive number of personnel to support all three shows separately. It was a cumulative decision to amalgamate the shows, and we wondered where it all would start.

With the blessing of the IOUG board I made one of those very first phone calls to one or more people on the OAUG board, and the rest is history. I do not remember who I spoke to first, and there were probably a handful of feelers going out from other places in the IOUG infrastructure to OAUG bigwigs. I spoke to board member Donna Rosentrater (@DRosentrater) and we jammed on what could/should become a co-operative effort. We chatted a few times, and the interest amongst board members of the IOUG and OAUG reflected cautious optimism that we could pull it off. Each user group had its own revenue stream from separate shows. We needed to embark down a path that would not put these at risk. That is what the brunt of the negotiations centered on, and the work we did together led to the very first COLLABORATE at the Gaylord in Nashville in 2006.

Once the initial framework was established, it was time to turn the discussions over to the professionals. Both groups’ professional resources collaborated (hence the name maybe) and this mega/co-operative show became a reality. COLLABORATE 14 is the 9th show put on by Quest, OAUG, and IOUG. I am not going to say “this year’s show is going to be the best yet” as I believe that implicitly belittles previous successful events. Suffice to say, for what the user community needs from an information-sharing perspective – COLLABORATE is just what the doctor ordered.

Tomorrow’s a day off; I’ll wander aimlessly through Las Vegas, tempted by curios, shops, food emporiums, and just about every other possible temptation one could think of. Sunday starts with a helicopter trip to the Grand Canyon, and I went all out and forked over the extra $50 to sit in the convex bubble beside the pilot. There’s a bazillion vendors poised to whisk one away to the canyon with a flyover of the Hoover Dam there or on the way back. I chose Papillon and am looking forward to the excitement of the day, which starts at 5:10am with a shuttle to the site where the whirlybird takes off. Talk about taking one’s breath away.

Categories: DBA Blogs

ADF and alternate unique keys revisited

Today I'd like to share a quick (and dirty) trick to handle one nuisance of the well-known pattern for achieving uniqueness of "non-primary key" ADFBC entity attributes. If you ever needed to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

#Oracle University Expert Summit in London

The Oracle Instructor - Mon, 2014-04-07 06:09

Three days full of seminars are offered by Oracle University in London (19th to 21st May) at the Expert Summit:

Oracle University Expert Summit

It is my pleasure to present there together with Arup Nanda, Dan Hotka, Jonathan Lewis and my dear colleagues Iloon Ellen-Wolff and Joel Goodman.

One funny detail here: there was already another event (an Exadata Workshop) in Vienna on my schedule during that week – yes, I’m very busy these days. Now, in order to make it possible for me to present in London, the class in Vienna will be interrupted on Tuesday and continued on Wednesday :-)

A big “Thank You!” goes out to the attendees in Vienna who agreed with the one-day interruption to make that happen! Specifically, I’m going to talk about and demonstrate the 12c New Features of Data Guard in London.


Categories: DBA Blogs

Where Does v$osstat Get Its Data? Trustworthy?

OS Data Without An OS Prompt
Have you ever wanted to get OS data from within Oracle Database 12c/11g/10g without being at the OS prompt? Have you ever wondered where v$osstat gets its data?

I have! It's the kind of stuff I think about all the time. In fact, I was so fascinated by what I found, I included the details in my online seminar, Utilization On Steroids. It's that cool.

What Is So Great About v$osstat?
For starters, v$osstat provides an operating-system-level view of OS activity. In contrast, v$sysstat and v$sys_time_model provide performance data about a specific instance. Looking at v$osstat data is like looking at OS data from within Oracle. In other words, it is seeing OS data without an OS prompt.
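
As a quick illustration (my own sketch, not from the original post), here is the kind of data v$osstat exposes; the column names are as documented for 11g and 12c:

-- List the OS-level statistics Oracle collects, with their descriptions.
SELECT stat_name, value, comments
FROM   v$osstat
ORDER BY osstat_id;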

But is the data from v$osstat trustworthy? That's exactly what this posting is all about. To figure this out, I'm going to first look at where vmstat gets its data and then look at where v$osstat gets its data. If their data source is the same, then it is highly likely v$osstat is trustworthy. Make sense? Read on...

Where Does vmstat Get Its Data?
It's really simple to determine vmstat's data source. Just OS trace vmstat! On Linux I use the strace command. In the excerpt below, I did not change any lines. If I removed lines, I replaced them with a "...". Here is the output:

[oracle@sixcore local]$ strace vmstat 2 5
execve("/usr/bin/vmstat", ["vmstat", "2", "5"], [/* 39 vars */]) = 0
...
write(1, "procs -----------memory---------"..., 81procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
) = 81
write(1, " r b swpd free buff cach"..., 81 r b swpd free buff cache si so bi bo in cs us sy id wa st
) = 81
open("/proc/meminfo", O_RDONLY) = 3
lseek(3, 0, SEEK_SET) = 0
read(3, "MemTotal: 24729532 kB\nMemF"..., 2047) = 1198
open("/proc/stat", O_RDONLY) = 4
read(4, "cpu 570795858 11695 216600983 8"..., 65535) = 4643
open("/proc/vmstat", O_RDONLY) = 5
lseek(5, 0, SEEK_SET) = 0
read(5, "nr_free_pages 119649\nnr_inactive"..., 2047) = 2047
write(1, " 2 0 146388 478596 317080 22558"..., 84 2 0 146388 478596 317080 22558284 0 0 5 234 1 4 72 27 0 0 0
) = 84
...
nanosleep({2, 0}, 0x7fff0a9313b0) = 0
...
lseek(3, 0, SEEK_SET) = 0
read(3, "MemTotal: 24729532 kB\nMemF"..., 2047) = 1198
lseek(4, 0, SEEK_SET) = 0
read(4, "cpu 570796318 11695 216600996 8"..., 65535) = 4643
lseek(5, 0, SEEK_SET) = 0
read(5, "nr_free_pages 119672\nnr_inactive"..., 2047) = 2047
write(1, " 2 0 146388 478688 317080 22558"..., 84 2 0 146388 478688 317080 22558284 0 0 0 36 3781 2311 38 1 61 0 0
) = 84
...
nanosleep({2, 0},
...
exit_group(0) = ?
[oracle@sixcore local]$

There are two things I want to bring to your attention. First is the nanosleep call. If you look at the man page for nanosleep, you can see that vmstat is using the call for the two-second sleep between data outputs.

The second, and the most important for this posting, is that vmstat repeatedly references data in the /proc filesystem. I write about the /proc filesystem in my Oracle Performance Firefighting book and also explore it in my Utilization On Steroids online seminar. It's amazing. The bottom line is this: vmstat gets performance data from the /proc filesystem. Place that in your stack and read on.

Where Does v$osstat Get Its Data?
The Oracle Database background process mmnl is, at least in part, responsible for collecting the data that we see in v$osstat. So I am going to OS trace the mmnl process using strace and see what files it opens and reads!

First I need to get the OS process ID of the mmnl background process for my Oracle Database 12c instance.

[oracle@sixcore local]$ ps -eaf|grep mmnl
oracle 11521 11170 0 08:59 pts/1 00:00:00 grep mmnl
oracle 20109 1 0 Feb24 ? 00:18:47 ora_mmnl_prod35
oracle 60274 1 0 Jan13 ? 00:24:34 ora_mmnl_prod30

Got it! The PID for my Oracle Database 12c instance is 20109. Just like I did with vmstat, except that mmnl is already running, I'm going to OS trace it using strace. In the excerpt below, I did not change any lines. If I removed lines, I replaced them with a "...". Here is the output:

[oracle@sixcore local]$ strace -p 20109
Process 20109 attached - interrupt to quit
...
open("/proc/cpuinfo", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "processor\t: 0\nvendor_id\t: Genuin"..., 1024) = 1024
read(15, " size\t: 12288 KB\nphysical id\t: 0"..., 1024) = 1024
read(15, "gs\t\t: fpu vme de pse tsc msr pae"..., 1024) = 1024
read(15, "_tsc arch_perfmon pebs bts rep_g"..., 1024) = 1024
read(15, "2 popcnt aes lahf_lm ida arat ep"..., 1024) = 1024
read(15, "ical, 48 bits virtual\npower mana"..., 1024) = 41
read(15, "", 1024) = 0
open("/proc/mounts", O_RDONLY) = 16
fstat(16, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bc000
read(16, "rootfs / rootfs rw 0 0\nproc /pro"..., 1024) = 886
read(16, "", 1024) = 0
close(16) = 0
munmap(0x7f68548bc000, 4096) = 0
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/loadavg", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "2.11 2.07 1.73 5/302 11525\n", 1024) = 27
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/stat", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "cpu 571012017 11695 216602922 8"..., 1024) = 1024
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/meminfo", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "MemTotal: 24729532 kB\nMemF"..., 1024) = 1024
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/vmstat", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "nr_free_pages 119496\nnr_inactive"..., 1024) = 1024
close(15)
...
semtimedop(5341187, {{24, -1, 0}}, 1, {1, 0}^C
Process 20109 detached
[oracle@sixcore local]$

Just in the few lines above you can see that the mmnl background process opens and reads data from /proc/cpuinfo, /proc/mounts, /proc/loadavg, /proc/stat, /proc/meminfo, and /proc/vmstat. While a bit scary if you are violating your Oracle Corporation licensing agreement, this is truly fascinating!

While my demonstration does not prove this and I have never seen or heard anything to the contrary, it does appear the source of OS data for v$osstat is the /proc filesystem. And certainly, the Oracle Database background process mmnl repeatedly references the /proc filesystem.

Therefore, if the source of OS data is the same for both vmstat and v$osstat, I will trust the data from v$osstat unless there is a really good reason to not trust the data.

What Can I Do With v$osstat?
That's a great question, but not the focus of this posting. Just to give you some ideas, we can easily determine OS CPU utilization solely with v$osstat, an AWR report, or a Statspack report. In fact, next week I'm going to give a 30-minute Quick Tip at the 2014 IOUG/Collaborate conference entitled Calculating OS CPU Utilization From V$ Views, AWR, and Statspack. OK... here is the link where you can download the slide deck for the most recent version of the conference presentation.
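
As a taste of that, here is a minimal sketch of mine for computing average OS CPU utilization from v$osstat, assuming (as documented) that BUSY_TIME and IDLE_TIME are cumulative hundredths of a second since instance startup:

-- Average OS CPU utilization since instance startup.
SELECT ROUND(100 * busy.value / (busy.value + idle.value), 2) AS os_cpu_util_pct
FROM   v$osstat busy,
       v$osstat idle
WHERE  busy.stat_name = 'BUSY_TIME'
AND    idle.stat_name = 'IDLE_TIME';

Because the values are cumulative, a utilization figure for a specific interval requires sampling the view twice and working with the deltas, which is essentially what AWR and Statspack do for you.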

Utilization is a fascinating topic. It is far-reaching and touches on all things computing and even beyond. And as you might expect, I really get into this in my online seminar entitled Utilization On Steroids. It's the simplest concepts that are the most profound.

Thanks for reading!

Craig.

If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. Go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.
https://resources.orapub.com/OraPub_Online_Seminars_About_Oracle_Database_Performance_s/100.htm

Categories: DBA Blogs

VIDEO: The Evolution of IT Outsourcing

Pythian Group - Fri, 2014-04-04 07:46

“For us to understand what’s going on in the industry at large, it helps for us to have a deeper understanding of the history of the outsourcing industry.” Pythian founder, Paul Vallée shares his insights on the history and evolution of IT outsourcing.

Pythian has developed 5 criteria for choosing a data management outsourcing partner. Download the white paper here.

Categories: DBA Blogs

Log Buffer #366, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-04-04 07:43

While the Oracle blogosphere is buzzing with Collaborate news and views, SQL Server and MySQL bloggers are also getting upbeat about their respective fields and producing gems of blog posts. This Log Buffer Edition covers it all and more.

Oracle:

Run Virtual Machines with Oracle VM.

Last Call to Submit to the JavaOne Java EE Track.

UX – No Time To Kill.

Updated: Price Comparison for Big Data Appliance and Hadoop.

PLSQL_OPTIMIZE_LEVEL parameter and optimizing of the PL/SQL code.

SQL Server:

SQL Server 2012 AlwaysOn Groups and FCIs.

How to recover a suspect msdb database in SQL Server.

Data Science Laboratory System – Distributed File Databases.

Stairway to T-SQL: Beyond The Basics Level 5: Storing Data in Temporary Tables.

The Girl with the Backup Tattoo.

MySQL:

pt-online-schema change and row based replication.

MariaDB Client Library for C 2.0 and Maria JDBC Client 1.1.7 released.

Help MariaDB gather some statistics!

Uninitialized data in the TokuDB recovery log.

MySQL at LOADays Conference in Antwerp, Belgium (Apr 5-6, 2014).

Categories: DBA Blogs

A Simple Way to Monitor Java in Linux

Pythian Group - Thu, 2014-04-03 07:49

A quick and easy way to find out what it is inside a Java process that is using your CPU, using just Linux command-line tools and JDK-supplied command-line utilities.

Introduction

Here are a few things you need to know before starting. Following the links is not necessary; they are available for reference.

  • There are different vendors of Java Virtual Machine. This post is about Oracle’s JVM, which is called HotSpot, with Linux x86-64 as the OS platform. Most of the things about HotSpot apply to other vendors too, with slight changes; OSes other than Linux may add some more complications
  • It’s called a Virtual Machine because it virtualizes the runtime environment for a Java application. So to know where to look, you need to know a few things about how a specific VM is organized. For a detailed overview of HotSpot, please refer to this article
  • On Linux, a thread inside the HotSpot VM is mapped directly to an OS-level thread. This may not be true on all OSes, but for modern Linux kernels it is. So every thread on the OS level is a thread inside the Java application
  • There are generally two types of threads inside a HotSpot VM: native and application threads. Application threads are those that run some Java code, and usually this is what applications use to run their code. Native threads run something which is not written in Java, usually C/C++ code, and usually all these threads are special utility threads started by the VM itself.
Identifying Threads

Since a Java program may start many threads, each executing some program code, it is necessary to understand which threads are using CPUs. On Linux, top -H will show you CPU usage on a per-thread basis. Here is an example. First, a process which consumes CPU:

top - 16:32:29 up 10:29,  3 users,  load average: 1.08, 0.64, 0.56
Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
Cpu(s): 48.7%us, 51.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1500048k total,  1476708k used,    23340k free,    62012k buffers
Swap:  4128764k total,    75356k used,  4053408k free,   919836k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7599 oracle    20   0 1151m  28m 9100 S 85.5  1.9   0:12.67 java
 2575 oracle    -2   0  709m  10m 8336 S 10.6  0.7  47:34.05 oracle
 2151 root      20   0  207m  44m 6420 S  1.7  3.0   0:27.18 Xorg

If we check the details of CPU usage for PID=7599 with “top -H -p 7599”, then we will see something like this:

top - 16:40:39 up 10:37,  3 users,  load average: 1.47, 1.25, 0.90
Tasks:  10 total,   1 running,   9 sleeping,   0 stopped,   0 zombie
Cpu(s): 49.3%us, 50.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1500048k total,  1460468k used,    39580k free,    50140k buffers
Swap:  4128764k total,    76208k used,  4052556k free,   912644k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7601 oracle    20   0 1151m  28m 9100 R 85.9  1.9   7:19.98 java
 7602 oracle    20   0 1151m  28m 9100 S  1.0  1.9   0:02.95 java
 7599 oracle    20   0 1151m  28m 9100 S  0.0  1.9   0:00.01 java

So there is one execution thread inside the Java process which is constantly on top, utilizing around 85% of a single core.

Now the next thing to know is: what is this thread doing? To answer that question we need to know two things: the thread stacks from the Java process, and a way to map an OS-level thread to a Java thread. As I mentioned previously, there is a one-to-one mapping between OS and Java-level threads in HotSpot running on Linux.

To get a thread dump we need to use a JDK utility called jstack:

[oracle@oel6u4-2 test]$ jstack 7599
2014-02-28 16:57:23
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.12-b01 mixed mode):

"Attach Listener" daemon prio=10 tid=0x00007f05a0001000 nid=0x1e66 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Low Memory Detector" daemon prio=10 tid=0x00007f05c4088000 nid=0x1db8 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread1" daemon prio=10 tid=0x00007f05c4085800 nid=0x1db7 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread0" daemon prio=10 tid=0x00007f05c4083000 nid=0x1db6 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x00007f05c4081000 nid=0x1db5 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x00007f05c4064800 nid=0x1db4 in Object.wait() [0x00007f05c0631000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000eb8a02e0> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
	- locked <0x00000000eb8a02e0> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x00007f05c4062800 nid=0x1db3 in Object.wait() [0x00007f05c0732000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000eb8a0380> (a java.lang.ref.Reference$Lock)
	at java.lang.Object.wait(Object.java:485)
	at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
	- locked <0x00000000eb8a0380> (a java.lang.ref.Reference$Lock)

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.getInstance(SecureRandom.java:254)
	at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:176)
	at java.security.SecureRandom.<init>(SecureRandom.java:133)
	at RandomUser.main(RandomUser.java:9)

"VM Thread" prio=10 tid=0x00007f05c405c000 nid=0x1db2 runnable

"VM Periodic Task Thread" prio=10 tid=0x00007f05c408b000 nid=0x1db9 waiting on condition

JNI global references: 975

To map an OS-level thread to a Java thread in a thread dump, we need to convert the native thread ID from Linux to base 16 and search for “nid=$ID” in the stack trace. In our case the thread ID is 7601, which is 0x1db1, and the Java thread had the following stack trace at the time of running jstack:

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.getInstance(SecureRandom.java:254)
	at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:176)
	at java.security.SecureRandom.<init>(SecureRandom.java:133)
	at RandomUser.main(RandomUser.java:9)
A Way to Monitor

Here is a way to get a stack trace of a thread inside a Java process with command line tools (PID and TID are the process ID of the Java process and the thread ID of the interesting thread at the OS level):

[oracle@oel6u4-2 test]$ jstack $PID | awk '/ nid='"$(printf '%#x' $TID)"' /,/^$/'
"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:220)
	at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
	at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(NativePRNG.java:247)
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:261)
	- locked <0x00000000eb8a3370> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <0x00000000e43adc90> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

As you can see here, the thread is executing the main method of the RandomUser class – at least at the time of taking the thread dump. If you would like to see how this changes over time, then a simple watch command may help you get an idea of whether this thread stack changes frequently or not:

watch -n .5 "jstack $PID | awk '/ nid='"$(printf '%#x' $TID)"' /,/^$/'"

Every 0.5s: jstack 7599 | awk '/ nid='0x1db1' /,/^$/'                                                                                Fri Mar 14 16:29:37 2014

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
        at java.util.LinkedHashMap$LinkedHashIterator.<init>(LinkedHashMap.java:345)
        at java.util.LinkedHashMap$KeyIterator.<init>(LinkedHashMap.java:383)
        at java.util.LinkedHashMap$KeyIterator.<init>(LinkedHashMap.java:383)
        at java.util.LinkedHashMap.newKeyIterator(LinkedHashMap.java:396)
        at java.util.HashMap$KeySet.iterator(HashMap.java:874)
        at java.util.HashSet.iterator(HashSet.java:153)
        at java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1005)
        at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1004)
        at java.security.SecureRandom.getPrngAlgorithm(SecureRandom.java:523)
        at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:165)
        at java.security.SecureRandom.<init>(SecureRandom.java:133)
        at RandomUser.main(RandomUser.java:9)

So this way you can see what the application thread is doing right now. Since it could be doing quite a lot of different types of work, the next reasonable step is to add aggregation.

[oracle@oel6u4-2 test]$ ./prof.sh 7599 7601
Sampling PID=7599 every 0.5 seconds for 10 samples
      6  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:220)
	at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
	at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(NativePRNG.java:247)
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:261)
	- locked <address> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <address> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      2  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:268)
	- locked <address> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <address> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      1  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      1  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.next(SecureRandom.java:452)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

Here is what’s inside the prof.sh script:

#!/bin/bash

P_PID=$1
P_NID=$2

if [ "$P_SLEEP" == "" ]; then
  P_SLEEP=0.5
fi

if [ "$P_CNT" == "" ]; then
  P_CNT=10
fi

echo Sampling PID=$P_PID every $P_SLEEP seconds for $P_CNT samples

if [ "$P_NID" == "" ]; then
  CMD="awk '//'"
else
  CMD="awk '/ nid='"$(printf '%#x' $P_NID)"' /,/^$/'"
fi

for i in `seq $P_CNT`
do
  jstack $P_PID | eval $CMD
  sleep $P_SLEEP;
done |
  awk ' BEGIN { x = 0; s = "" }
    /nid=/ { x = 1; }
    // {if (x == 1) {s = s ", "gensub(/<\w*>/, "<address>", "g") } }
    /^$/ { if ( x == 1) { print s; x = 0; s = ""; } }' |
  sort | uniq -c | sort -n -r | head -10 |
  sed -e 's/$/\n/g' -e 's/\t/\n\t/g' -e 's/,//g'

The idea of the script is based on the method from poor man’s profiler adapted for HotSpot thread dumps. The script does the following things:

  • Takes $P_CNT thread dumps of the Java process whose PID is passed as $1 (10 by default)
  • If a native thread ID has been supplied as $2, searches for the stack of that thread in each thread dump
  • Concatenates each thread stack trace into a comma-separated string
  • Aggregates the strings and sorts them by the number of occurrences
  • Prettifies the output: removes tabs and commas, and adds new lines back to the thread stack
Conclusion

With a few little tools it is possible to understand quite a lot about almost any situation related to Java: you can find out the most frequent stack trace by sampling thread dumps.
With this knowledge it is then easy to understand why an issue is happening. In my test case, the application constantly generates random numbers without pausing, and one thread is occupying one CPU core.

Categories: DBA Blogs

SQL Developer’s Interface for GIT: Cloning a GitHub Repository

Galo Balda's Blog - Wed, 2014-04-02 22:05

SQL Developer 4 provides an interface that allows us to interact with Git repositories. In this post, I’m going to show how to clone a GitHub repository (GitHub is a web-based hosting service for software development projects that uses the Git revision control system).

First you need to sign up for a GitHub account. You can skip this step if you already have one.

Your account will give you access to public repositories that can be cloned, but I suggest you create your own repository so that you can play with SQL Developer and see what the different options do.

Once you have an account, click on the green button that says “New Repository”. It will take you to a screen like this:

github_create_repo

Give your repository a name, decide whether you want it to be public or private (you have to pay for private repositories), click on the check box, and then click on the green button. Now you should be taken to the main repository page.

github_repo

Pay attention to the red arrow on the previous image. It points to a text box that contains the HTTPS clone URL that we’re going to use in SQL Developer to connect to GitHub.

Let’s go to SQL Developer and click on Team –> Git –> Clone… to open the “Clone from Git Wizard”. Click on the next button and you should see the screen that lets you enter the repository details:

remote_repo

Enter the repository name, the HTTPS clone URL, your GitHub user name and your password. Click on next to connect to the repository and see the remote branches that are available.

remote_branch

The master branch gets created by default for every new repository. Take the defaults on this screen and click on next to get to the screen where you specify the destination for your local Git repository.

destination

Enter the path for the local repository and click on next. A summary screen is displayed showing the options you chose. Click on finish to complete the setup.

How do we know if it worked? Go to the path of your local repository and it should contain the same structure as in the online repository.

local_repo

On a next post I’ll show how to commit changes to the local repository and how to push them to GitHub.


Filed under: Source Control, SQL Developer Tagged: Source Control, SQL Developer
Categories: DBA Blogs

Two Adaptive Plans Join Methods Examples

Bobby Durrett's DBA Blog - Wed, 2014-04-02 14:49

Here is a zip of two examples I built as I’m learning about the new adaptive plans features of Oracle 12c: zip

The first example has the optimizer underestimate the number of rows and the adaptive plans feature switches the plan on the fly from nested loops to hash join.

In the second example the optimizer overestimates the number of rows and the adaptive plans feature switches the plan from merge join to nested loops.

I ran the same scripts on 12c and 11.2.0.3 for comparison.

Example 1 11g:

Plan hash value: 2697562628

------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |      18 |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |      18 |
|   2 |   NESTED LOOPS                |      |      1 |        |      8 |00:00:00.01 |      18 |
|   3 |    NESTED LOOPS               |      |      1 |      1 |      8 |00:00:00.01 |      17 |
|*  4 |     TABLE ACCESS FULL         | T1   |      1 |      1 |      8 |00:00:00.01 |      14 |
|*  5 |     INDEX RANGE SCAN          | T2I  |      8 |      1 |      8 |00:00:00.01 |       3 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2   |      8 |      1 |      8 |00:00:00.01 |       1 |
------------------------------------------------------------------------------------------------

Example 1 12c:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |       6 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |       6 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |      1 |      8 |00:00:00.01 |       6 |  2168K|  2168K|     1/0/0|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |      1 |      8 |00:00:00.01 |       3 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |      1 |     16 |00:00:00.01 |       3 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Example 2 11g

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |      16 |       |       |          |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |      16 |       |       |          |
|   2 |   MERGE JOIN                  |      |      1 |      4 |      1 |00:00:00.01 |      16 |       |       |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| T2   |      1 |     16 |      2 |00:00:00.01 |       2 |       |       |          |
|   4 |     INDEX FULL SCAN           | T2I  |      1 |     16 |      2 |00:00:00.01 |       1 |       |       |          |
|*  5 |    SORT JOIN                  |      |      2 |      4 |      1 |00:00:00.01 |      14 | 73728 | 73728 |          |
|*  6 |     TABLE ACCESS FULL         | T1   |      1 |      4 |      1 |00:00:00.01 |      14 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Example 2 12c

------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |       5 |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |       5 |
|   2 |   NESTED LOOPS                |      |      1 |        |      1 |00:00:00.01 |       5 |
|   3 |    NESTED LOOPS               |      |      1 |      4 |      1 |00:00:00.01 |       4 |
|*  4 |     TABLE ACCESS FULL         | T1   |      1 |      4 |      1 |00:00:00.01 |       3 |
|*  5 |     INDEX RANGE SCAN          | T2I  |      1 |        |      1 |00:00:00.01 |       1 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2   |      1 |      1 |      1 |00:00:00.01 |       1 |
------------------------------------------------------------------------------------------------

The output of the plans for the 12c examples ends with this line:

Note
-----
   - this is an adaptive plan

So, that tells me it is the adaptive plan feature that is changing the plan despite the wrong estimate of the number of rows.
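
If you want to see the full adaptive plan, including the row sources that were considered but not chosen, something like the following should work (a sketch; the &sql_id substitution variable is a placeholder of mine, and +ADAPTIVE is the documented 12c format modifier for DBMS_XPLAN):

SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '&sql_id',
         format => 'ALLSTATS LAST +ADAPTIVE'));

The inactive operations are marked with a leading minus sign in the Id column of the plan output.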

- Bobby

 

Categories: DBA Blogs

Dealing with technical questions about #Oracle

The Oracle Instructor - Wed, 2014-04-02 10:51

During the OCM Preparation Workshop today in Bucharest, we got into a discussion about parallel inserts vs. direct load with the append hint, where I said something like: “I think a parallel insert is always* also a direct load.” Most attendees were satisfied with that, but some still looked a bit sceptical. And I was also not 100% sure about it. While the others practiced, I did some brief research and came up with this answer from Tom Kyte, quoting him: “Parallel is always a direct path, if you go parallel, you will be appending.” Everybody in the room was immediately convinced – only I felt a bit uncomfortable with the outcome myself. After all, I don’t want to teach people to blindly follow my or any other guy’s claims without checking them if possible. And as with so many other claims about Oracle, it is possible to prove (or falsify) this one with little effort:

 

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.

SQL> create table t as select 'TEST' as col1 from dual connect by level<=1e6;

Table created.

SQL> select bytes/1024/1024 from user_segments;

BYTES/1024/1024
---------------
             13

SQL> delete from t;

1000000 rows deleted.

SQL> commit;

Commit complete.

SQL> insert into t select 'TEST' as col1 from dual connect by level<=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> select bytes/1024/1024 from user_segments;

BYTES/1024/1024
---------------
             13

The above shows that a conventional insert reuses the empty space below the high water mark of the table and does not allocate additional extents. Now I’m going to do a parallel insert:

SQL> delete from t;

1000000 rows deleted.

SQL> commit;

Commit complete.

SQL> alter table t parallel;

Table altered.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert into t select 'TEST' as col1 from dual connect by level<=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> select bytes/1024/1024 from user_segments;

BYTES/1024/1024
---------------
         24.375

QED! Obviously, the parallel insert appended rows above the high water mark – it was a direct load, in other words. That is in itself not so spectacular, but that little anecdote illustrates what is, in my opinion, the proper way to deal with technical questions about Oracle: always try to back up your assumptions and claims with proof instead of relying only on (your or others’) authority.

In other words: Don’t believe it, test it! :-)

*Addendum: Never say always, as I always say: if you include the NOAPPEND hint in the parallel insert, it will not be done as a direct load. Thank you guys for pointing that out!
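
For completeness, a quick sketch of how one could verify the addendum as well (my own, reusing the table t from above, with parallel DML still enabled): a NOAPPEND-hinted parallel insert should reuse the space below the high water mark instead of growing the segment.

SQL> delete from t;
SQL> commit;
SQL> insert /*+ NOAPPEND */ into t select 'TEST' as col1 from dual connect by level<=1e6;
SQL> commit;
SQL> select bytes/1024/1024 from user_segments;

If the reported size stays at roughly 24.375 MB instead of growing, the NOAPPEND insert reused space below the high water mark and was therefore a conventional load.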


Tagged: 3e
Categories: DBA Blogs

Indexing Foreign Keys (Helden)

Richard Foote - Wed, 2014-04-02 00:41
A recent question on an internal forum asked whether an index on a Foreign Key (FK) constraint designed to avoid locking issues associated with deletes on the parent tables needs to precisely match the columns in the FK. Could the columns in the index potentially be in a different order or be appended with additional columns? The answer is […]
Categories: DBA Blogs

Cupcake Wars at NoCOUG Spring Conference

Iggy Fernandez - Tue, 2014-04-01 14:49
FOR IMMEDIATE RELEASE Cupcake Wars at NoCOUG Spring Conference on May 15 at UCSC Extension Silicon Valley SILICON VALLEY (APRIL 1, 2014) – In a bold experiment aimed at increasing attendance at its awesome educational conferences, the Northern California Oracle Users Group (NoCOUG) is considering changing the format of its spring conference to that of […]
Categories: DBA Blogs

A few words about deployment of ADF production domain on Oracle Linux

Recently I configured an ADF 11.1.2.4 production domain on Oracle Linux 6. One of the stages of this process is the installation and configuration of Application Development Runtime (ADR), which is...

We share our skills to maximize your revenue!
Categories: DBA Blogs

MapR Sandbox for Hadoop Learning

Surachart Opun - Mon, 2014-03-31 10:49
I got an email about the MapR Sandbox, which is a fully functional Hadoop cluster running on a virtual machine (CentOS 6.5) that provides an intuitive web interface for both developers and administrators to get started with Hadoop. I believe it's a good way to learn about Hadoop and its ecosystem. Users can download it for VMware or VirtualBox. I downloaded the VirtualBox version and imported it, then changed the network setting to "Bridged Adapter". After starting it, I connected to http://ip-address:8443
Then I selected "Launch HUE" and "Launch MCS", but got some errors and fixed them.
Finally, I could use HUE and MCS.


Hue is an interface for interacting with web applications that access the MapR File System (MapR-FS). Use the applications in HUE to access MapR-FS, work with tables, run Hive queries, MapReduce jobs, and Oozie workflows.

The MapR Control System (MCS) is a graphical, programmatic control panel for cluster administration that provides complete cluster monitoring functionality and most of the functionality of the command line.

After reviewing the MapR Sandbox for VirtualBox, I found that the "maprdev" account is a development account that can sudo to root.
login as: maprdev
Server refused our key
Using keyboard-interactive authentication.
Password:
Welcome to your Mapr Demo virtual machine.
[maprdev@maprdemo ~]$ sudo -l
Matching Defaults entries for maprdev on this host:
    !visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG
    LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE",
    env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
User maprdev may run the following commands on this host:
    (ALL) NOPASSWD: ALL
[maprdev@maprdemo ~]$
[maprdev@maprdemo ~]$ sudo showmount -e localhost
Export list for localhost:
/mapr                *
/mapr/my.cluster.com *
[maprdev@maprdemo ~]$
Read More Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Partner Webcast – Oracle ADF Mobile - Implementing Data Caching and Syncing for Working Off Line

Mobile access to enterprise applications is fast becoming a standard part of corporate life. Such applications increase organizational efficiency because mobile devices are more readily at hand...

We share our skills to maximize your revenue!
Categories: DBA Blogs