
DBA Blogs

Webcast - Oracle Database In-Memory Option

Following the recent announcement by Larry Ellison on the Future of the Database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #384, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-15 07:32

This Log Buffer Edition starts with some great posts from the Oracle arena, then passes through the world of SQL Server, and stops in the MySQL field.

Oracle:

OAG/OES Integration for Web API Security: skin and guts by Andre Correa

Showing Foreign Key Names in your Data Modeler Diagrams

walkmod : A Tool to Apply Coding Conventions

Oracle VM Virtual Appliances for E-Business Suite 12.1.3 Now Available

RMAN Catalog requires Enterprise Edition (EE) since Oracle Database 12.1.0.2

SQL Server:

Restore Gene : Automating SQL Server Database Restores

With a hybrid cloud, can you get the freedom and flexibility of a public cloud with the security and bandwidth of a private cloud?

A clear understanding of SQL Data Types and domains is a fundamental requirement for the Database Developer, but it is not elementary.

Automating SQL Server Agent Notification

Adding Custom Reports to SQL Server Management Studio

MySQL:

The Road to MySQL 5.6 — A DBA Perspective

Virtual servers for MySQL are popular, but are they the answer? Should we be containing our instances instead?

Jeremy Cole recently blogged about the feature SET GLOBAL sql_log_bin.

Which SQL queries take all the time? Using MaxScale to answer that age old question.

SBR vs RBR when using On Duplicate Key Update for High Availability

Categories: DBA Blogs

OGG-00212, what a frustrating error.

DBASolved - Thu, 2014-08-14 14:50

Normally, I don’t mind errors when I’m working with Oracle GoldenGate (OGG); I actually like getting errors, since they keep me on my toes and give me something to solve.  Clients on the other hand do not like errors…LOL.  Solving errors in OGG is normally pretty straightforward with the help of the documentation, although today I almost have to disagree with the docs.

Today, as I’ve been working on implementing a solution with OGG 11.1.x on the source side and OGG 11.2.x on the target side, this error came up as I was trying to start the OGG 11.1.x Extracts:

OGG-00212  Invalid option for MAP: PMP_GROUP=@GETENV(“GGENVIRONMENT”.

OGG-00212  Invalid option for MAP:  TOKENS(.

I looked around in the OGG documentation and other resources (online and offline). Some errors are self-explanatory; not so in the case of OGG-00212. Looking up the error in the OGG 11.1.x docs was pointless; it didn’t exist there. When I finally found the error in the docs for OGG 11.2.x, they say:

OGG-00212: Invalid option for [0]:{1}
Cause: The parameter could not be parsed because the specified option is invalid.
Action: Fix the syntax

Now that the documentation has stated the obvious, how is the error actually corrected?  There is no easy way to correct this error because it is syntax related.  In my case, the error was being thrown because additional spaces were needed in the TABLE mapping.  Silly I know, but true.

Keep in mind, to fix an OGG-00212 error, especially with OGG 11.1.x or older, remember to add spaces where you may not think one is needed.

Example (causes the error):

TABLE <schema>.SETTINGS,TOKENS( #opshb_info() );

Example (fixed the error):

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

Notice the space between the comma (,) and TOKENS, and also between TOKENS and the opening parenthesis ((). Those simple changes fixed the OGG-00212 error I was getting.

Hope this helps!

Enjoy!

http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

SQL Tuning Health Check (SQLHC)

DBA Scripts and Articles - Thu, 2014-08-14 09:12

What is SQL Tuning Health Check? The SQL Tuning Health Check is provided by Oracle (Doc ID 1366133.1) in order to check the environment where the problematic SQL query runs. It checks the statistics, the metadata, initialization parameters and other elements that may influence the performance of the SQL being analyzed. The script generates an [...]
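For reference, invoking the script looks roughly like this (a sketch based on Doc ID 1366133.1; the first parameter declares which management packs you are licensed for, and the SQL_ID below is a made-up placeholder):

-- run from SQL*Plus as a DBA user on the system where the SQL executes
-- first parameter:  pack license (T = Tuning, D = Diagnostics only, N = none)
-- second parameter: the SQL_ID of the problematic statement (placeholder here)
SQL> START sqlhc.sql T 4dy1km4uu21x0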

The post SQL Tuning Health Check (SQLHC) appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Managing Files with SaltStack

Pythian Group - Thu, 2014-08-14 09:06

Before we begin, take a look at my previous two blog posts, SaltStack for Remote Parallel Execution of Commands and Using SaltStack for Configuration Management.

We manage files using configuration management for much the same reasons that we manage packages and services – we want/need consistency across all of our boxes, and to spend our time on tasks that add more business value than logging into all of our servers to change a line in a config file.

Using Salt, I will show you how to manage the message of the day (MOTD) on the server.

Templates

There are many examples of configuration files which differ only by a few lines between staging and production. A great example of this is a config file which has database host/user/pass information. The drive for consistency tells us that we would like the environment that our release is tested on to match (as closely as possible) the production environment where the code will run.

Using templates allows us to affect the few lines in a configuration file, which we would like to change, while leaving the rest alone. This also simplifies the management of our servers and the configuration repository, allowing us to maintain one config file for many servers.

Salt grains

The salt minions know a lot of information about the boxes they run on. Everything from the hostname, to the IP address, to the kernel, and more is available to be queried by the salt master. These pieces of information are called “grains” in the salt world and allow us to insert dynamic variables into our templates.

A great use case for grains would be the expansion of our Apache formula from my last post. On Red Hat-based machines the Apache package is called “httpd”, but on Debian-based machines the package is called “apache2”. Using the “os_family” grain we can dynamically redefine the package name for each minion while maintaining a single formula for all servers.
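A minimal sketch of that formula (assuming the standard os_family grain, whose values include “RedHat” and “Debian”):

apache:
  pkg.installed:
    {% if grains['os_family'] == 'RedHat' %}
    - name: httpd
    {% else %}
    - name: apache2
    {% endif %}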

Likewise, any configuration files which need to have the current box IP address can benefit from grains. As each minion installs that configuration file, it will see that the variable needs to be populated with a “grain” and will do so as requested. Rather than maintaining an Apache vhost file for each of your 10 web servers where the only difference is the IP address declaration, you can maintain one dynamic template which will ensure that everything else in that config file matches on each of the 10 nodes other than the one thing that needs to be dynamic (the IP address).
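For illustration, a vhost template fragment along these lines would do it (a sketch; fqdn_ip4 is a standard grain listing the addresses the minion's FQDN resolves to):

<VirtualHost {{ grains['fqdn_ip4'][0] }}:80>
    ServerName {{ grains['fqdn'] }}
    DocumentRoot /var/www/html
</VirtualHost>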

Putting it all together – the MOTD

In your /srv/salt dir we are going to create a subdir called motd. Inside of that directory you will find two files: an init.sls, which is the default top-level formula for the directory, and an motd.template file, which is our config file. The init.sls looks like this:

/etc/motd: 
  file.managed: 
    - user: root 
    - group: root 
    - mode: 0644 
    - source: salt://motd/motd.template 
    - template: jinja

For the file /etc/motd, we are telling Salt that we want to manage the file, that its owner and group should be root, and that we want the file to have 0644 permissions. We are letting Salt know that it will find the config file (source) under the motd subdir (the salt:// maps to /srv/salt) and that our template will be in the jinja format.

Our template will look like:

------------------------------------------------------------------------------
Welcome to {{ grains['fqdn'] }}

Server Stats at a Glance:
------------------------

OS: {{ grains['osfullname'] }}
Kernel: {{ grains['kernelrelease'] }}
Memory: {{ grains['mem_total'] }} MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT. Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------

As each minion installs this MOTD file, it will see the variables in use. Because they are grains, the minion will know that it has the information required to populate each variable and will do so for each server. This will give you a final MOTD that looks like this:

[root@ip-10-0-0-172 ~]# cat /etc/motd

------------------------------------------------------------------------------
Welcome to ip-10-0-0-172.ec2.internal

Server Stats at a Glance:
------------------------

OS: Amazon Linux AMI
Kernel: 3.10.42-52.145.amzn1.x86_64
Memory: 996 MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT.  Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------
[root@ip-10-0-0-172 ~]#

As you can see each variable was populated with the information specific to the node.

If we wanted to add, remove, or change anything in the MOTD, rather than having to walk every box in the entire infrastructure (which, depending on your size, could tie up a resource for days), we can edit the single template file on the master and allow the tool to propagate the change out to the boxes for us, reducing that task from a very boring day (or more) to a few minutes!
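Pushing such a change is then a one-liner from the master (a sketch; state.sls applies the named formula, and the '*' target can be narrowed to any subset of minions):

salt '*' state.sls motd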

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 2

Pythian Group - Thu, 2014-08-14 08:47

This blog post is the second in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. Today’s post will be about creating certificates.

CREATING CERTIFICATES

At this step, we will create and upload a certificate. This certificate will be used to authenticate the VPN clients; this is performed in a few steps:

  • Generate the certificate
  • Upload the root certificate to the Azure Management Portal
  • Generate a client certificate
  • Export and install the client certificate

Let’s start …

  1. We will need to use the MakeCert tool. MakeCert is part of “Microsoft Visual Studio Express” available here.
  2. After successfully downloading the tool, start the setup and follow the installation steps. Note that you can generate this certificate in any computer, not only in the computer where you are configuring the VPN.
    After the installation, you can find MakeCert at:

    • C:\Program Files (x86)\Windows Kits\8.1\bin\x64
    • C:\Program Files (x86)\Windows Kits\8.1\bin\x86
  3. Launch the command prompt as Administrator. Point the path to one of the folders referred to in the previous step and execute the following command (note: keep the command line open):
    makecert -sky exchange -r -n “CN=RootCertificateMurilo” -pe -a sha1 -len 2048 -ss My “RootCertificateMurilo.cer”
    (where “RootCertificateMurilo” is the certificate name).
    This command will create and install a root certificate in the Personal certificate store, and create the defined RootCertificateMurilo.cer file in the same directory where you are executing the command. Note: Store this certificate in a safe location.
  4. Now, go to the Windows Azure Management Portal https://manage.windowsazure.com/ in order to upload the certificate.
  5. In the networks section, select the previously created network and go to the certificate page.
  6. Click Upload a root certificate, select your certificate, and click on the check mark.
    • Depending on the time zone of the server where you created the certificate, you might receive an error message, “The certificate is not valid yet, effective date is [date and time].” To work around this, delete the created certificate, and create another one adding the following parameter (change the date): -b “07/30/2014”. It will be valid from 00:00:00 hours of the day you set.
  7. Now we need to create a Client Certificate. We will use the Root Certificate to do this.
    In the same command line window opened before, execute the following command:
    makecert.exe -n “CN=ClientCertificateMurilo” -pe -sky exchange -m 96 -ss My -in “RootCertificateMurilo” -is my -a sha1
    This certificate will be stored in your personal certificate store.
  8. Now we need to export this certificate, as it should be installed on each computer that needs to be connected to the virtual network. To achieve this, enter the command “mmc”, still in the opened command line.
    • Go to File->Add/Remove Snap-in.
    • Select “Certificates” and click on “Add >”.
    • Select My user account and click Finish.
    • Click OK in the remaining window.
    • Now you will be able to see your certificates under the “Personal\Certificates” folder.
  9. To export the certificate, right-click the Client certificate and click on “All Tasks->Export…”.
  10. A wizard will be presented. Choose Yes, export the private key and click Next.
  11. Leave this as default, and click Next.
  12. Choose a strong password (try to remember this) and click Next.
  13. Now you need to set the path to store your .pfx file.
  14. Click Next, then Finish.
  15. To finalize the “Certificates part”, we will need to install the certificate on all the servers where we want to set up the VPN. To accomplish this, you just need to:
    • Copy the exported .pfx file (step 13) to all the servers.
    • Double-click the pfx on all the servers.
    • Enter the password.
    • Proceed with the installation, maintaining the default location.

Stay tuned for my next blog post on how to configure the VPN client.

Categories: DBA Blogs

Michael Abbey: Still Presenting After All These Years

Pythian Group - Thu, 2014-08-14 07:50

A cool, wintery day in late 1989. This kid’s working for the Office of the Auditor General of Canada. I’d been working with Oracle and was in my fourth year. I had cut my teeth on 6.0.27.9.4 after first seeing V3 some four years prior. I stumbled across a well-placed ad for a show happening in Anaheim USA in September 1990. I’ve got the bug. I apply to go to the show and am told by my employer, “Just a sec, David and I were thinking of going to that show – let us get back to you.” Some three weeks later I am told it’s a go.

I am off to sunny California for six wonderful days of International Oracle User Week (IOUW); this was a joint effort put on by Oracle and the International Oracle User Group (IOUG). I had spent the better part of the summer of 1969 in southern Cali so this was shaping up to be a resurrection. I toddle off to Cali and have a wonderful time. It’s magic – such a learning opportunity. I even came away knowing how to place a database in archivelog mode. I was so pleased with myself and got to meet one of my heroes. I had only been working with the software for 4 years, but already knew of Ken Jacobs (a.k.a. Dr. DBA).

I had the bug to present almost from day one. I saw an ad in one of the bazillion pieces of paper I brought home from that IOUW about a show in DC – Sheraton Woodley Park to be exact. I don’t even think that it exists anymore. I figured I’d attend ECO then present an abstract for IOUW 1991 in Miami. Some of the history is described in a blog post I made in 2013 located here. Enough said about that. It was quite a whirlwind of activity on the presentation circuit in those days. Starting in 1992 I became very active in the IOUG holding a handful of board positions up to the 2006 or maybe 2007 time frame. I attended a gazillion conferences in those days and the pinnacle was a show in Philly in 1995. I had been on the board of the IOUW for a few years and the paid attendance count at that show was staggering. Chubby Checker played at the big bash and arrangements were made for me to sit in on the bass guitar for the Twist. That got cancelled at the last minute but it was close. My paper was in one of the biggest rooms in the convention centre. There were over 1,500 people in attendance and it was intoxicating. I was pleased when I got my evals to find out the attendees were as pleased as I was. It was all (or close to all) about the CORE database technology in those days. In 1995, Oracle7 was the hot item having been on the street for over 3 years.

As guests of Oracle, a handful of us had the pleasure of attending the launch of Oracle7 at the Hudson Theatre in the Hotel Macklowe on 44th St. in beloved NYC. We were thrilled to be very close in those days to Ray Lane, then President of Oracle Corp., and we introduced Ray to a lot of his direct reports at that “party.” A mere four years later we were back for the release launch of Oracle8 at Radio City Music Hall. Again, a pleasant time was had by all. There turned out to be surprisingly little coverage/mention of Oracle8 at that event; it concentrated more on the Oracle Network Computer (NC), designed to bring computing power to every desktop at a low cost. Once during that Oracle8 launch, the operator of the boom mic in the pit swept the stage to get from one side to the other and almost hit LJE in the side of the head. I think I was the only one who heard what Larry said – “Watch out Bill.” Does anyone get the reference but me?

My torrid Oracle technology career was just that. Between 1991 and the date of this post I have probably given over 100 papers at shows from Ottawa to Hyderabad, Brighton to San Diego, and Vienna to Addis Ababa. There is still a voracious hunger out there for the heart of my expertise – anything that starts with an “O” and ends in an “E” and has the word database tagged on the end. After becoming very close to some of the kernel developers at Oracle, we discussed how they were still in the middle of their workday when the Loma Prieta quake hit in October 1989. Me and a few close friends hung out with the guys whose names litter the bottom of the “this change was done when” section of the ?/rdbms/admin directory on Oracle database software installs. We were in David Anderson’s office schmoozing and asked what he happened to be up to that day. He was ftp’ing source code from a VAX to a Sun box in preparation for the base-platform change that happened in the early 1990s. It was a magic carpet ride.

In some ways it still is. To finish off this year I am appearing at:

  • OOW (Oracle Open World) in San Francisco – September 29-October 2
  • ECO (East Coast Oracle) event in Raleigh/Durham – November 3-5
  • MOUS (Michigan Oracle User Summit) in Livonia – November 13
  • UKOUG in Liverpool – December 8-10

My personal top 10 moments (actually top 11 – the exchange rate) in my still developing tech career you say … drum roll:

Rank  Event                                                             Date
  11  First ever tech show                                              1990
  10  Longest lasting tech contact – Yossi Amor                         25 years
   9  Number of IOUG yearly events attended                             23
   8  Books published in Oracle Press series (including translations)   42
   7  Most attendees at a presentation – 1500 (Philadelphia)            1995
   6  Fewest attendees at a presentation – 1                            2013
   5  Most exciting event attended – CODA in Burlingame CA              1993
   4  First PL/SQL code block to compile – Oracle7                      1993
   3  Favourite version of SQL*Forms – 2.3                              1993
   2  First got hands wet with this famous technology – 5.1.22          1986
   1  Biggest thrill – the rush of speaking to live audiences           1991-??
Categories: DBA Blogs

keeping my fingers crossed just submitted abstract for RMOUG 2015 Training Days ...

Grumpy old DBA - Tue, 2014-08-12 12:06
The Rocky Mountain Oracle Users Group has been big and organized for a very long time.  I have never been out there (my bad) but am hoping to change that situation in 2015.

Abstracts are being accepted for Training Days 2015 ... my first one is in there now. I'm thinking about a second submission, but my Hotsos 2014 presentation needs some more work/fixing. OK, let's be honest: I need to shrink it considerably and tighten the focus of that one.

Information on RMOUG 2015 can be found here: RMOUG Training Days 2015

Keeping my fingers crossed!
Categories: DBA Blogs

Watch Oracle DB Session Activity With The Real-Time Session Sampler

Watching session activity is a great way to diagnose and learn about Oracle Database tuning. There are many approaches to this. I wanted something simple, useful, and modifiable, with no Oracle licensing issues, that I could give away. The result is what I call the Oracle Real-Time Session Sampler (OSM: rss.sql).

The tool is simple to use.  Based on a number of filtering command line inputs, it repeatedly samples active Oracle sessions and writes the output to a file in /tmp. You can do a "tail -f" on the file to watch session activity in real time!

The rss.sql tool is included in the OraPub System Monitor (OSM) toolkit (v13j), which can be downloaded HERE.

If you simply want to watch a video demo, watch below or click HERE.


The Back-Story
Over the past two months I have been creating my next OraPub Online Institute seminar about how to tune Oracle with an AWR/Statspack report using a quantitative time based approach. Yeah... I know the title is long. Technically I could have used Oracle's Active Session History view (v$active_session_history) but I didn't want anyone to worry about ASH licensing issues. And ASH is not available with Oracle Standard Edition.

The Real-Time Session Sampler is used in a few places in the online seminar where I teach about Oracle session CPU consumption and wait time. I needed something visual that would obviously convey the point I wanted to make. The Real-Time Session Sampler worked perfectly for this.

What It Does
Based on a number of command line inputs, rss.sql repeatedly samples active Oracle sessions and writes the output to file in /tmp. The script contains no dml statements. You can do a "tail -f" on the output file to see session activity in real time. You can look at all sessions, a single session, sessions that are consuming CPU or waiting or both, etc. You can even change the sample rate. For example, once every 5.0 seconds or once every 0.25 seconds! It's very flexible and it's fascinating to watch.
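As a sketch of the two-window workflow (the parameter meanings are covered below; this filter simply means all sessions, any state, sampled once per second):

Window 1 (SQL*Plus):

SQL> @rss.sql 0 99999 0 99999 % % 1.0

Window 2 (an OS shell):

$ tail -f /tmp/rss_sql.txt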

Here is an example of some real output.



How To Use RSS.SQL
The tool is run within SQL*Plus and the output is written to the file /tmp/rss_sql.txt. You need two windows: one to sample the sessions and the other to look at the output file. Here are the script parameter options:

rss.sql  low_sid  high_sid  low_serial  high_serial  session_state  wait_event_partial|%  sample_delay

low_sid is the low Oracle session id.
high_sid is the high Oracle session id.
low_serial is the low Oracle session's serial number.
high_serial is the high Oracle session's serial number.
session_state is the current state of the session at the moment of sampling: "cpu", "wait", or "%" for both.
wait_event_partial applies when the session is waiting: only sessions with this wait event are selected. Always set this to "%" unless you want to tighten the filtering.
sample_delay is the delay between samples, in seconds.

Examples You May Want To Try
By looking at the below examples, you'll quickly grasp that this tool can be used in a variety of situations.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql  10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see what sessions are waiting for a row-level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Be Responsible... It's Not OraPub's Fault!
Have fun and explore...but watch out! Any time you sample repeatedly, you run the risk of impacting the system under observation. You can reduce this risk by sampling less often (perhaps once every 5 seconds), by limiting the sessions you want to sample (not 0 to 99999) and by only selecting sessions in either a "cpu" or "wait" state.

A smart, lower-impact strategy would be to initially keep broader selection criteria but sample less often; perhaps once every 15 seconds. Once you know what you want to look for, tighten the selection criteria and sample more frequently. If you have identified a specific session of interest, then you can stream the activity (if appropriate) every half second or perhaps every quarter second.

All the best in your Oracle Database tuning work,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm
You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.






Categories: DBA Blogs

Offline Visualization of Azkaban Workflows

Pythian Group - Mon, 2014-08-11 07:51

As mentioned in my past adventures, I’m often working with the workflow management tool ominously called Azkaban. Its foreboding name is not really deserved; it’s relatively straightforward to use, and offers a fairly decent workflow visualization. For that last part, though, there is a catch: to be able to visualize the workflow, you have to (quite obviously) upload the project bundle to the server. Mind you, it’s not that much of a pain, and could easily be managed by, say, a Gulp-fueled watch job. But still, it would be nice to tighten the feedback loop there, and be able to look at the graphs without having to go through the server at all.

Happily enough, all the information we need is available in the Azkaban job files themselves, and in a format that isn’t too hard to deal with. Typically, a job file will be called ‘foo.job’ and look like

type=command
command=echo "some command goes here"
dependencies=bar,baz

So what we need to do to figure out a whole workflow is to begin at its final job, and recursively walk down all its dependencies.

use 5.14.0;  # the /r substitution flag used below requires perl 5.14

use Path::Tiny;

sub create_workflow {
  my $job = path(shift);
  my $azkaban_dir = $job->parent;

  my %dependencies;

  my @files = ($job);

  while( my $file = shift @files ) {
    my $job = $file->basename =~ s/\.job//r;

    next if $dependencies{$job}; # already processed

    my @deps = map  { split /\s*,\s*/ }
               grep { s/^dependencies=\s*// }
                    $file->lines( { chomp => 1 } );

    $dependencies{$job} = \@deps;

    push @files, map { $azkaban_dir->child( $_.'.job' ) } @deps;
  }

  return %dependencies;
}

Once we have that dependency graph, it’s just a question of drawing the little boxes and the little lines. Which, funnily enough, is a much harder job than one would expect. And better left to the pros. In this case, I decided to go with Graph::Easy, which outputs text and SVG.

use Graph::Easy;

my $graph = Graph::Easy->new;

while( my( $job, $deps ) = each %dependencies ) {
    $graph->add_edge( $_ => $job ) for @$deps;
}

print $graph->as_ascii;

And there we go. We put those two parts together in a small script, and we have a handy cli workflow visualizer.

$ azkaban_flow.pl target/azkaban/foo.job

  +------------------------+
  |                        v
+------+     +-----+     +-----+     +-----+
| zero | --> | baz | --> | bar | --> | foo |
+------+     +-----+     +-----+     +-----+
               |                       ^
               +-----------------------+

Or, for the SVG-inclined,

$ azkaban_flow.pl -f=svg target/azkaban/foo.job

which gives us

Screen Shot 2014-08-10 at 3.09.42 PM
Categories: DBA Blogs

Partner Webcast - The Revolution of Oracle Java 8

Java 8, released in March 2014, is a revolutionary release of the world’s #1 development platform. It is the single largest upgrade ever to the programming model, with coordinated core code evolution...

We share our skills to maximize your revenue!
Categories: DBA Blogs

12c: Fun with WITH!

Pythian Group - Fri, 2014-08-08 11:30

Last night I couldn’t sleep, and what else are you going to do? I was thinking about Oracle stuff.

In Oracle version 12, Oracle has enhanced the WITH clause – traditionally used for sub-query factoring – to allow the declaration of functions and procedures. This can be (ab)used to create a very interesting scenario, that is not very common in Oracle: Reading data within the same SELECT statement, but from two different points in time. And the points in time are in the future, and not in the past.

Let’s say I want to take a snapshot of the current SCN, and then another one 5 or 10 seconds after that. Traditionally we’d have to store that somewhere. What if I could take two snapshots – at different SCNs – using a single SELECT statement? Without creating any objects?

col value for a50
set lines 200 pages 99

with  
procedure t (secs in number, scn out varchar2)
  is
    pragma autonomous_transaction;
  begin
    dbms_lock.sleep(secs);
    select 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                 || dbms_flashback.get_system_change_number 
      into scn 
      from dual;
  end;
function wait_for_it (secs in number) 
 return varchar2 is
    l_ret varchar2(32767);
  begin
    t(secs, l_ret);
    return l_ret;
  end;
select 1 as time, 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                || dbms_flashback.get_system_change_number as value 
  from dual
union all
select 5, wait_for_it(5) from dual
union all
select 10, wait_for_it(5) from dual
/

And the result is:

      TIME VALUE
---------- --------------------------------------------------
         1 at 09:55:49 SCN: 3366336
         5 at 09:55:54 SCN: 3366338
        10 at 09:55:59 SCN: 3366339

We can clearly see that the SCN is different, and the time shown matches the intervals we’ve chosen, 5 seconds apart. I think there could be some very interesting uses for this. What ideas can you folks come up with?

Categories: DBA Blogs

Log Buffer #383, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-08 07:34

This Log Buffer Edition picks a few of the informative blog posts from the Oracle, SQL Server, and MySQL database fields.


Oracle:

g1gc logs – Ergonomics -how to print and how to understand

In Solaris 11.2, svcs gained a new option, “-L”.  The -L option allows a user to easily look at the most recent log events for a service.

ADF Thematic Map component from DVT library was updated in ADF 12c with marker zoom option and area layer styling

When cloning pluggable databases Oracle gives you also SNAPSHOT COPY clause to utilize storage system snapshot capabilities to save on storage space.

It is normal for bloggers including myself to post about the great things they have done.

SQL Server:

In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.

A brief overview of Columnstore index and its usage with an example.

The Road To Hell – new article from the DBA Team

Encryption brings data into a state which cannot be interpreted by anyone who does not have access to the decryption key, password, or certificates.

How to test what a SQL Server application would do in the past or in the future with date and time differences.

MySQL:

MySQL for Visual Studio 1.2.3 GA has been released

An approach to MySQL dynamic cross-reference query.

The MySQL replication and load balancing plugin for PHP, PECL/mysqlnd_ms, aims to make using a cluster of MySQL servers instead of a single server as transparent as possible.

Picking the Right Clustering for MySQL: Cloud-only Services or Flexible Tungsten Clusters? New webinar-on-demand.

Collation options for new MySQL schemas and tables created in MySQL for Excel

Categories: DBA Blogs

Oracle Database RAC Diagnostics and Tuning

Oracle Real Application Clusters (Oracle RAC) is a clustered version of Oracle Database based on a comprehensive high-availability stack that can be used as the foundation of a database cloud system...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Space used by objects

DBA Scripts and Articles - Thu, 2014-08-07 12:35

Calculate the space used by a single object: this script will help you calculate the size of a single object. Calculate the space used by a whole schema: if you want the space used by a whole schema, here is a variation of the first query.
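As a sketch of the usual approach (not necessarily the author's original scripts), both numbers can be pulled from dba_segments; &owner and &object_name are SQL*Plus substitution variables:

-- space used by a single object, in MB (sketch)
select owner, segment_name, segment_type,
       round(sum(bytes)/1024/1024, 2) as size_mb
  from dba_segments
 where owner = upper('&owner')
   and segment_name = upper('&object_name')
 group by owner, segment_name, segment_type;

-- variation: space used by a whole schema, in MB (sketch)
select owner, round(sum(bytes)/1024/1024, 2) as size_mb
  from dba_segments
 where owner = upper('&owner')
 group by owner;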

The post Space used by objects appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

ECEMEA Webcast - Getting Started with your Big Data project

Big data is a new kind of power that transforms everything it touches in business, government, and private life. As a result, bringing big data to your company has the potential to provide big...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Database 12.1.0.2 Released (22 July 2014)

Oracle in Action - Thu, 2014-08-07 00:54


Oracle Database 12.1.0.2 has been released on 22 July 2014.

Following are some of the new features and options:

• In-memory column store
• Attribute clustering on disk
• Oracle Flashback archive capabilities for pluggable databases
• Rapid Oracle home provisioning
• Centralized security key vault capabilities
• Storage and query capabilities for nonrelational data
• Advanced Index Compression
• Oracle Big Data SQL
• Oracle JSON Document Store
• Oracle REST Data Services
• Improvements to Oracle Multitenant
• Zone Maps
• Approximate Count Distinct
• Attribute Clustering
• Full Database Caching
…..

You can download it here.

Enjoy!!!




The post Oracle Database 12.1.0.2 Released (22 July 2014) appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Why Write-Through is still the default Flash Cache Mode on #Exadata X-4

The Oracle Instructor - Wed, 2014-08-06 12:41

The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because that suits most customers best – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so we simply save Flash capacity that way. So when you see this

CellCLI> list cell attributes flashcachemode
         WriteThrough

it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer will first go to the spinning drives, where it is mirrored according to the redundancy of the diskgroup containing the file being written to. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring is already sufficiently done on the spinning drives, as the picture shows:

Flash Cache Mode WRITE-THROUGH

That changes with the Flash Cache Mode being Write-Back: writes now go primarily to the flash cards, and popular objects may never get aged out onto the spinning drives. At the least, that age-out may happen significantly later, so the writes on flash must be mirrored now. The redundancy of the diskgroup where the object in question was placed again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache by at least half.

Flash Cache Mode WRITE-BACK

Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom of that would be high numbers of the Free Buffer Waits wait event. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason behind turning on the Write-Back Flash Cache Mode only on demand. I explained this in very similar terms during my present Oracle University Exadata class in Frankfurt, by the way :-)
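If a system does need Write-Back, the mode is switched per cell with CellCLI, roughly along these lines (a sketch of the commonly documented non-rolling method; verify the exact procedure for your cell software version before running anything, since the Flash Cache is dropped and recreated):

CellCLI> DROP FLASHCACHE
CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
CellCLI> ALTER CELL flashCacheMode=WriteBack
CellCLI> ALTER CELL STARTUP SERVICES CELLSRV
CellCLI> CREATE FLASHCACHE ALL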


Tagged: exadata
Categories: DBA Blogs