
DBA Blogs

C program to dump shared memory segments to disk on Linux.

ContractOracle - Tue, 2014-07-08 01:26
The following program was written to help investigate Oracle database shared memory on Linux.  It dumps the contents of existing shared memory segments to files on disk.  Note that it won't work against Oracle 11g and 12c databases, as they use mmap instead of shmat for managing shared memory.  A sample program for reading from 11g and 12c is here (mmap example).

Compile it using "gcc -o shared shared.c".  It is free for anyone to copy or modify as they wish, but I do not guarantee the functionality.

#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>

int main (int argc, char *argv[]) {
    int maxkey, id, shmid = 0;
    struct shm_info shm_info;
    struct shmid_ds shmds;
    void * shared_data;
    FILE * outfile;

    /* SHM_INFO returns the index of the highest used entry in the kernel's shared memory table */
    maxkey = shmctl(0, SHM_INFO, (void *) &shm_info);
    for (id = 0; id <= maxkey; id++) {
        /* SHM_STAT turns a table index into a shared memory ID and fills in shmds */
        shmid = shmctl(id, SHM_STAT, &shmds);
        if (shmid < 0)
            continue;

        char shmidchar[16];
        snprintf(shmidchar, sizeof(shmidchar), "%d", shmid);

        if (shmds.shm_segsz > 0) {
            printf("Shared memory segment %s found.\n", shmidchar);

            /* Attach read-only; shmat returns (void *) -1 on failure, not NULL */
            shared_data = shmat(shmid, NULL, SHM_RDONLY);
            if (shared_data != (void *) -1) {
                outfile = fopen(shmidchar, "wb");
                if (outfile == NULL) {
                    printf("Could not open file %s for writing.\n", shmidchar);
                }
                else {
                    fwrite(shared_data, shmds.shm_segsz, 1, outfile);
                    fclose(outfile);
                    printf("Dumped to file %s\n\n", shmidchar);
                }
                shmdt(shared_data);
            }
        }
    }
    return 0;
}
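For comparison, here is a minimal sketch of the mmap approach mentioned above for 11g and 12c. It assumes the SGA is exposed as POSIX shared memory files under /dev/shm (the case when Automatic Memory Management is in use); list /dev/shm to find the actual ora_<SID>_* segment names. As with the program above, no guarantees.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main (int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "Usage: %s /dev/shm/<segment> <outfile>\n", argv[0]);
        return 1;
    }

    /* Open the segment file read-only and find its size */
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the segment and copy its contents to the output file */
    void *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    FILE *out = fopen(argv[2], "wb");
    if (out == NULL) { perror("fopen"); return 1; }
    fwrite(data, st.st_size, 1, out);
    fclose(out);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}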


Categories: DBA Blogs

It was 12 years ago today…

Richard Foote - Tue, 2014-07-08 01:07
It was exactly 12 years ago today that I first presented my Index Internals – Rebuilding The Truth presentation at a local ACT Oracle User Group event. And so my association with Oracle indexes started. It would be an interesting statistic to know how many people have subsequently read the presentation :) It would no doubt result in […]
Categories: DBA Blogs

Pro-active AWR Data Mining to Find Change in SQL Execution Plan

Pythian Group - Mon, 2014-07-07 11:11

Many times we have been called in for the poor performance of a database, and it has been narrowed down to a SQL statement.  Subsequent analysis has shown that the execution plan had changed and a wrong execution plan was being used.

The resolution, normally, is to fix the execution plan in 11g by running:

variable x number
begin
:x :=
    dbms_spm.load_plans_from_cursor_cache(
    sql_id=>'&sql_id',
    plan_hash_value=>&plan_hash,
    fixed=>'YES');
end;
/

or, for 10g, a SQL_PROFILE is created as described in Carlos Sierra’s blog.
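Once the plan is loaded, it is worth confirming that the baseline is in place and marked as fixed; a quick check against DBA_SQL_PLAN_BASELINES (adjust the filter to your needs):

select sql_handle, plan_name, enabled, accepted, fixed, created
from dba_sql_plan_baselines
order by created desc;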

A pro-active approach can be to mine AWR data for any SQL execution plan changes.

The following query against dba_hist_sqlstat retrieves the list of SQL IDs whose plans have changed. It orders the SQL IDs so that those for which the maximum gain can be achieved by fixing the plan are listed first.

 
spool sql_with_more_than_1plan.txt
set lines 220 pages 9999 trimspool on
set numformat 999,999,999
column plan_hash_value format 99999999999999
column min_snap format 999999
column max_snap format 999999
column min_avg_ela format 999,999,999,999,999
column avg_ela format 999,999,999,999,999
column ela_gain format 999,999,999,999,999
select sql_id,
       min(min_snap_id) min_snap,
       max(max_snap_id) max_snap,
       max(decode(rw_num,1,plan_hash_value)) plan_hash_value,
       max(decode(rw_num,1,avg_ela)) min_avg_ela,
       avg(avg_ela) avg_ela,
       avg(avg_ela) - max(decode(rw_num,1,avg_ela)) ela_gain,
       -- max(decode(rw_num,1,avg_buffer_gets)) min_avg_buf_gets,
       -- avg(avg_buffer_gets) avg_buf_gets,
       max(decode(rw_num,1,sum_exec))-1 min_exec,
       avg(sum_exec)-1 avg_exec
from (
  select sql_id, plan_hash_value, avg_buffer_gets, avg_ela, sum_exec,
         row_number() over (partition by sql_id order by avg_ela) rw_num , min_snap_id, max_snap_id
  from
  (
    select sql_id, plan_hash_value , sum(BUFFER_GETS_DELTA)/(sum(executions_delta)+1) avg_buffer_gets,
    sum(elapsed_time_delta)/(sum(executions_delta)+1) avg_ela, sum(executions_delta)+1 sum_exec,
    min(snap_id) min_snap_id, max(snap_id) max_snap_id
    from dba_hist_sqlstat a
    where exists  (
       select sql_id from dba_hist_sqlstat b where a.sql_id = b.sql_id
         and  a.plan_hash_value != b.plan_hash_value
         and  b.plan_hash_value > 0)
    and plan_hash_value > 0
    group by sql_id, plan_hash_value
    order by sql_id, avg_ela
  )
  order by sql_id, avg_ela
  )
group by sql_id
having max(decode(rw_num,1,sum_exec)) > 1
order by 7 desc
/
spool off
clear columns
set numformat 9999999999

The sample output of this query looks like this:

SQL_ID        MIN_SNAP MAX_SNAP PLAN_HASH_VALUE          MIN_AVG_ELA              AVG_ELA             ELA_GAIN     MIN_EXEC     AVG_EXEC
------------- -------- -------- --------------- -------------------- -------------------- -------------------- ------------ ------------
ba42qdzhu5jb0    65017    67129      2819751536       11,055,899,019       90,136,403,552       79,080,504,532           12            4
2zm7y3tvqygx5    65024    67132       362220407       14,438,575,143       34,350,482,006       19,911,906,864            1            3
74j7px7k16p6q    65029    67134      1695658241       24,049,644,247       30,035,372,306        5,985,728,059           14            7
dz243qq1wft49    65030    67134      3498253836        1,703,657,774        7,249,309,870        5,545,652,097            1            2

MIN_SNAP and MAX_SNAP are the minimum and maximum snapshot IDs in which the SQL statement occurs.

PLAN_HASH_VALUE is the hash value of the plan with the best elapsed time.

ELA_GAIN is the estimated improvement in elapsed time from using this plan, compared to the average execution time.

Using the output of the above query, SQL execution plans can be fixed after proper testing.  This method can help DBAs pinpoint and resolve problems with SQL execution plans faster.
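As part of that testing, the competing plans can be pulled straight from AWR for review with DBMS_XPLAN.DISPLAY_AWR, substituting the SQL_ID and PLAN_HASH_VALUE reported above:

select * from table(dbms_xplan.display_awr('&sql_id', &plan_hash_value, null, 'TYPICAL'));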

Categories: DBA Blogs

Salt Stack for Remote Parallel Execution of Commands

Pythian Group - Mon, 2014-07-07 11:08

There are many scenarios when a SysAdmin has to do a “box walk” of the entire infrastructure to execute a command across many servers. This is universally accepted as one of the less glamorous parts of our job. The larger the infrastructure, the longer these box walks take, and the greater chance that human error will occur.

Even giving this task to a junior resource, as is often the case, is not sustainable as the infrastructure grows, and does not represent the best value to the business in terms of resource utilization. Additionally, too much of this type of “grind” work can demoralize even the most enthusiastic team member.

Thankfully, the days of having to do these box walks are over. Thanks to configuration management and infrastructure automation tools, the task has been automated and no longer requires the investment in time by a human SysAdmin that it once did. These tools allow you, at a very high level, to offload this repetitive work, with the computer doing the heavy lifting for you.

 

Introducing Salt Stack

Salt Stack is a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria. Salt Stack is also a configuration management system in its own right, but this post will be focusing on Salt from a “Command and Control” point of view.

Salt has two main components: the “salt master” (server) and the “salt minions” (clients). Once the minions are accepted by the master, further execution of commands can come directly from the central salt master server.

Once you have installed your packages, the minion needs to be configured to know where its master is. This can be accomplished through a DNS or hosts-file entry, or by setting the variable in the /etc/salt/minion config:


master: XXX.XXX.XXX.XXX

Where “XXX.XXX.XXX.XXX” is the IP address of your master server. Once that is done and the salt-minion service has been started, the minion will generate a key pair and ship its public key back to the master to ensure all communication is secure.

The master must accept the key from the minion before any control can begin.


# Listing the Keys

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Rejected Keys:

# Adding The Key

[root@ip-10-154-193-216 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Proceed? [n/Y] y
Key for minion ip-10-136-76-163.ec2.internal accepted.

# Nailed It! Now the Master can control the Minion!

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
ip-10-136-76-163.ec2.internal
Unaccepted Keys:
Rejected Keys:

Note: Not Shown – I added a 2nd Minion

Now that your master has minions, the fun begins. From your master you can now query information from your minions, such as disk space:


[root@ip-10-154-193-216 ~]# salt '*' disk.percent

ip-10-136-76-163.ec2.internal:
----------
/:
15%
/dev/shm:
0%
ip-10-147-240-208.ec2.internal:
----------
/:
14%
/dev/shm:
0%

You can also execute remote commands, such as checking service status and restarting services:


[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond status"

ip-10-136-76-163.ec2.internal:
crond (pid 1440) is running...
ip-10-147-240-208.ec2.internal:
crond (pid 1198) is running...

[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond restart"
ip-10-136-76-163.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]
ip-10-147-240-208.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]

These are only the most basic use cases for what Salt Stack can do, but even from these examples it is clear that Salt can become a powerful tool, one which reduces the potential for human error and increases the efficiency of your SysAdmin team.

By implementing configuration management and infrastructure automation tools such as Salt Stack, you free up your team members' time for higher-quality work which delivers more business value.

Salt Stack (depending on your setup) can be deployed in minutes. On RHEL/CentOS/Amazon Linux using the EPEL repo, I was able to be up and running with Salt in about 5 minutes on the 3 nodes I used for the examples in this post. Salt can be deployed using another configuration management tool, it can be baked into your provisioning environment or base images, and if all else fails you can (ironically) do a box walk to install the package on your existing servers.
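For reference, the EPEL-based install amounts to little more than the following (a sketch for RHEL-era init scripts; package names per the EPEL repo):

# On the master
yum install -y salt-master
service salt-master start

# On each minion
yum install -y salt-minion
echo "master: XXX.XXX.XXX.XXX" >> /etc/salt/minion
service salt-minion start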

Even if you have another configuration management solution deployed, depending on what you are trying to accomplish, using Salt for parallel command execution rather than the config management system can often prove a much simpler and more lightweight solution.

Salt is also a great choice for giving other teams access to execute commands on a subset of boxes without requiring them to have shell access to all of the servers. This allows those teams to get their job done without the SysAdmin team becoming a bottleneck.
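This works because targeting is not limited to the '*' wildcard used above; minions can be selected by grain data or by regular expression, so a team can be handed exactly the subset they need. A couple of hedged examples (the grain values are assumptions about your environment):

# Target by grain: all CentOS minions
salt -G 'os:CentOS' test.ping

# Target by regular expression on minion ID
salt -E 'ip-10-136-.*' cmd.run 'uptime'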

Categories: DBA Blogs

Log Buffer #378, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-04 08:43

New technologies, new ideas, and new tips are forthcoming in abundance in numerous blog posts across Oracle, SQL Server, and MySQL. This Log Buffer Edition covers many of the salient ones.

Oracle:

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data.

Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.

The purpose of this article is to describe some of the important foundational concepts of ATG.

You can use Ops Center to perform some very complex tasks. For instance, you might use it to provision several operating systems across your environment, with multiple configurations for each OS.

SQL Server:

SSRS In a Flash – Level 1 in the Stairway to Reporting Services.

The “Numbers” or “Tally” Table: What it is and how it replaces a loop.

Arshad Ali demonstrates granular level encryption in detail and explains how it differs from Transparent Data Encryption (TDE).

There were many new DMVs added in SQL Server 2012, and some that have changed since SQL Server 2008 R2.

There are some aspects of tables in SQL Server that a lot of people get wrong, purely because they seem so obvious that one feels embarrassed about asking questions.

MySQL:

A much awaited release from the MariaDB project is now stable (GA) – MariaDB Galera Cluster 10.0.12.

Failover with the MySQL Utilities: Part 2 – mysqlfailover.

HowTo: Integrating MySQL for Visual Studio with Connector/Net.

Single database backup and restore with MEB.

Externally Stored Fields in InnoDB.

Categories: DBA Blogs

Speedy #em12c template export

DBASolved - Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.  I had this exact problem come up recently at a customer site, between an OEM 11g and an OEM 12c.  In order to move the templates, I needed to export the multiple monitoring templates using EMCLI.  The command that I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work.  If you have more than one template to move, the easiest thing to do is to run the EMCLI command in a script.  This is the beauty of EMCLI: the ability to interact with OEM at the command line and use it in scripts for repeated executions.  Below is a script that I wrote to export templates based on target_types.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.
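For example, a quick query run against the repository lists the valid values:

select target_type from sysman.em_target_types order by target_type;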


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 # Skip header and blank lines that have no target type column
 next unless defined $columns[2];

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');
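On the destination OEM, the exported XML files can then be loaded with the matching import verb; a hedged one-liner (check "emcli help import_template" for the exact options in your release):

./emcli import_template -files="/tmp/<template name>.xml"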

If you would like to learn more about EMCLI and other ways to use it have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Partner Webcast - Oracle Coherence & Weblogic Server: Close Integration of Application & Data Grid Tier

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data. The key...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Global FY15 Global Partner Kickoff

Last week, on the 25th and 26th of June, Oracle PartnerNetwork held the FY15 Global Partner Kickoff, where you, hopefully, got to meet Oracle executives, including Rich Geraffo, SVP, Worldwide Alliances...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Welcome the New Oracle Weblogic Server 12.1.3 Release

If you remember, last summer Oracle released Oracle WebLogic Server 12.1.2, which introduced a lot of new enterprise features such as Dynamic Clusters, Elastic JMS, Coherence Managed Servers,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Modernizing Oracle Forms for the Cloud era

Oracle Forms is one of the most widely used tools for building applications for the Oracle database. Many organizations still run enterprise Oracle Forms applications created in the 90s, leading in...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Hot off the press : Latest Release of Oracle Enterprise Manager 12c (R4)

Pankaj Chandiramani - Tue, 2014-06-03 06:53

Read more here about the press release: Oracle Delivers Latest Release of Oracle Enterprise Manager 12c


Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption


In the coming weeks, I will be covering topics like:



  1. DbaaS Service Catalog incorporating High Availability and Disaster Recovery

  2. New Rapid Start kit

  3. Other new features


Stay Tuned !

Categories: DBA Blogs

Interesting info-graphics on Data-center / DB-Manageability

Pankaj Chandiramani - Mon, 2014-05-19 04:21





Categories: DBA Blogs

Tackling the challenge of Provisioning Databases in an agile datacenter

Pankaj Chandiramani - Wed, 2014-05-14 01:03

One of the key tasks that a DBA performs repeatedly is provisioning of databases, which also happens to be one of the top 10 database challenges as per the IOUG Survey.

Most of the challenge comes in the form of either a lack of standardization or the process being long and error-prone. This is where Enterprise Manager 12c can help, by making provisioning a standardized process using profiles and lockdowns, plus role and access separation, where a lead DBA can lock certain properties of a database (like character set, Oracle Home location, or SGA) so that junior DBAs can't change them during provisioning. The image below describes the solution:



In short:

  • It's fast

  • It's easy

  • And you have complete control over the lifecycle of your dev and production resources.


I actually wanted to show step-by-step details on how to provision an 11.2.0.4 RAC using the Provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:


Patch and Provision in EM12c: #5 Provision a Real Application Cluster Database


Other Resources:


Official Doc : http://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_db_overview.htm#CJAJCIDA


Screen Watch : https://apex.oracle.com/pls/apex/f?p=44785:24:112210352584821::NO:24:P24_CONTENT_ID%2CP24_PREV_PAGE:5776%2C1


Others : http://www.oracle.com/technetwork/oem/lifecycle-mgmt-495331.html?ssSourceSiteId=ocomen



Categories: DBA Blogs

Nationwide Deploys Database Applications 600% Faster

Pankaj Chandiramani - Mon, 2014-04-28 03:37

Heath Carfrey of Nationwide, a leading global insurance and financial services organization, discusses how Nationwide saves time and effort in database provisioning with Oracle Enterprise Manager.


Key points:



  1. Provisioning databases using profiles (aka gold images)

  2. Automated patching

  3. Config/compliance tracking




Categories: DBA Blogs

EMCLI setup

Pankaj Chandiramani - Mon, 2014-04-28 02:15

A quick note on how to install EMCLI, which is used for various CLI operations with EM. I was looking to test some database provisioning automation via EMCLI, and thus needed to set it up.


EMCLI Setup
To set up EMCLI on the host, follow these steps:
1.    Download emcliadvancedkit.jar from the OMS using the URL https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
2.    Set your JAVA_HOME environment variable and ensure that it is part of your PATH. You must be running Java 1.6.0_43 or greater. For example:
      setenv JAVA_HOME /usr/local/packages/j2sdk
      setenv PATH $JAVA_HOME/bin:$PATH
3.    You can install EMCLI with the scripting option in any directory, either on the same machine on which the OMS is running or on any machine on your network (download emcliadvancedkit.jar to that machine):
      java -jar emcliadvancedkit.jar client -install_dir=<emcli client dir>
4.    Run "emcli help sync" from the EMCLI home (the directory where you installed EMCLI) for instructions on how to use the "sync" verb to configure the client for a particular OMS.
5.    Navigate to the Setup menu, then Command Line Interface, and see the Enterprise Manager Command Line Tools Download page for details on setting up EMCLI.
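Putting the steps together, a typical session looks roughly like this (the OMS host, port, and install directory are placeholders; shown with bash-style exports rather than the setenv syntax above):

export JAVA_HOME=/usr/local/packages/j2sdk
export PATH=$JAVA_HOME/bin:$PATH

# Steps 1 and 3: download the kit from the OMS and install the client
curl -k -O https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
java -jar emcliadvancedkit.jar client -install_dir=/home/oracle/emcli

# Step 4: point the client at a particular OMS and sync the verbs
cd /home/oracle/emcli
./emcli setup -url=https://<omshost>:<omsport>/em -username=sysman -trustall
./emcli sync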



Categories: DBA Blogs

Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone

Pankaj Chandiramani - Thu, 2014-04-17 04:02

Since the demands from the business for IT services are non-stop, creating copies of production databases in order to develop, test, and deploy new applications can be labor intensive and time consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.


Join us for an in-depth technical webcast and learn how Snap Clone, a capability of the Oracle Cloud Management Pack for Oracle Database, can fundamentally improve the efficiency and agility of administrators and QA engineers, while saving CAPEX on storage. Benefits include:



  • Agile provisioning (~2 minutes to provision a 1 TB database)

  • Over 90% storage savings

  • Reduced administrative overhead from integrated lifecycle management


Register Now!


April 24 — 10:00 a.m. PT | 1:00 p.m. ET

May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET

May 22 — 10:00 a.m. PT | 1:00 p.m. ET





Categories: DBA Blogs
