
DBA Blogs

Salt Stack for Remote Parallel Execution of Commands

Pythian Group - Mon, 2014-07-07 11:08

There are many scenarios when a SysAdmin has to do a “box walk” of the entire infrastructure to execute a command across many servers. This is universally accepted as one of the less glamorous parts of our job. The larger the infrastructure, the longer these box walks take, and the greater chance that human error will occur.

Even giving this task to a junior resource, as is often the case, is not sustainable as the infrastructure grows, and does not represent the best value to the business in terms of resource utilization. Additionally, too much of this type of “grind” work can demoralize even the most enthusiastic team member.

Thankfully, the days of having to do these box walks are over. Thanks to configuration management and infrastructure automation tools, the task has been automated and no longer requires the investment of time by a human SysAdmin that it once did. At a very high level, these tools allow you to offload this repetitive work, with the computer doing the heavy lifting for you.

 

Introducing Salt Stack

Salt Stack is a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria. Salt Stack is also a configuration management system in its own right, but this post will focus on Salt from a “Command and Control” point of view.

Salt has two main components: the “salt master” (server) and the “salt minions” (clients). Once a minion has been accepted by the master, commands can be executed on it directly from the central salt master server.

Once you have installed your packages, the minion needs to be configured to know where its master is. This can be accomplished through a DNS or hosts-file entry, or by setting the variable in the /etc/salt/minion config:


master: XXX.XXX.XXX.XXX

Where “XXX.XXX.XXX.XXX” is the IP address of your master server. Once that is done and the salt-minion service has been started, the minion will generate a key pair and ship its public key back to the master to ensure all communication is secure.

The master must accept the key from the minion before any control can begin.


# Listing the Keys

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Rejected Keys:

# Adding The Key

[root@ip-10-154-193-216 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Proceed? [n/Y] y
Key for minion ip-10-136-76-163.ec2.internal accepted.

# Nailed It! Now the Master can control the Minion!

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
ip-10-136-76-163.ec2.internal
Unaccepted Keys:
Rejected Keys:

Note: Not Shown – I added a 2nd Minion

Now that your master has minions the fun begins. From your master you can now query information from your minions such as disk space:


[root@ip-10-154-193-216 ~]# salt '*' disk.percent

ip-10-136-76-163.ec2.internal:
----------
/:
15%
/dev/shm:
0%
ip-10-147-240-208.ec2.internal:
----------
/:
14%
/dev/shm:
0%

You can also execute remote commands, such as checking service status and restarting services.


[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond status"

ip-10-136-76-163.ec2.internal:
crond (pid 1440) is running...
ip-10-147-240-208.ec2.internal:
crond (pid 1198) is running...

[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond restart"
ip-10-136-76-163.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]
ip-10-147-240-208.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]

These are only the most basic use cases for what Salt Stack can do, but even from these examples it is clear that Salt can become a powerful tool that reduces the potential for human error and increases the efficiency of your SysAdmin team.

By implementing configuration management and infrastructure automation tools such as Salt Stack, you can free up your team members' time for higher-value work that delivers more to the business.

Salt Stack (depending on your setup) can be deployed in minutes. On RHEL/CentOS/Amazon Linux using the EPEL repo, I was able to be up and running with Salt in about 5 minutes on the 3 nodes I used for the examples in this post. Salt can be deployed using another configuration management tool, it can be baked into your provisioning environment or base images, or, if all else fails, you can (ironically) do a box walk to install the package on your existing servers.
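For reference, here is a minimal sketch of that EPEL-based install (a hedged example; the package and service names are the EPEL ones for EL6-style systems and may differ on your platform, and XXX.XXX.XXX.XXX is the master's IP as above):

# On the master
yum install -y salt-master
service salt-master start

# On each minion: install, point it at the master, and start the service
yum install -y salt-minion
echo "master: XXX.XXX.XXX.XXX" >> /etc/salt/minion
service salt-minion start

From there it is just a matter of accepting the new minion keys with salt-key -A, as shown earlier.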

Even if you have another configuration management solution deployed, using Salt for parallel command execution rather than the config management system can often prove a much simpler and more lightweight solution, depending on what you are trying to accomplish.

Salt is also a great choice of tool for giving other teams the ability to execute commands on a subset of boxes without requiring them to have shell access to all of the servers. This allows those teams to get their job done without the SysAdmin team becoming a bottleneck.
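As a hedged sketch of that kind of scoping (the “roles” grain and the hostnames below are illustrative assumptions, not part of the examples above), targets can be selected by grain or by an explicit list instead of the “*” glob:

# Only minions whose "roles" grain includes "webserver"
salt -G 'roles:webserver' cmd.run 'uptime'

# Or an explicit list of minions
salt -L 'web01.example.com,web02.example.com' service.status crond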

Categories: DBA Blogs

Log Buffer #378, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-04 08:43

New technologies, new ideas, and new tips are forthcoming in abundance in numerous blog posts across Oracle, SQL Server, and MySQL. This Log Buffer Edition covers many of the salient ones.

Oracle:

Whether you use a single OEM and are migrating to a new one, or have multiple OEMs, the need to move templates between environments will arise.

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data.

Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.

The purpose of this article is to describe some of the important foundational concepts of ATG.

You can use Ops Center to perform some very complex tasks. For instance, you might use it to provision several operating systems across your environment, with multiple configurations for each OS.

SQL Server:

SSRS In a Flash – Level 1 in the Stairway to Reporting Services.

The “Numbers” or “Tally” Table: What it is and how it replaces a loop.

Arshad Ali demonstrates granular level encryption in detail and explains how it differs from Transparent Data Encryption (TDE).

There were many new DMVs added in SQL Server 2012, and some that have changed since SQL Server 2008 R2.

There are some aspects of tables in SQL Server that a lot of people get wrong, purely because they seem so obvious that one feels embarrassed about asking questions.

MySQL:

A much awaited release from the MariaDB project is now stable (GA) – MariaDB Galera Cluster 10.0.12.

Failover with the MySQL Utilities: Part 2 – mysqlfailover.

HowTo: Integrating MySQL for Visual Studio with Connector/Net.

Single database backup and restore with MEB.

Externally Stored Fields in InnoDB.

Categories: DBA Blogs

Speedy #em12c template export

DBASolved - Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new one, or have multiple OEMs, the need to move templates between environments will arise.  I had this exact problem come up recently at a customer site between an OEM 11g and an OEM 12c.  In order to move the templates, I needed to export the multiple monitoring templates using EMCLI.  The command that I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work.  If you have more than one template to move, the easiest thing to do is to run the EMCLI command from a script.  This is the beauty of EMCLI: the ability to interact with OEM at the command line and use it in scripts for repeated executions.  Below is a script that I wrote to export templates based on target_types.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.
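For instance, a quick query against the repository will list them; this is a hedged sketch (the TARGET_TYPE column name and the connection details are assumptions):

echo "select distinct target_type from sysman.em_target_types;" | \
  sqlplus -s sysman/<password>@<repository_db>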


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');
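Running the script is then one invocation per target type. A hedged usage example (oracle_database is just an illustrative target type):

perl ./emcli_export_templates.pl oracle_database

The exported XML files land in /tmp, one per template, ready to be moved to the new environment and loaded with the matching emcli import_template verb.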

If you would like to learn more about EMCLI and other ways to use it, have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Partner Webcast - Oracle Coherence & Weblogic Server: Close Integration of Application & Data Grid Tier

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data. The key...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Restore datafile from service: A cool #Oracle 12c Feature

The Oracle Instructor - Wed, 2014-07-02 09:02

You can restore a datafile directly from a physical standby database to the primary. Over the network. With compressed backupsets. How cool is that?

Here’s a demo from my present class Oracle Database 12c: Data Guard Administration. prima is the primary database on host01, and physt is a physical standby database on host03. There is an Oracle Net configuration on both hosts that enables host01 to tnsping physt and host03 to tnsping prima.

 

[oracle@host01 ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Jul 2 16:43:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2084081935)

RMAN> run
{
set newname for datafile 4 to '/home/oracle/stage/users01.dbf';
restore (datafile 4 from service physt) using compressed backupset;
catalog datafilecopy '/home/oracle/stage/users01.dbf';
}

executing command: SET NEWNAME

Starting restore at 02-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=47 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service physt
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /home/oracle/stage/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 02-JUL-14

cataloged datafile copy
datafile copy file name=/home/oracle/stage/users01.dbf RECID=8 STAMP=851877850

This does not require backups taken on the physical standby database.
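As a follow-up, here is a hedged sketch (not part of the demo above) of what the repair could look like if datafile 4 had actually been lost on the primary and kept its original location: take it offline, pull it over the network from the standby, recover it, and bring it back online.

rman target sys/oracle@prima <<'EOF'
run
{
  sql 'alter database datafile 4 offline';
  restore (datafile 4 from service physt) using compressed backupset;
  recover datafile 4;
  sql 'alter database datafile 4 online';
}
EOF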


Tagged: 12c New Features, Backup & Recovery, Data Guard
Categories: DBA Blogs

What’s New with Apps Password Change in R12.2 E-Business Suite ?

Pythian Group - Wed, 2014-07-02 08:39

The apps password change routine in E-Business Suite Release 12.2 has changed a little bit. We now have extra options for changing the password, as well as some manual steps to perform after changing the password using FNDCPASS.

A new utility called AFPASSWD has been introduced. Unlike FNDCPASS, it won't require you to enter the apps and system user passwords, which makes it possible to separate duties between the database administrator and the application administrator. In most cases both of these roles are handled by the same DBA, but in large organizations there may be different teams managing the database and the application. You can read about the different options available in AFPASSWD in the EBS maintenance guide.
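For comparison, the classic FNDCPASS call to change APPLSYS takes both sets of credentials right on the command line; a hedged example with placeholder passwords:

FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> SYSTEM APPLSYS <new_applsys_pwd>

AFPASSWD, by contrast, does not take the apps and system passwords on the command line, which is what enables the separation of duties described above.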

Whether you use FNDCPASS or AFPASSWD to change the APPLSYS/APPS password, you must also perform some additional steps. This is because in R12.2 the old AOL/J connection pooling is replaced with a WebLogic connection pool (JDBC data source).  Currently this procedure is not yet automated; it would be good if it could be automated using some WLS scripting.

  • Shut down the application tier services
  • Change the APPLSYS password, as described for the utility you are using.
  • Start AdminServer using the adadminsrvctl.sh script from your RUN filesystem
  • Do not start any other application tier services.
  • Update the “apps” password in WLS Datasource as follows:
    • Log in to WLS Administration Console.
    • Click Lock & Edit in Change Center.
    • In the Domain Structure tree, expand Services, then select Data Sources.
    • On the “Summary of JDBC Data Sources” page, select EBSDataSource.
    • On the “Settings for EBSDataSource” page, select the Connection Pool tab.
    • Enter the new password in the “Password” field.
    • Enter the new password in the “Confirm Password” field.
    • Click Save.
    • Click Activate Changes in Change Center.
  • Start all the application tier services using the adstrtal.sh script.

I will be posting more of these “What's new with R12.2” articles in the future. Post your experiences changing passwords in Oracle EBS in the comments section. I will be happy to hear your stories and give my input.

Categories: DBA Blogs

Essential Hadoop Concepts for Systems Administrators

Pythian Group - Wed, 2014-07-02 08:38

Of course, everyone knows Hadoop as the solution to Big Data. What’s the problem with Big Data? Well, mostly it’s just that Big Data is too big to access and process in a timely fashion on a conventional enterprise system. Even a really large, optimally tuned, enterprise-class database system has conventional limits in terms of its maximum I/O, and there is a scale of data that outstrips this model and requires parallelism at a system level to make it accessible. While Hadoop is associated in many ways with advanced transaction processing pipelines, analytics and data sciences, these applications are sitting on top of a much simpler paradigm… that being that we can spread our data across a cluster and provision I/O and processor in a tunable ratio along with it. The tune-ability is directly related to the hardware specifications of the cluster nodes, since each node has processing, I/O and storage capabilities in a specific ratio. At this level, we don’t need Java software architects and data scientists to take advantage of Hadoop. We’re solving a fundamental infrastructure engineering issue, which is “how can we scale our I/O and processing capability along with our storage capacity”? In other words, how can we access our data?

The Hadoop ecosystem at its core is simply a set of RESTfully interacting Java processes communicating over a network. The base system services, such as the data node (HDFS) and task tracker (MapReduce), run on each node in the cluster, register with an associated service master, and execute assigned tasks in parallel that would normally be localized on a single system (such as reading some data from disk and piping it to an application or script). The result of this approach is a loosely coupled system that scales in a very linear fashion. In real life, the service masters (famously, NameNode and JobTracker) are a single point of failure and potential performance bottleneck at very large scales, but much has been done to address these shortcomings. In principle, Hadoop uses the MapReduce algorithm to extend parallel execution from a single computer to an unlimited number of networked computers.

MapReduce is conceptually a very simple system. Here’s how it works. Given a large data set (usually serialized), broken into blocks (as for any filesystem) and spread among the HDFS cluster nodes, feed each record in a block to STDIN of a local script, command or application, and collect the records from STDOUT that are emitted. This is the “map” in MapReduce. Next, sort each record by key (usually just the first field in a tab-delimited record emitted by the mapper, but compound keys are easily specified). This is accomplished by fetching records matching each specific key over the network to a specific cluster node, and accounts for the majority of network I/O during a MapReduce job. Finally, process the sorted record sets by feeding the ordered records to STDIN of a second script, command or application, collecting the result from STDOUT and writing them back to HDFS. This is the “reduce” in MapReduce. The reduce phase is optional, and usually takes care of any aggregations such as sums, averages and record counts. We can just as easily pipe our sorted map output straight to HDFS.

Any Linux or Unix systems administrator will immediately recognize that using STDIO to pass data means that we can plug any piece of code into the stream that reads and writes to STDIO… which is pretty much everything! To be clear on this point, Java development experience is not required. We can take advantage of Linux pipelines to operate on very large amounts of data. We can use ‘grep’ as a mapper. We can use the same pipelines and commands that we would use on a single system to filter and process data that we’ve stored across the cluster. For example,

grep -i "${RECORD_FILTER}" | cut -f2 | cut -d'=' -f2 | tr '[:upper:]' '[:lower:]'

We can use Python, Perl and any other languages with support configured on the task tracker systems in the cluster, as long as our scripts and applications read and write to STDIO. To execute these types of jobs, we use the Hadoop Streaming jar to wrap the script and submit it to the cluster for processing.
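A hedged sketch of such a streaming job, using a simple grep as the mapper (the hadoop-streaming jar location and the HDFS paths are assumptions and will vary by distribution):

# Map-only filter job: the mapper is an ordinary shell command
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -input  /data/weblogs \
  -output /data/weblogs-filtered \
  -mapper "grep -i error" \
  -reducer NONE

Specifying -reducer NONE gives the optional, map-only case described above; any command or script that reads and writes STDIO can be swapped in as the mapper or reducer.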

What does this mean for us enterprise sysadmins? Let’s look at a simple, high level example. I have centralized my company’s log data by writing it to a conventional enterprise storage system. There’s lots of data and lots of people want access to it, including operations teams, engineering teams, business intelligence and marketing analysts, developers and others. These folks need to search, filter, transform and remodel the data to shake out the information they’re looking for. Conventionally, I can scale up from here by copying my environment and managing two storage systems. Then 4. Then 8. We must share and mount the storage on each system that requires access, organize the data across multiple appliances and manage access controls on multiple systems. There are many business uses for the data, and so we have many people with simultaneous access requirements, and they’re probably using up each appliance’s limited I/O with read requests. In addition, we don’t have the processor available… we’re just serving the data at this point, and business departments are often providing their own processing platforms from a limited budget.

Hadoop solves this problem of scale above the limits of conventional enterprise storage beautifully. It’s a single system to manage that scales in a practical way to extraordinary capacities. But the real value is not the raw storage capacity or advanced algorithms and data analytics available for the platform… it’s about scaling our I/O and processing capabilities to provide accessibility to the data we’re storing, thereby increasing our ability to leverage it for the benefit of our business. The details of how we leverage our data is what we often leave for the data scientists to figure out, but every administrator should know that the basic framework and inherent advantages of Hadoop can be leveraged with the commands and scripting tools that we’re already familiar with.

Categories: DBA Blogs

Differences Between R12.1 and R12.2 Integration with OAM

Pythian Group - Wed, 2014-07-02 08:37

With the revamp of the technology stack in Oracle E-Business Suite (EBS) R12.2, the way we integrate Oracle Access Manager (OAM) has changed. R12.2 is now built on a WebLogic tech stack, which drastically changes how it integrates with other Fusion Middleware products like OAM.

Here is an overview of the steps to configure OAM with EBS R12.1:

  • Install Oracle HTTP Server (OHS) 11g
  • Deploy & Configure Webgate on OHS 11g
  • Install Weblogic
  • Deploy & Configure Accessgate on Weblogic
  • Integrate Webgate, Accessgate with EBS and OAM/OID

R12.2 has both OHS and WebLogic built in, so we no longer have to install OHS and WebLogic for Webgate and Accessgate. All we have to do is deploy and configure Webgate and Accessgate.  Webgate is deployed on top of the R12.2 OHS 11g home, and Accessgate is deployed as a separate managed server (oaea_server1) on top of the R12.2 WebLogic.

Here is a pictorial version of the same:

R12.1 and 11i EBS integration with OAM/OID

[Image: 11iand12_Reference_Architecture]

 

R12.2 Integration with OAM/OID

[Image: 12.2_Reference_Architecture_2]

Basically, R12.2 reduces the number of moving parts in the OAM integration with EBS. It saves DBAs a lot of time, as it reduces the number of servers to manage.

References:

Integrating Oracle E-Business Suite Release 12.2 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1576425.1)

Integrating Oracle E-Business Suite Release 12 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1484024.1)

Images are courtesy of Oracle from note “Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Doc ID 1388152.1)”

 

Categories: DBA Blogs

Oracle Global FY15 Global Partner Kickoff

Last week, on the 25th and 26th of June, Oracle PartnerNetwork had the FY15 Global Partner Kickoff where you, hopefully, got to meet Oracle executives, including Rich Geraffo, SVP, Worldwide Alliances...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Configure shared storage for #em12c Business Intelligence Publisher (BIP)

DBASolved - Tue, 2014-07-01 07:52

Oracle Enterprise Manager 12c Release 4 has a lot of new features; however, I quickly want to focus on a feature that has been in OEM12c for a while.  This feature is Business Intelligence Publisher (BIP).  BIP has been a part of OEM12c since it was initially released; at first it was a bit cumbersome to get it installed. With the release of 12.1.0.4, the OEM team has done a great job of making the process a lot easier. Although this post is not directly about BIP installation, just understand that the process is easier and details can be found here.

What I want to focus on is how to configure BIP, once installed, to use shared storage.  I don’t recall whether shared storage was required in earlier versions of OEM12c; however, if you want to share BIP reports between OMS nodes in a high-availability configuration, a shared location is required.  The initial directions for reconfiguring BIP for shared storage can be found here.

In order to allow multiple OMS nodes to support multiple BIP servers, the following command needs to be run:


emctl config oms -bip_shared_storage -config_volume <directory location> -cluster_volume <directory location>

Note: The directory location supplied for the shared location has to be accessible by both OMS nodes.


Example:

emctl config oms -bip_shared_storage -config_volume /oms/BIP/config -cluster_volume /oms/BIP/cluster

When the reconfiguration of BIP begins, you will be asked for the Admin User’s password (WebLogic user) and the SYSMAN password.  Supply these and then wait for the script to complete. Once it completes, the CONFIG and CLUSTER directories for BIP will be moved to the location specified.

The new directory locations can be verified from the BIP web page under Administration -> Server Configuration.

In the end, reconfiguring BIP to use shared storage is quite simple.

Enjoy!

twitter: @dbasolved

blog: https://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

How The Oracle Database Determines Wait Time When It's Not Set

Have you ever wondered how the Oracle Database 12c (and earlier versions) determines the wait time when it has absolutely no control over how long the wait will take? If so, then read on!

The Back Story
Using wait time is part of an Oracle Time Based Analysis (OTBA). While Oracle process CPU consumption is a big part of the analysis, the other category is non-idle wait time.

You can see the two categories of time clearly "in action" with my Real-Time Session Sampler script, rss.sql. It's part of my OraPub System Monitor (OSM) toolkit, which can be downloaded for free. Here's an example of the output:


I need to  explain a little more about CPU time and then Oracle wait time.

An Oracle process wants to burn CPU. Without consuming CPU, an Oracle process can, well... not process work! Oracle keeps track of the CPU consumption time. It's the actual CPU time consumed, for example, 500 ms or 3 seconds.

When an Oracle process can not burn CPU, the process, in Oracle terms, must wait. When an Oracle process waits, it's as if it yells out details about why it's waiting. We call this yell a wait event. Each wait event has a name, and the name provides clues about why the process can't burn CPU and is therefore waiting.

There Are Different Reasons Why An Oracle Process Waits
There are three broad categories Oracle must time when a process is not consuming CPU:
  1. When the wait time is predetermined and not interruptible. Perhaps when a latch can not be acquired through repeated attempts, so the process takes a break (i.e., sleeps) for a fixed period of time, say 10ms.
  2. When the wait time is predetermined but the process can be woken by another process. Perhaps a log writer is in the middle of its three second sleep and then a server process commits. The log writer will be signaled to wake and do some work.
  3. When Oracle has no idea how long the wait may last. Perhaps a process submitted a block to the IO subsystem for a synchronous read. Oracle has no idea how long this may take. This situation is what I'm focusing on in this posting and what I show in the video below.
Exploring When Oracle Has No Idea How Long The Wait Will Take
Let's say when I work it's like an Oracle process consuming CPU. And if I have to stop working to drive to a meeting, it's like an Oracle process waiting. And in this situation,  I really don't know how long the wait will take. It's out of my control. There could be an accident along the way (think: table level lock)! An Oracle process can experience this same kind of thing.

If I'm an Oracle server process and I discover a block I need is not in the buffer cache, I'm going to need to make a call to the OS for that block. When I make the synchronous IO call, I really do not know how long it will take and I have to wait, that is, the Oracle process must wait.

From a DBA perspective, when I perform an Oracle Time Based Analysis (OTBA) I need to know how long the IO, that is, the wait, took. How does Oracle figure this out? It's pretty simple actually. It's basically like this:
  • Get the current time, start time
  • Make the synchronous IO call and wait until IO received
  • Get the current time, end time
  • Calculate the delta, end time - start time
The "delta" is the wait time for the single block read. If the single block read is a synchronous read and then placed into the buffer cache, Oracle will tag or name the wait time calling it a "db file sequential read".

Actually Watching An Oracle Process Figuring Out the Wait Time
Some say that seeing is believing. For sure it helps one to learn quickly. If you want to see with your own eyes an Oracle server process determining the wait time for a multiple block synchronous read (the event name is db file scattered read), watch the video below.



Pretty cool, eh? By the way, timing processes and events like this has a special name: instrumentation. As we can see, Oracle has instrumented its kernel code.

Enjoy your work and thanks for reading!

Craig.
https://resources.orapub.com/OraPub_Online_Seminars_About_Oracle_Database_Performance_s/100.htm
You can watch seminar introductions (like above) for free on YouTube!
If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. For details go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.

Categories: DBA Blogs

Welcome the New Oracle Weblogic Server 12.1.3 Release

If you remember, last summer Oracle released Oracle WebLogic Server 12.1.2, which introduced a lot of new enterprise features such as Dynamic Clusters, Elastic JMS, Coherence Managed Servers,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Penetration Testing A Hands-On Introduction to Hacking

Surachart Opun - Sun, 2014-06-29 03:47
Assessing overall security on a new system before it goes online is also a good idea. It's useful to find holes before somebody else does, to verify secure configurations, and to test. Penetration testing is the process of attempting to gain access to resources without knowledge of credentials and to find security weaknesses (interesting paper).
Penetration Testing: A Hands-On Introduction to Hacking by Georgia Weidman is a book about the basics of penetration testing. It presents concepts, ideas, and techniques in five parts: The Basics, Assessments, Attacks, Exploit Development, and Mobile Hacking.
  • Crack passwords and wireless network keys with brute-forcing and wordlists
  • Test web applications for vulnerabilities
  • Use the Metasploit Framework to launch exploits and write your own Metasploit modules
  • Automate social-engineering attacks
  • Bypass antivirus software
  • Turn access to one machine into total control of the enterprise in the post exploitation phase
First of all, readers must set up their virtual lab using Kali Linux. The book gives lots of ideas and examples of how to use tools for penetration testing. It's a very good book for people who are new to penetration testing. It might not cover everything about penetration testing or go into the deepest detail, but it helps readers understand penetration testing easily and practice with the examples.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

The Hitchhiker’s Guide to the EXPLAIN PLAN: The story so far (Part 1–10)

Iggy Fernandez - Sat, 2014-06-28 15:13
On the Toad World site, I’m writing a series of blog posts and articles on the subject of EXPLAIN PLAN. I’m using EXPLAIN PLAN as a motif to teach not just SQL tuning but also relational theory, logical database design, and physical database design. In a year’s time, I hope to have enough material for a […]
Categories: DBA Blogs

long project nearing completion at work and a good book available

Grumpy old DBA - Sat, 2014-06-28 08:02
For any long time fans of the grumpy old dba blog, you may remember I changed jobs in June of 2013.  Sometimes changing jobs allows you to slowly come up to speed in a new environment but the last change involved me jumping right into a project that was already kind of rolling.

It has been a long year on the same project and a lot of time involved.  Not exactly what I was expecting in this change but hey I am old and grumpy right?  I shoulda known eh?

Anyhow, just as our project was about to be implemented this weekend, it has been temporarily put on hold (long story, cannot go into details).  One gets kind of ready for something to finally go live ... and then, like Charlie Brown with the football, it gets yanked away from you?  (Sorry, apologies, this will mean more to people who have watched the Peanuts television specials.)

On the good side ... I just received Christian Antognini's Troubleshooting Oracle Performance book (new 2nd edition) and I have started wading into that.

I recommend this book very much!

 
Categories: DBA Blogs

Editor’s Choice award at ODTUG Kscope14: NoSQL and Big Data for the Oracle Professional

Iggy Fernandez - Sat, 2014-06-28 07:42
My paper on NoSQL and Big Data won the Editor’s Choice award at ODTUG Kscope14. Here are some key points from the paper: The relational camp made serious mistakes that limited the performance and usefulness of the relational model. NoSQL is based on the incorrect premise that tables in the relational model must be mapped to […]
Categories: DBA Blogs

vCPU sharing in EC2: HVM to the rescue?

Pythian Group - Fri, 2014-06-27 07:51

I’ve been doing some testing to clarify what a vCPU in Amazon Web Services actually is. Over the course of the testing, I experienced inconsistent results on a 2-thread test on a 4-vCPU m3.xlarge system, due to the mislabeling of the vCPUs as independent single-core processors by the Linux kernel. This issue manifests itself in a CPU-bound, multithreaded workload where there is idle CPU time.

My test environment used a paravirtualized (PV) kernel, which moves some of the virtualization logic into the Linux kernel, reducing the need for high-overhead hardware emulation. One drawback is that the kernel cannot be modified to, for example, resolve the CPU mislabeling. But there is an alternative: an HVM system relying on virtualization extensions in the CPU hardware and allowing custom kernels or even non-Linux operating systems to run. Historically the drawback has been a performance hit, though I read a very interesting post from Brendan Gregg’s blog indicating that what’s called HVM in Amazon EC2 is actually a hybrid of PV and HVM, combining aspects of both. A test run by Phoronix on EC2 showed HVM performance on par with PV, and in some cases even better. So it definitely seems worth repeating my earlier tests.

As before, I fire up an instance, but this time using the latest HVM Amazon Linux image:

$ ec2-describe-images ami-76817c1e -H
Type    ImageID Name    Owner   State   Accessibility   ProductCodes    Architecture    ImageType       KernelId        RamdiskId Platform        RootDeviceType  VirtualizationType      Hypervisor
IMAGE   ami-76817c1e    amazon/amzn-ami-hvm-2014.03.2.x86_64-ebs        amazon  available       public          x86_64  machine                           ebs     hvm     xen
BLOCKDEVICEMAPPING      /dev/xvda               snap-810ffc56   8
$ ec2-run-instances ami-76817c1e -k marc-aws --instance-type m3.xlarge --availability-zone us-east-1d
RESERVATION     r-a4f480da      462281317311    default
INSTANCE        i-c5d5b6ef      ami-76817c1e                    pending marc-aws        0               m3.xlarge       2014-06-23T19:02:18+0000  us-east-1d                              monitoring-disabled                                     ebs                                       hvm     xen             sg-5fc61437     default

Checking in on CPUs:

[ec2-user@ip-10-145-187-117 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2593.949
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2593.949
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2593.949
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2593.949
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

It’s the same 2.6GHz E5-2670 processor, but it is reported as a single-socket, non-hyperthreaded, quad-core processor. That is still not the dual-core hyperthreaded slice we’re actually getting, though.

Time to run a few tests.

[ec2-user@ip-10-145-187-117 ~]$ taskset -pc 0 $$
pid 1768's current affinity list: 0-3
pid 1768's new affinity list: 0
[ec2-user@ip-10-145-187-117 ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 18.1955 s, 119 MB/s
[ec2-user@ip-10-145-187-117 ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 36.4968 s, 59.5 MB/s
2170552320 bytes (2.2 GB) copied, 36.506 s, 59.5 MB/s

In the same range as with PV, but also 1-2% slower, meaning we’re seeing a small amount of HVM overhead. Let’s try across processors:

[ec2-user@ip-10-145-187-117 ~]$ taskset -pc 0,1 $$
pid 1768's current affinity list: 0
pid 1768's new affinity list: 0,1
[ec2-user@ip-10-145-187-117 ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 27.8401 s, 78.0 MB/s
2170552320 bytes (2.2 GB) copied, 27.8398 s, 78.0 MB/s
[ec2-user@ip-10-145-187-117 ~]$ taskset -pc 0,2 $$
pid 1768's current affinity list: 0,1
pid 1768's new affinity list: 0,2
[ec2-user@ip-10-145-187-117 ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 18.1849 s, 119 MB/s
2170552320 bytes (2.2 GB) copied, 18.2014 s, 119 MB/s

Again, a tiny bit slower than with PV. To test variability, I’ll kick off 20 consecutive runs, and print a histogram of output:

[ec2-user@ip-10-145-187-117 ~]$ taskset -pc 0-3 $$
pid 1768's current affinity list: 0,2
pid 1768's new affinity list: 0-3
[ec2-user@ip-10-145-187-117 ~]$ for run in {1..20}; do
>  for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
>  wait
> done
...
[ec2-user@ip-10-145-187-117 ~]$ cat output | awk '/bytes/ {print $8,$9}' | sort -n | uniq -c
      1 113 MB/s
      3 114 MB/s
      4 115 MB/s
      6 116 MB/s
     10 117 MB/s
     10 118 MB/s
      6 119 MB/s

Running between 113 and 119 MB/s per thread: much less variability than before. In chart form:
[Chart: aws-cpu-hvm]

Looking at “top”:

[ec2-user@ip-10-145-187-117 ~]$ cat > ~/.toprc <<-EOF
> RCfile for "top with windows"           # shameless braggin'
> Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
> Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
>         winflags=25913, sortindx=10, maxtasks=2
>         summclr=1, msgsclr=1, headclr=3, taskclr=1
> Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
>         winflags=62777, sortindx=0, maxtasks=0
>         summclr=6, msgsclr=6, headclr=7, taskclr=6
> Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
>         winflags=62777, sortindx=13, maxtasks=0
>         summclr=5, msgsclr=5, headclr=4, taskclr=5
> Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
>         winflags=62777, sortindx=4, maxtasks=0
>         summclr=3, msgsclr=3, headclr=2, taskclr=3
> EOF
[ec2-user@ip-10-145-187-117 ~]$ top -b -n20 -U ec2-user

top - 20:31:51 up 28 min,  2 users,  load average: 1.37, 1.17, 0.63
Tasks:  82 total,   4 running,  78 sleeping,   0 stopped,   0 zombie
Cpu0  : 22.9%us,  0.3%sy,  0.0%ni, 76.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 74.0%us,  3.0%sy,  0.0%ni, 23.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 59.7%us,  4.0%sy,  0.0%ni, 36.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 33.7%us,  2.7%sy,  0.0%ni, 63.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1951 ec2-user  20   0  4444  608  400 R 97.1  0.0   0:08.92 gzip
 1953 ec2-user  20   0  4444  608  400 R 97.1  0.0   0:08.92 gzip                                                         

top - 20:31:54 up 28 min,  2 users,  load average: 1.37, 1.17, 0.63
Tasks:  82 total,   4 running,  78 sleeping,   0 stopped,   0 zombie
Cpu0  : 72.3%us,  4.3%sy,  0.0%ni, 23.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 94.4%us,  5.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 21.3%us,  2.0%sy,  0.0%ni, 76.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1953 ec2-user  20   0  4444  608  400 R 97.1  0.0   0:11.84 gzip
 1951 ec2-user  20   0  4444  608  400 R 96.8  0.0   0:11.83 gzip                                                         

top - 20:31:57 up 28 min,  2 users,  load average: 1.34, 1.17, 0.64
Tasks:  82 total,   3 running,  79 sleeping,   0 stopped,   0 zombie
Cpu0  : 95.3%us,  4.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 92.4%us,  7.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1951 ec2-user  20   0  4444  608  400 R 96.8  0.0   0:14.74 gzip
 1953 ec2-user  20   0  4444  608  400 R 96.8  0.0   0:14.75 gzip                                                         

top - 20:32:00 up 28 min,  2 users,  load average: 1.32, 1.17, 0.64
Tasks:  82 total,   4 running,  78 sleeping,   0 stopped,   0 zombie
Cpu0  : 29.9%us,  1.7%sy,  0.0%ni, 68.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 63.0%us,  3.7%sy,  0.0%ni, 33.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 40.5%us,  2.3%sy,  0.0%ni, 57.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 55.3%us,  3.7%sy,  0.0%ni, 41.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1951 ec2-user  20   0  4444  608  400 R 97.1  0.0   0:17.66 gzip
 1953 ec2-user  20   0  4444  608  400 R 97.1  0.0   0:17.67 gzip

We see that work is split between adjacent CPUs, but that the scheduler is doing a good job of keeping the adjacent CPUs near 100% usage between them.

So based on these tests, it looks like, even though the CPU is still mislabeled, HVM has almost entirely avoided the issue of variability due to shared-core scheduling, at the cost of a small reduction in overall throughput.

Categories: DBA Blogs

Log Buffer #377, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-06-27 07:49

Improvement and progress are all about growth. Pythian just recently grew more by entering into an agreement to acquire Blackbird.io as announced by Paul Vallee. This Log Buffer also adds one more edition to its growing history.

Oracle:

WebLogic 12.1.3 is a major step forward, because SOA Suite 12c, BPM Suite 12c, and all the other Fusion Middleware components will run on it!

The Internet of Things is changing everything. From the way you start your day in the morning, to the way products are manufactured and cities operate their services, machines all around you will collect data, share it with each other, and use it to recommend and even make decisions along the way.

Pivotal Cloud Foundry Installed lets create an ORG / USER to get started

An in-line sub process can best be thought of as a callable scope. It is a scope that is not part of the normal process flow, but rather a unit of encapsulated logic that can be called zero, one or multiple times from anywhere within the BPEL process

Do you need to create PDF reports from PL/SQL?

SQL Server:

Unique indexes are the database developer’s responsibility. Non-unique indexes can be more easily maintained directly on the production database by an automated process.

Continuous Delivery is fairly generally understood to be an effective way of tackling the problems of software delivery and deployment by making build, integration and delivery into a routine.

When you are writing TSQL code there are times when you want to perform the same logic over and over again. To accomplish this you can used the WHILE keyword.

What you need for a Multi Subnet Configuration for AlwaysOn in SQL Server 2012

All the wonderful functionality that in-memory tables and natively compiled procedures provide in SQL Server 2014 is pretty cool. But, changes to core of the engine results in changes in things that we may have developed a level of comfort with. Grant Fritchey explains.

MySQL:

Optimistic updates for Edit Data operations in MySQL for Excel

Here are a few videos those of you who are just getting started with Sphinx may find useful.

Harnessing the power of master/slave clusters to operate data-driven businesses on MySQL

Every SELECT from your Python program may acquire a metadata lock!

Why %util number from iostat is meaningless for MySQL capacity planning

Categories: DBA Blogs

Partner Webcast – Modernizing Oracle Forms for the Cloud era

Oracle Forms is one of the most widely used tools for building applications for the Oracle database. Many organizations still run enterprise Oracle Forms applications created in the 90s, leading in...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Sam Alapati’s 12c OCP upgrade book includes test software with key DBA skills

Bobby Durrett's DBA Blog - Thu, 2014-06-26 11:42

Sweet!  I just installed the software from the CD that came with Sam Alapati’s book for the OCP 12c upgrade exam and found that it has questions on the key DBA skills section of the test.  Now I feel good about my test preparation materials.  The first time I took the test I didn’t have this degree of guidance on what to study, especially on the key DBA skills section.

Now I have questions on key DBA skills from the testing software included with Sam Alapati’s book, questions on key DBA skills in the Kaplan Selftest software,  and a large section on key DBA skills in Matthew Morris’ book.

I’ve read Matthew’s book already, and I’ve read the key DBA skills part twice.  Now I’m going to start reading Sam’s book and then move on to the software tests.  All this may only buy me one more correct answer on the test, but I’d rather over-prepare this time instead of leaving it to chance.

- Bobby

Categories: DBA Blogs