
DBA Blogs

Packt - The $5 eBook Bonanza is here!

Surachart Opun - Tue, 2014-12-23 00:38
 The $5 eBook Bonanza is here! Spreading the news for people who are interested in reading IT books: you will be able to get any Packt eBook or video for just $5 until January 6th, 2015. Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Is Oracle Database Time Correct? Something Is Not Quite Right.

Oracle Database tuning and performance analysis is usually based on time. As I blogged HERE, the Oracle "database time" statistic is more interesting than simply "time spent in the database." It is the sum of CPU consumption and non-idle wait time. And elapsed time is the sum of all the database time related to, for example, a session or a SQL statement execution. However...
If you do the fundamental math, you'll notice the numbers don't always add up. In fact, they rarely match. In this posting, I want to demonstrate this mismatch and I want you to see this on your systems!

I'll include experimental data from a number of different systems, the statistical analysis (including pictures) and provide a tool you can download for free from OraPub.com to check out the situation on your systems.
Checking DB Time Math
DB Time is defined as "time spent in the database." This is the sum of Oracle process CPU consumption plus non-idle wait time. Usually we don't derive DB Time. The Time Model Statistics view v$sys_time_model contains the DB Time statistic. But this view also contains the DB CPU statistic. Since there is no sum of non-idle wait time, most people just assume everything is fine.
However, if you run the simple query below on your system, it could look something like this:
SQL> l
1 select db_time_s, db_cpu_s, tot_ni_wt_s
2 from (select value/1000000 db_time_s from v$sys_time_model where stat_name = 'DB time' ),
3 (select value/1000000 db_cpu_s from v$sys_time_model where stat_name = 'DB CPU' ),
4* (select sum(TIME_WAITED_MICRO_FG)/1000000 tot_ni_wt_s from v$system_event where wait_class != 'Idle' )
SQL> /

DB_TIME_S DB_CPU_S TOT_NI_WT_S
---------- ---------- -----------
330165.527 231403.925 119942.952

1 row selected.
If you add up the DB CPU and the total non-idle wait time, the value is 351,346.877. Woops! 351K does not equal 330K. What happened on my Oracle Database 12c (12.1.0.2.0)? As I have demonstrated in this POSTING (which contains videos of this) and in my online seminar training HERE, many times DB Time does nearly equal DB CPU plus the non-idle wait time. But clearly in the above situation something does not seem quite right.

Checking DB Time On Your Systems
To demonstrate the possibility of a DB Time mismatch, I created a simple plsql tool. You can download this free tool or do an OraPub.com search for "db time tool". The tool, which is easily configurable, takes a number of samples over a period of time and displays the output.


Here is an example of the output.

OraPub DB Time Test v1a 26-Sep-2014. Enjoy but use at your own risk.
.
Starting to collect 11 180 second samples now...
All displayed times are in seconds.
.
anonymer Block abgeschlossen  (German for: anonymous block completed)
..........................................................................
... RAW OUTPUT (keep the output for your records and analysis)
..........................................................................
.
sample#, db_time_delta_v , db_cpu_delta_v, tot_ni_wait_delta_v, derived_db_time_delta_v, diff_v, diff_pct_v
.
1, 128,4, 128,254, ,103, 128,357266, ,043, 0
2, 22,014, 3,883, 17,731, 21,614215, ,399, 1,8
3, 1,625, 1,251, ,003, 1,253703, ,371, 22,8
4, 13,967, 12,719, 1,476, 14,194999, -,228, -1,6
5, 41,086, 41,259, ,228, 41,486482, -,4, -1
6, 36,872, 36,466, ,127, 36,593884, ,278, ,8
7, 38,545, 38,71, ,137, 38,847459, -,303, -,8
8, 37,264, 37,341, ,122, 37,463525, -,199, -,5
9, 22,818, 22,866, ,102, 22,967141, -,149, -,7
10, 30,985, 30,614, ,109, 30,723831, ,261, ,8
11, 5,795, 5,445, ,513, 5,958586, -,164, -2,8
.
The test is complete.
.
All displayed times are in seconds.

The output is formatted to make it easy to analyze statistically. The far-right column is the percent difference between the reported DB Time and the calculated DB Time. In the above example, they are pretty close. Get the tool and try it out on your systems.
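
If you just want a quick one-off check without downloading the tool, below is a minimal sketch of the same idea (my own illustration, not the OraPub tool): it snapshots DB time, DB CPU and foreground non-idle wait time twice, 180 seconds apart, and prints the reported versus derived deltas. It assumes SELECT access to the v$ views and EXECUTE on DBMS_LOCK for the sleep.

set serveroutput on
declare
   t1 number; c1 number; w1 number;
   t2 number; c2 number; w2 number;
   -- take one snapshot of DB time, DB CPU and foreground non-idle wait time (in seconds)
   procedure snap(p_time out number, p_cpu out number, p_wait out number) is
   begin
      select max(decode(stat_name, 'DB time', value/1e6)),
             max(decode(stat_name, 'DB CPU',  value/1e6))
        into p_time, p_cpu
        from v$sys_time_model
       where stat_name in ('DB time', 'DB CPU');
      select sum(time_waited_micro_fg)/1e6
        into p_wait
        from v$system_event
       where wait_class != 'Idle';
   end snap;
begin
   snap(t1, c1, w1);
   dbms_lock.sleep(180);   -- sample interval in seconds
   snap(t2, c2, w2);
   dbms_output.put_line('reported DB time delta: '||round(t2 - t1, 3));
   dbms_output.put_line('derived  DB time delta: '||round((c2 - c1) + (w2 - w1), 3));
   dbms_output.put_line('difference            : '||round((t2 - t1) - ((c2 - c1) + (w2 - w1)), 3));
end;
/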

Some Actual Examples
I want to quickly show you four examples from a variety of systems. You can download all the data in the "analysis pack" HERE. The data, for each of the four systems, contains the raw DB Time Test output (like in the section above), the statistical numeric analysis output from the statistical package "R", the actual "R" script and the visual analysis using "smooth histograms" also created using "R."

Below is the statistical numeric summary:


About the columns: Only the "craig" system is mine; the others are real production or DEV/QA systems. The statistical columns all reference the far-right column of the DB Time Test Tool's output, which is the percent difference between the reported DB Time and the calculated DB Time. Each sample set consists of eleven 180-second samples. A P-Value greater than 0.05 means the reported and calculated DB Time differences are normally distributed. This is not important in this analysis, but it gives me clues if there is a problem with the data collection.

As you can easily see, for two of the systems the DB Time difference is greater than 10%, and for one of them it was over 20%. The data collected shows that something is not quite right... but that's about it.

What Does This Mean In Our Work?
Clearly something is not quite right. There are a number of possible reasons, and this will be the focus of my next few articles.

However, I want to say that even though the numbers don't match perfectly and sometimes they are way off, this does not negate the value of a time-based analysis. Remember, we are not trying to land a man on the moon. We are diagnosing performance to derive solutions that (usually) aim to reduce the database time. I suspect that in all four cases I show, we would not be misled.

But this does highlight the requirement to also analyze performance from a non-Oracle-database-centric perspective. I always look at the performance situation from an operating system perspective, an Oracle-centric perspective and an application (think: SQL, processes, user experience, etc.) perspective. This "3 Circle" analysis will reduce the likelihood of making a tuning diagnosis mistake. So if DB Time is completely messed up, by diagnosing performance from the other two "circles" you will know something is not right.

If you want to learn more about my "3-Circle" analysis, here are two resources:
  1. Paper. Total Performance Management. Do an OraPub search for "3 circle" and you'll find it.
  2. Online Seminar: Tuning Oracle Using An AWR Report. I go DEEP into an Oracle Time Based Analysis while keeping it practical for day-to-day production systems.
In my next few articles I will drill down into why there can be a "DB Time mismatch," what to do about it and how to use this knowledge to our advantage.

Enjoy your work! There is nothing quite like analyzing performance and tuning Oracle database systems!!

Craig.





Categories: DBA Blogs

OLTP type 64 compression and ‘enq: TX – allocate ITL entry’ on Exadata

Pythian Group - Mon, 2014-12-22 11:33

Recently we saw a strange problem with deadlocks at a client database on Exadata, Oracle version 11.2.0.4. Wait event analysis showed that sessions were waiting on the “enq: TX – allocate ITL entry” event. It was strange because there were at most two sessions making DMLs and at least two ITL slots are available in the affected tables’ blocks. I made some block dumps and found that the affected blocks contain OLTP-compressed data, Compression Type = 64 (DBMS_COMPRESSION Constants – Compression Types). The table actually has the “compress for query high” attribute, but direct path inserts have never been used, so I was not expecting any compressed data here.

Compression Type 64 is a very specific type: Oracle migrates data out of HCC compression units into Type 64 compression blocks when HCC-compressed data is updated. We ran some tests and were able to reproduce Type 64 compression without direct path operations. Here is one of the test cases. An MSSM tablespace was used, but the problem is reproducible with ASSM too.

create table z_tst(num number, rn number, name varchar2(200)) compress for query high partition by list(num)
(
partition p1 values(1),
partition p2 values(2));

Table created.

insert into z_tst select mod(rownum , 2) + 1, rownum, lpad('1',20,'a') from dual connect by level <= 2000;

2000 rows created.

commit;

Commit complete.

select dbms_compression.get_compression_type(user, 'Z_TST', rowid) comp, count(*)  cnt from Z_tst
group by dbms_compression.get_compression_type(user, 'Z_TST', rowid);

      COMP        CNT
---------- ----------
        64       2000

select  dbms_rowid.rowid_block_number(rowid) blockno, count(*) cnt from z_tst a
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO        CNT
---------- ----------
      3586        321
      2561        679
      3585        679
      2562        321

select name, value from v$mystat a, v$statname b where a.statistic# = b.statistic# and lower(name) like '%compress%' and value != 0;

NAME                                                    VALUE
-------------------------------------------------- ----------
heap block compress                                        14
HSC OLTP Compressed Blocks                                  4
HSC Compressed Segment Block Changes                     2014
HSC OLTP Non Compressible Blocks                            2
HSC OLTP positive compression                              14
HSC OLTP inline compression                                14
EHCC Block Compressions                                     4
EHCC Attempted Block Compressions                          14

alter system dump datafile 16 block min 2561 block max 2561;

We can see that all rows are compressed with compression type 64. From the session statistics we can see that HCC had been in place before the data was migrated into OLTP compressed blocks. I think this is not expected behavior and there should not be any compression involved at all. Let’s take a look at the block dump:

Block header dump:  0x04000a01
 Object id on Block? Y
 seg/obj: 0x6bfdc  csc: 0x06.f5ff8a1  itc: 2  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0055.018.0002cd54  0x00007641.5117.2f  --U-  679  fsc 0x0000.0f5ffb9a
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x04000a01
data_block_dump,data header at 0x7fbb48919a5c
===============
tsiz: 0x1fa0
hsiz: 0x578
pbl: 0x7fbb48919a5c
     76543210
flag=-0----X-
ntab=2
nrow=680
frre=-1
fsbo=0x578
fseo=0x5b0
avsp=0x6
tosp=0x6
        r0_9ir2=0x1
        mec_kdbh9ir2=0x1
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-LNOC      Archive compression: N
                fcls_9ir2[3]={ 0 32768 32768 }
                perm_9ir2[3]={ 0 2 1 }

It’s a bit odd that avsp (available space) and tosp (total space) = 6 bytes. So there is no free space in the block at all, but I was expecting to see the 10% pctfree default here since it’s OLTP compression.
Let’s try to update two different rows in the same type 64 compressed block:

select rn from z_tst where DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) = 3586 and rownum <= 4;

        RN
----------
      1360
      1362
      1364
      1366
From the first session:
update z_tst set name = 'a' where rn = 1360;
From the second:
update z_tst set name = 'a' where rn = 1362;
-- waiting here

The second session waits on the “enq: TX – allocate ITL entry” event.
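
If you want to see who is stuck on this while it happens, a simple check (my sketch, not part of the original test case) against v$session lists the waiters and the block they are waiting on; note that the real event name uses a plain hyphen:

select sid, event, blocking_session, row_wait_obj#, row_wait_file#, row_wait_block#
  from v$session
 where event = 'enq: TX - allocate ITL entry';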

Summary

In some cases, HCC and the subsequent OLTP Type 64 compression can take place even without direct path operations (probably a bug).

A Type 64 OLTP-compressed block, in contrast to a regular OLTP-compressed block, can have no free space after data load.

During DML operations, the whole Type 64 compressed block gets locked (probably a bug).

It is better not to set HCC attributes on segments until a real HCC compression operation is actually performed.
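
For that last point, a hedged sketch of what removing the attribute could look like for the test table above (NOCOMPRESS only changes the setting for future loads; rows already stored stay as they are unless the segment is moved):

alter table z_tst modify default attributes nocompress;
alter table z_tst modify partition p1 nocompress;
alter table z_tst modify partition p2 nocompress;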

 

Categories: DBA Blogs

Watch: HBase vs. Cassandra

Pythian Group - Mon, 2014-12-22 09:45

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“When we look at HBase and Cassandra, they can look very similar,” Alex says. “They’re both part of the NoSQL ecosystem.” Although they’re capable of handling very similar workloads, Alex explains that there are also quite a few differences. “Cassandra is designed from the ground up to handle very high, concurrent, write-intensive workloads.” HBase on the other hand, has its limitations in scalability, and may require a bit more thinking to achieve the same quality of service, Alex explains. Watch his video HBase vs. Cassandra for specific use cases.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
This Log Buffer edition hits the ball out of the park, smashing yet another record with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.
SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle 12.1.0.2 Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

Exadata Patching Introduction

The Oracle Instructor - Sat, 2014-12-20 10:24

These are what I consider the most important points about Exadata patching:

Where is the most recent information?

MOS Note 888828.1 is your first read whenever you think about Exadata Patching

What is to patch with which utility?

Exadata Patching

Expect quarterly bundle patches for the storage servers and the compute nodes. The other components (InfiniBand switches, Cisco Ethernet switch, PDUs) are patched less frequently and are therefore not in the picture.

The storage servers have their software image (which includes Firmware, OS and Exadata Software)  exchanged completely with the new one using patchmgr. The compute nodes get OS (and Firmware) updates with dbnodeupdate.sh, a tool that accesses an Exadata yum repository. Bundle patches for the Grid Infrastructure and for the Database Software are being applied with opatch.

Rolling or non-rolling?

This is the sensitive part! Technically, you can always apply the patches for the storage servers and the patches for compute node OS and Grid Infrastructure rolling, taking down only one server at a time. The RAC databases running on the Database Machine will be available during the patching. Should you do that?

Let’s focus on the storage servers first: Rolling patches are recommended only if you have ASM diskgroups with high redundancy or if you have a standby site to fail over to if needed. In other words: If you have a quarter rack without a standby site, don’t use rolling patches! That is because the DBFS_DG diskgroup that contains the voting disks cannot have high redundancy in a quarter rack with just three storage servers.

Okay, so you have a half rack or bigger. Expect one storage server patch to take about two hours. That adds up to 14 hours of patching time (for seven storage servers) with the rolling method. Make sure that management is aware of that before they decide on the strategy.

Now to the compute nodes: If the patch is RAC rolling applicable, you can do that regardless of the ASM diskgroup redundancy. If a compute node gets damaged during the rolling upgrade, no data loss will happen. On a quarter rack without a standby site, you put availability at risk because only two compute nodes are there and one could fail while the other is just down.

Why you will want to have a Data Guard Standby Site

Apart from the obvious reason for Data Guard – Disaster Recovery – there are several benefits associated with the patching strategy:

You can afford to do rolling patches with ASM diskgroups using normal redundancy and with RAC clusters that have only two nodes.

You can apply the patches on the standby site first and test it there – using the snapshot standby database functionality (and using Database Replay if you licensed Real Application Testing)

A patch set can be applied on the standby first and the downtime for end users can be reduced to the time it takes to do a switchover

A release upgrade can be done with a (Transient) Logical Standby, reducing again the downtime to the time it takes to do a switchover

I suppose this will be my last posting in 2014, so Happy Holidays and a Happy New Year to all of you :-)


Tagged: exadata
Categories: DBA Blogs

Log Buffer #402, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park, smashing yet another record with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.

SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle 12.1.0.2 Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

Season's Greetings from the Oracle ISV Migration Center Team

WISHING YOU MUCH SUCCESS IN THE YEAR AHEAD. THANK YOU FOR YOUR CONTINUED PARTNERSHIP. ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Oracle Private Cloud Infrastructure as a Service (IaaS) using Oracle Enterprise Manager 12c

Virtualization, cloud computing, massive growth in unstructured data, transaction volumes, and applications and demands for energy efficiency are all driving datacenters to rethink how IT resources...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Performance Issues with the Sequence NEXTVAL Call

Pythian Group - Thu, 2014-12-18 08:51

Is SELECTing from a sequence your Oracle Performance Problem? The answer to that question is: it might be!

You wouldn’t expect a sequence select to be a significant problem but recently we saw that it was—and in two different ways. The issue came to light when investigating a report performance issue on an Oracle 11.2.0.4 non-RAC database. Investigating the original report problem required an AWR analysis and a SQL trace (actually a 10046 level 12 trace – tracing the bind variables was of critical importance in troubleshooting the initial problem with the report).

 

First problem: if SQL_ID = 4m7m0t6fjcs5x appears in the AWR reports

SELECTing a sequence value using the NEXTVAL function is supposed to be a fairly lightweight process. The sequence’s last value is stored in memory and a certain definable number of values are pre-fetched and cached in memory (the default is CACHE=20). However, when those cached values are exhausted, a new high-water value must be written to disk (so duplicate values aren’t given out upon restart after an instance crash). And that’s done via an update on the SYS.SEQ$ table. The resulting SQL_ID and statement for this recursive SQL are:

SQL_ID   = 4m7m0t6fjcs5x

SQL Text = update seq$ set increment$=:2, minvalue=:3, maxvalue=:4, cycle#=:5, order$=:6,
           cache=:7, highwater=:8, audit$=:9, flags=:10 where obj#=:1

 

This is recursive SQL, and consequently the statement and the corresponding SQL_ID are consistent across databases and even across Oracle versions.

Hence seeing SQL_ID 4m7m0t6fjcs5x as one of the top SQL statements in the AWR report indicates a possible problem. In our case it was the #1 top statement in terms of cumulative CPU. The report would select a large number of rows and was using a sequence value and the NEXTVAL call to form a surrogate key.
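
A quick way to check whether this statement is prominent on your own system is to look at its AWR history (a sketch; querying the DBA_HIST views assumes a Diagnostics Pack license):

select snap_id,
       executions_delta                  execs,
       round(cpu_time_delta/1e6, 1)      cpu_secs,
       round(elapsed_time_delta/1e6, 1)  ela_secs
  from dba_hist_sqlstat
 where sql_id = '4m7m0t6fjcs5x'
 order by snap_id;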

So what can be done about this? Well, like most SQL tuning initiatives, one of the best ways to tune a statement is to run it less frequently. With SQL_ID 4m7m0t6fjcs5x that’s easy to accomplish by changing the sequence’s cache value.

In our case, seeing SQL_ID 4m7m0t6fjcs5x as the top SQL statement quickly led us to check the sequence settings, and we saw that almost all sequences had been created with the NOCACHE option. Therefore no sequence values were being cached and an update to SEQ$ was necessary after every single NEXTVAL call. Hence the problem.

Caching sequence values adds the risk of skipped values (or a sequence gap due to the loss of the cached values) when the instance crashes. (Note: no sequence values are lost when the database is shut down cleanly.) However, in this case, since the sequence is just being used as a surrogate key, this was not a problem for the application.

Changing the sequence’s CACHE setting to 100 completely eliminated the problem, increased the overall report performance, and removed SQL_ID 4m7m0t6fjcs5x from the list of top SQL in AWR reports.

Lesson learned: if you ever see SQL_ID 4m7m0t6fjcs5x in any of the top SQL sections in an AWR or STATSPACK report, double check the sequence CACHE settings.
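
To act on that lesson, here is a hedged sketch for finding low-cache or NOCACHE sequences and raising the cache; the owner and sequence names are placeholders:

-- NOCACHE sequences show up with CACHE_SIZE = 0
select sequence_owner, sequence_name, cache_size
  from dba_sequences
 where cache_size < 20
 order by cache_size, sequence_owner, sequence_name;

-- raise the cache on a specific sequence (placeholder name)
alter sequence app_owner.my_surrogate_seq cache 100;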

 

Next problem: significant overhead of tracing the sequence update

Part of investigating a bind variable SQL regression problem with the report required a SQL trace. The report was instrumented with:

alter session set events '10046 trace name context forever, level 12';

 

The tracing made the report run over six times longer. This caused the report to overrun its allocated execution window and caused other job scheduling and SLA problems.

Normally we’d expect some overhead from a SQL trace due to the synchronous writes to the trace file, but over a 500% increase was far more than expected. From the developer’s viewpoint the report was essentially just executing a single query. The reality is that it was slightly more complicated than that, as the top-level query accessed a view. Still, the view was not overly complex, and hence the developer believed that the report was not query intensive: it was not executing many queries, just the original top-level call and the view SQL.

Again the issue is largely related to the sequence, recursive SQL from the sequence, and specifically statement 4m7m0t6fjcs5x.

Starting with an AWR SQL report of SQL_ID 4m7m0t6fjcs5x from two report executions, one with and one without SQL trace enabled showed:

Without tracing:
Elapsed Time (ms):      278,786
CPU Time (ms):          278,516
Executions:             753,956
Buffer Gets:          3,042,991
Disk Reads:                   0
Rows:                   753,956

With tracing:
Elapsed Time (ms):    2,362,227
CPU Time (ms):        2,360,111
Executions:             836,182
Buffer Gets:          3,376,096
Disk Reads:                   5
Rows:                   836,182

 

So when the report ran with tracing enabled it ran 4m7m0t6fjcs5x 836K times instead of the 753K times of the previous non-traced run: a 10.9% increase due to underlying application data changes between the runs. Yet 2.36M ms vs 278K ms in both CPU and elapsed times: an 847% increase!

The question was then: could this really be due to the overhead of tracing or something else? And should all of those recursive SQL update statements materialize as CPU time in the AWR reports? To confirm this and prove it to the developers a simplified sequence performance test was performed on a test database:

The simplified test SQL was:

create sequence s;
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
/

 

From AWR SQL reports on SQL_ID 4m7m0t6fjcs5x:

Without tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            10,259            0.0     7.1
CPU Time (ms)                                 9,373            0.0     6.7
Executions                                  250,005            N/A     N/A
Buffer Gets                                 757,155            3.0    74.1
Disk Reads                                        0            0.0     0.0
Parse Calls                                       3            0.0     0.3
Rows                                        250,005            1.0     N/A


With tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            81,158            0.3    20.0
CPU Time (ms)                                71,812            0.3    17.9
Executions                                  250,001            N/A     N/A
Buffer Gets                                 757,171            3.0    74.4
Disk Reads                                        0            0.0     0.0
Parse Calls                                       1            0.0     0.1
Rows                                        250,001            1.0     N/A

 

Same number of executions and buffer gets as would be expected but 7.66 times the CPU and 7.91 times the elapsed time just due to the SQL trace!  (Similar results to the 8.47 times increase we saw with the actual production database report execution.)

And no surprise, the resulting trace file is extremely large. As we would expect, since the sequence was created with the default CACHE value of 20, it records each UPDATE with its set of binds, followed by 20 NEXTVAL executions:

=====================
PARSING IN CURSOR #140264395012488 len=100 dep=0 uid=74 oct=47 lid=74 tim=1418680119565405 hv=152407152 ad='a52802e0' sqlid='dpymsgc4jb33h'
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
END OF STMT
PARSE #140264395012488:c=0,e=256,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1418680119565401
=====================
PARSING IN CURSOR #140264395008592 len=26 dep=1 uid=74 oct=3 lid=74 tim=1418680119565686 hv=575612948 ad='a541eed8' sqlid='0k4rn80j4ya0n'
Select S.NEXTVAL from dual
END OF STMT
PARSE #140264395008592:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565685
EXEC #140264395008592:c=0,e=50,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565807
=====================
PARSING IN CURSOR #140264395000552 len=129 dep=2 uid=0 oct=6 lid=0 tim=1418680119566005 hv=2635489469 ad='a575c3a0' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
PARSE #140264395000552:c=0,e=66,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1935744642,tim=1418680119566003
BINDS #140264395000552:
 Bind#0
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb120  bln=22  avl=02  flg=09
  value=1
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb132  bln=22  avl=02  flg=09
  value=1
 Bind#2
  oacdty=02 mxl=22(15) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb144  bln=22  avl=15  flg=09
  value=9999999999999999999999999999
 Bind#3
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca6b0  bln=22  avl=01  flg=05
  value=0
 Bind#4
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca6c8  bln=22  avl=01  flg=01
  value=0
 Bind#5
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb156  bln=22  avl=02  flg=09
  value=20
 Bind#6
  oacdty=02 mxl=22(05) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb168  bln=22  avl=05  flg=09
  value=5000021
 Bind#7
  oacdty=01 mxl=32(32) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=a52eb17a  bln=32  avl=32  flg=09
  value="--------------------------------"
 Bind#8
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca668  bln=22  avl=02  flg=05
  value=8
 Bind#9
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca680  bln=22  avl=04  flg=01
  value=86696
EXEC #140264395000552:c=1000,e=798,p=0,cr=1,cu=2,mis=0,r=1,dep=2,og=4,plh=1935744642,tim=1418680119566897
STAT #140264395000552 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=233 us)'
STAT #140264395000552 id=2 cnt=1 pid=1 pos=1 obj=79 op='INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=23 us cost=0 size=69 card=1)'
CLOSE #140264395000552:c=0,e=3,dep=2,type=3,tim=1418680119567042
FETCH #140264395008592:c=1000,e=1319,p=0,cr=1,cu=3,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567178
STAT #140264395008592 id=1 cnt=1 pid=0 pos=1 obj=86696 op='SEQUENCE  S (cr=1 pr=0 pw=0 time=1328 us)'
STAT #140264395008592 id=2 cnt=1 pid=1 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=1 us cost=2 size=0 card=1)'
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567330
EXEC #140264395008592:c=0,e=19,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119567378
FETCH #140264395008592:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567425
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567458
...
< Repeats #140264395008592 18 more times due to CACHE=20 >

 

From the trace, it’s apparent that there is not only the overhead of updating the SEQ$ table but also of maintaining the I_SEQ1 index. A tkprof of the test shows us the same information:

declare
   x int;
begin
   for i in 1..5000000 loop
      x := s.nextval;
   end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1    241.55     247.41          0     250003          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    241.56     247.41          0     250005          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                   1        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

SQL ID: 0k4rn80j4ya0n Plan Hash: 3499163060

Select S.NEXTVAL
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 5000000     35.37      30.49          0          0          0           0
Fetch   5000000     50.51      45.81          0          0     250000     5000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   10000001     85.88      76.30          0          0     250000     5000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SEQUENCE  S (cr=1 pr=0 pw=0 time=910 us)
         1          1          1   FAST DUAL  (cr=0 pr=0 pw=0 time=2 us cost=2 size=0 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch free                                      1        0.00          0.00
********************************************************************************

SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10
where
 obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute 250000     71.81      81.15          0     250003     507165      250000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   250000     71.81      81.15          0     250003     507165      250000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  log file switch (checkpoint incomplete)         1        0.19          0.19
  log file switch completion                      4        0.20          0.75
********************************************************************************

So clearly we can see a lot of additional overhead when performing a SQL trace of the many calls to the sequence NEXTVAL function. Of course the overhead is due to recursive SQL and the synchronous write of the trace file. It just wasn’t obvious that a simple query could generate that much recursive DML and trace data.

 

Combining the two issues

The next question is: what is the effect of the CACHE setting for the sequence, as well as the difference between a LEVEL 8 and a LEVEL 12 trace? Using a similar PL/SQL test block, but with only 100,000 executions on a lab database, showed the following results measuring CPU time (in seconds):

Cache Size   No Trace   10046 level 8   10046 level 12
         0      31.94           58.71            94.57
        20       7.53           15.29            20.13
       100       4.85           13.36            13.50
      1000       3.93           10.61            11.93
     10000       3.70           10.96            12.20

Hence we can see that even with an extremely high CACHE setting for the sequence, the 10046 trace adds roughly 300% to 400% overhead for this one particular statement. And the caching sweet spot seems to be around 100.
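
For reference, the harness for that comparison is just a variation of the earlier test block, re-creating the sequence with each CACHE value and timing a shorter loop (a sketch):

drop sequence s;
create sequence s cache 100;   -- repeat with nocache, cache 20, 100, 1000, 10000

set timing on
declare
   x integer;
begin
   for i in 1 .. 100000
   loop
      x := s.nextval;
   end loop;
end;
/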

 

Conclusions

We often take the Oracle sequence for granted and assume that it’s an optimized and efficient internal structure—and for the most part it is. But depending on how it’s implemented, it can be problematic.

If we ever see SQL_ID 4m7m0t6fjcs5x as one of our worst performing SQL statements, we should double check the sequence configuration and usage. Was the CACHE value set low by design, or inadvertently? Is the risk of a sequence gap after an instance crash worth the overhead of a low CACHE value? Perhaps the settings need to be reconsidered and changed?

And a caution about enabling a SQL trace: it’s something we expect to add some overhead, but not the 3x to 10x that may make tracing unreasonable. Of course, the tracing overhead will depend on the actual workload. But for workloads that are sequence NEXTVAL heavy, don’t underestimate the underlying recursive SQL, as the overhead can be significant—and much more than one may think.

Categories: DBA Blogs

ORA-28043: Invalid Bind Credentials for DB-OID Connection

Pythian Group - Thu, 2014-12-18 08:46

Have you ever encountered this error connecting to a DB using global authentication against OID? Was re-registration a temporary workaround, but the issue came back after some time? Check out this solution for ORA-28043: invalid bind credentials for DB-OID connection.

During a long project which included changing human accounts’ authentication method from local to global on several databases, users started to report ORA-28043 after a couple of days.

$ sqlplus rambo@orcl

SQL*Plus: Release 11.2.0.3.0 Production on Tue Nov 4 07:28:03 2014 

Copyright (c) 1982, 2011, Oracle. All rights reserved. 

Enter password: 

ERROR: 

ORA-28043: invalid bind credentials for DB-OID connection 

Since some of these were production assets, we tried to restore the service as soon as possible. The fastest workaround we found was to re-register the DBs using DBCA:

$ dbca -silent -configureDatabase -sourceDB orcl -unregisterWithDirService true -dirServiceUserName cn=orcladmin -dirServicePassword ****** -walletPassword ******

Preparing to Configure Database

6% complete

13% complete

66% complete

Completing Database Configuration

100% complete

Look at the log file "/e00/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

$ dbca -silent -configureDatabase -sourceDB orcl -registerWithDirService true -dirServiceUserName cn=orcladmin -dirServicePassword ****** -walletPassword ******

Preparing to Configure Database

6% complete

13% complete

66% complete

Completing Database Configuration

100% complete

Look at the log file "/e00/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

Good news: the service was restored quickly. Bad news: the issue came back after a couple of days. We started a deeper investigation, which included opening an SR with My Oracle Support. Luckily, we found the real culprit for this error very quickly: PASSWORD EXPIRATION. These were the commands they provided us to verify that the wallet couldn’t bind to the directory:

$ mkstore -wrl . -list 

Oracle Secret Store Tool : Version 11.2.0.3.0 - Production 

Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. 

Enter wallet password:xxx 

Oracle Secret Store entries: 

ORACLE.SECURITY.DN 

ORACLE.SECURITY.PASSWORD 

$ mkstore -wrl . -viewEntry ORACLE.SECURITY.DN -viewEntry ORACLE.SECURITY.PASSWORD 

Oracle Secret Store Tool : Version 11.2.0.3.0 - Production 

Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. 

Enter wallet password: xxx 

ORACLE.SECURITY.DN = cn=ORCL,cn=OracleContext,DC=ppl,DC=com 

ORACLE.SECURITY.PASSWORD = Z8p9a1j1 

$ ldapbind -h oidserver -p 3060 -D cn=ORCL,cn=OracleContext,DC=ppl,DC=com -w Z8p9a1j1 

ldap_bind: Invalid credentials 

ldap_bind: additional info: Password Policy Error :9000: GSL_PWDEXPIRED_EXCP :Your Password has expired. Please contact the Administrator to change your password. 

Oracle’s recommendation was to set the “pwdmaxage” attribute to 0. We achieved this by changing the value in the GUI, under Security/Password Policy/Password Expiry Time.

Note that for OID versions older than 10.0.4, changing the parameter’s value to zero doesn’t work due to Bug 3334767. Instead, you can place a very large value.

Categories: DBA Blogs

Create Histograms On Columns That Already Have One

Oracle in Action - Tue, 2014-12-16 05:00


The default value of METHOD_OPT from  10g onwards is ‘FOR ALL COLUMNS SIZE AUTO’.

The definition of AUTO as per the Oracle documentation is:
AUTO: Oracle determines the columns on which to collect histograms based on data distribution and the workload of the columns.

This basically implies that Oracle will automatically create histograms on those columns which have a skewed data distribution and are referenced by SQL statements.

However, this gives rise to the problem that Oracle generates too many unnecessary histograms.

Let’s demonstrate:

– Create a table with skewed data distribution in two columns

SQL>drop table hr.skewed purge;

create table hr.skewed
( empno number,
job_id varchar2(10),
salary number);

insert into hr.skewed select employee_id, job_id, salary
from hr.employees;

– On gathering statistics for the table using default options, it can be seen that no histogram is gathered on any column although the data distribution in columns JOB_ID and SALARY is skewed

SQL>exec dbms_stats.gather_table_stats('HR','SKEWED');

col table_name for a10
col column_name for a10
select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE

– Let’s now issue some queries against the table on each of the three columns, followed by statistics gathering, to verify that histograms get automatically created only on columns with skewed data distribution.

– No histogram gets created when column EMPNO, whose data is distributed uniformly, is queried

SQL>select * from hr.skewed where empno = 100;
exec dbms_stats.gather_table_stats('HR','SKEWED');

col table_name for a10
col column_name for a10

select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE

– A histogram gets created on the JOB_ID column as soon as we search for records with a given JOB_ID, as the data distribution in the JOB_ID column is non-uniform

SQL>select * from hr.skewed where job_id = 'CLERK';
exec dbms_stats.gather_table_stats('HR','SKEWED');

col table_name for a10
col column_name for a10

select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE

– A histogram gets created on the SALARY column when a search is made for employees drawing a salary below 10000, as the data distribution in the SALARY column is non-uniform.

SQL>select * from hr.skewed where salary < 10000;
exec dbms_stats.gather_table_stats('HR','SKEWED');

col table_name for a10
col column_name for a10
select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     FREQUENCY
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE

Thus, gathering statistics using default options, manually or as part of the automatic maintenance task, might lead to the creation of histograms on all columns which have skewed data distribution and have been part of a search clause even once. That is, Oracle makes even the histograms you didn’t ask for. Some of these histograms might not be needed by the application and hence are undesirable, as computing histograms is a resource-intensive operation, and moreover they might degrade performance as a result of their interaction with bind peeking.

Solution
Employ the FOR ALL COLUMNS SIZE REPEAT option of the METHOD_OPT parameter, which prevents deletion of existing histograms and collects histograms only on the columns that already have them.

The first step is to eliminate unwanted histograms and have histograms only on the desired columns.

Well, there are two options:

OPTION-I: Delete histograms from unwanted columns and use the REPEAT option henceforth, which collects histograms only on the columns that already have histograms.

– Delete unwanted histogram for SALARY column

SQL>exec dbms_stats.gather_table_stats('HR','SKEWED', -
METHOD_OPT => 'for columns salary size 1');

-- Verify that histogram for salary column has been deleted

col table_name for a10
col column_name for a10

select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE

– Issue a SQL with the SALARY column in the where clause and verify that gathering stats using the REPEAT option retains the histogram on the JOB_ID column and does not cause a histogram to be created on the SALARY column.

SQL>select * from hr.skewed where salary < 10000;

exec dbms_stats.gather_table_stats('HR','SKEWED',-
METHOD_OPT => 'for columns salary size REPEAT');

col table_name for a10
col column_name for a10

select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE

OPTION-II: Wipe out all histograms and manually add only the desired ones. Use the REPEAT option henceforth, which collects histograms only on the columns that already have one.

– Delete histograms on all columns 

SQL>exec dbms_stats.gather_table_stats('HR','SKEWED',-
METHOD_OPT => 'for all columns size 1');

– Verify that histograms on all columns have been dropped

SQL>col table_name for a10
col column_name for a10

select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE

– Create histogram only on the desired JOB_ID column

SQL>exec dbms_stats.gather_table_stats('HR','SKEWED',-
METHOD_OPT => 'for columns JOB_ID size AUTO');

– Verify that histogram has been created on JOB_ID

SQL>col table_name for a10
col column_name for a10

select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE

– Verify that gathering stats using the REPEAT option retains the histogram only on the JOB_ID column, which already has one

SQL>exec dbms_stats.gather_table_stats('HR','SKEWED',-
METHOD_OPT => 'for columns salary size REPEAT');

SQL>col table_name for a10
col column_name for a10
select TABLE_NAME,COLUMN_NAME,HISTOGRAM
from dba_tab_columns where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE

That is, now Oracle will no longer make histograms you didn’t ask for.

– Finally, change the preference for the METHOD_OPT parameter of the automatic stats gathering job from the default value of AUTO to REPEAT so that it will gather histograms only for the columns that already have one.

–  Get Current value –

SQL> select dbms_stats.get_prefs ('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
-----------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

– Set preference to REPEAT–

SQL> exec dbms_stats.set_global_prefs ('METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');

– Verify –

SQL> select dbms_stats.get_prefs ('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
-----------------------------------------------------------------------
FOR ALL COLUMNS SIZE REPEAT

From now on, gathering statistics, manually or automatically, will not create any new histograms while retaining all the existing ones.
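
If changing the global preference feels too broad, the same REPEAT behaviour can be scoped to individual tables with table-level preferences (shown here for the demo table; a sketch):

SQL> exec dbms_stats.set_table_prefs('HR','SKEWED','METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');

SQL> select dbms_stats.get_prefs('METHOD_OPT','HR','SKEWED') from dual;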

I hope this post is useful.

Happy reading….

References:

https://blogs.oracle.com/optimizer/entry/how_does_the_method_opt
http://www.pythian.com/blog/stabilize-oracle-10gs-bind-peeking-behaviour/
https://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/





The post Create Histograms On Columns That Already Have One appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Watch: Hadoop vs. Riak

Pythian Group - Mon, 2014-12-15 09:14

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Riak and Hadoop are quite different data platforms,” Alex says. “Hadoop is actually the system that would process the data that Riak is collecting.” Learn how the two systems are complementary rather than competitive by watching Alex’s video Hadoop vs. Riak.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Partner Webcast – Oracle AppAdvantage powered by Oracle Fusion Middleware

Modern organizations require applications to seamlessly extend, integrate, become more agile, and embrace new business imperatives, including social, mobile, cloud, and big data. ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

AZORA – Arizona Oracle User Group meeting January 20th

Bobby Durrett's DBA Blog - Fri, 2014-12-12 16:25

AZORA is planning a meeting January 20th.  Here is the link to RSVP: url

Hope to see you there. :)

– Bobby

Categories: DBA Blogs

Log Buffer #401, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-12 09:00

This Log Buffer Edition goes right through the fields of salient database blog posts and comes out with something worth reading.


Oracle:

Extract SQL full text from SQL Monitor html.

Disruption: Are Hot Brands Breaking the Rules?

Understanding Flash: Unpredictable Write Performance.

The caveats of running .sql scripts with GUI tools.

File Encoding in the Next Generation Outline Extractor.

SQL Server:

Arshad Ali discusses how to use CTE and the ranking function to access or query data from previous or subsequent rows.

SSRS – Report for Stored Procedure with Multiple Values Passed.

Continuous Delivery for Databases: Microservices, Team Structures, and Conway‘s Law.

Scripting SQL Server databases with SMO using EnforceScriptingOptions.

How to troubleshoot SSL encryption issues in SQL Server.

MySQL:

MySQL 5.7: only_full_group_by Improved, Recognizing Functional Dependencies, Enabled by Default!

MaxScale, manual control, external monitors and notification methods.

Recover MySQL root password without restarting MySQL (no downtime!)

Oracle DBAs have had the luxury of their V$ variables for a long time while we MySQL DBAs pretended we were not envious.

Categories: DBA Blogs

Impressions from #ukoug_tech14

The Oracle Instructor - Thu, 2014-12-11 10:15

ACC Liverpool

The Oracle circus went to Liverpool this year for the annual conference of the UK Oracle User Group, and it was a fantastic event! Top speakers and a very knowledgeable audience too; I was really impressed by the quality we experienced. Together with my friends and colleagues Iloon and Joel, I was waving the flag for Oracle University again – and it was really fun to do so :-)

The 3 of us

The 3 of us

One little obstacle was that I actually did many presentations and roundtables. So less time for me to listen to the high quality talks of the other speakers…

Joel and I hosted three roundtables:

About Exadata, where we had amongst others Dan Norris (Member of the Platform Integration MAA Team, Oracle) and Jason Arneil (Solutions Architect, e-DBA) contributing

Exadata Roundtable

Exadata Roundtable, Jason and Dan on my left side, Joel and Iloon on my right

About Grid Infrastructure & RAC, where Ian Cookson (Product Manager Clusterware, Oracle) took many questions from the audience. We could also have had Markus Michalewicz if only I had told him the day before, during the party night – I’m still embarrassed about that.

About Data Guard, where Larry Carpenter (Master product Manager Data Guard and Maximum Availability Architecture, Oracle) took all the questions as usual. AND he hit me for the article about the Active Data Guard underscore parameter, so I think I will remove it…

Iloon delivered her presentation about Apex for DBA Audience, which was very much appreciated and attracted a big crowd again, same as in Nürnberg before.

Joel had two talks on Sunday already: Managing Sequences in a RAC Environment (This is actually a more complex topic than you may think!) and Oracle Automatic Parallel Execution (Obviously complex stuff)

I did two presentations as well: The Data Guard Broker – Why it is recommended and Data Guard 12c New Features in Action

Both times, the UKOUG was so kind as to give me very large rooms, and I can say that they didn’t look empty, although I faced tough competition from other interesting talks. This is from the first presentation:

Uwe Hesse

A big THANK YOU goes out to all the friendly people of UKOUG who made this possible and maintained the great event Tech14 was! And also to the bright bunch of Oracle colleagues and Oracle techies (speakers and attendees included) that gave me good company there: You guys are the best! Looking forward to see you at the next conference :-)


Tagged: #ukoug_tech14
Categories: DBA Blogs

Oracle University Expert Summit 2015 in Dubai

The Oracle Instructor - Wed, 2014-12-10 08:45

Oracle University Expert Summit Dubai 2015

Together with Craig Shallahamer, Lucas Jellema, Mark Rittman, Martin Bach, Pete Finnigan and Tim Fox, I will be presenting in Dubai. My topic is Minimizing Downtime with Rolling Upgrade using Data Guard

Click on the picture for details, please!

Hope to see you there :-)


Categories: DBA Blogs

Call for Papers for the O’Reilly MySQL Conference

Pythian Group - Tue, 2014-12-09 14:35

The call for papers for the O’Reilly MySQL Conference is now open, and closes October 25th.  Submit your proposal now at http://en.oreilly.com/mysql2011/user/proposal/propose/cfp/126!

Categories: DBA Blogs

Final Videos of Open DB Camp Online:

Pythian Group - Tue, 2014-12-09 12:11

The final videos from Open DB Camp back in May in Sardinia, Italy are now online.  The full matrix of sessions, videos and slides can be found on the schedule page.

Hands on JDBC by Sandro Pinna – video

“MySQL Plugins, What are They? How you can use them to do wonders” by Sergei Golubchek of MariaDB – video

The State of Open Source Databases by Kaj Arnö of SkySQL – video

Coming soon, videos from OSCon Data!

Categories: DBA Blogs