Pythian Group

Official Pythian Blog - Love Your Data

Log Buffer #402, A Carnival of the Vanities for DBAs

Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park, smashing yet another record with its unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.

SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle 12.1.0.2 Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

Performance Issues with the Sequence NEXTVAL Call

Thu, 2014-12-18 08:51

Is SELECTing from a sequence your Oracle Performance Problem? The answer to that question is: it might be!

You wouldn’t expect a sequence select to be a significant problem but recently we saw that it was—and in two different ways. The issue came to light when investigating a report performance issue on an Oracle 11.2.0.4 non-RAC database. Investigating the original report problem required an AWR analysis and a SQL trace (actually a 10046 level 12 trace – tracing the bind variables was of critical importance in troubleshooting the initial problem with the report).

 

First problem: if SQL_ID = 4m7m0t6fjcs5x appears in the AWR reports

SELECTing a sequence value using the NEXTVAL function is supposed to be a fairly lightweight process. The sequence’s last value is stored in memory, and a definable number of values are pre-fetched and cached in memory (the default is CACHE=20). However, when those cached values are exhausted, the current sequence value must be written to disk (so duplicate values aren’t given after an instance crash and restart). And that’s done via an update on the SYS.SEQ$ table. The resulting SQL_ID and statement for this recursive SQL is:

SQL_ID   = 4m7m0t6fjcs5x

SQL Text = update seq$ set increment$=:2, minvalue=:3, maxvalue=:4, cycle#=:5, order$=:6,
           cache=:7, highwater=:8, audit$=:9, flags=:10 where obj#=:1

 

This is recursive SQL, and consequently both the statement and the corresponding SQL_ID are consistent between databases and even between Oracle versions.

Hence seeing SQL_ID 4m7m0t6fjcs5x as one of the top SQL statements in an AWR report indicates a possible problem. In our case it was the #1 top statement in terms of cumulative CPU. The report would select a large number of rows, using the sequence’s NEXTVAL call to form a surrogate key.

So what can be done about this? Well, like most SQL tuning initiatives, one of the best ways to tune a statement is to run it less frequently. With SQL_ID 4m7m0t6fjcs5x that’s easy to accomplish by changing the sequence’s cache value.

In our case, seeing SQL_ID 4m7m0t6fjcs5x as the top SQL statement quickly led us to check the sequence settings, where we saw that almost all sequences had been created with the NOCACHE option. Therefore no sequence values were being cached, and an update to SEQ$ was necessary after every single NEXTVAL call. Hence the problem.
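A quick way to spot this risk (a hedged sketch; add an owner filter for your schemas) is to check DBA_SEQUENCES for NOCACHE or low-cache sequences, where NOCACHE shows up as CACHE_SIZE = 0:

select sequence_owner, sequence_name, cache_size
  from dba_sequences
 where cache_size < 20
 order by cache_size;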

Caching sequence values adds the risk of skipped values (or a sequence gap due to the loss of the cached values) when the instance crashes. (Note that no sequence values are lost when the database is shut down cleanly.) However, in this case, since the sequence was just being used as a surrogate key, this was not a problem for the application.

Changing the sequences’ CACHE setting to 100 completely eliminated the problem, increased the overall report performance, and removed SQL_ID 4m7m0t6fjcs5x from the list of top SQL in AWR reports.
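The change itself is a one-line DDL per sequence; a sketch with an illustrative owner and sequence name:

alter sequence app_owner.my_seq cache 100;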

Lesson learned: if you ever see SQL_ID 4m7m0t6fjcs5x in any of the top SQL sections in an AWR or STATSPACK report, double check the sequence CACHE settings.

 

Next problem: significant overhead of tracing the sequence update

Part of investigating a bind variable SQL regression problem with the report required a SQL trace. The report was instrumented with:

alter session set events '10046 trace name context forever, level 12';
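(For reference, the same trace is disabled with the corresponding off event:)

alter session set events '10046 trace name context off';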

 

The tracing made the report run over six times longer. This caused the report to overrun its allocated execution window, creating other job scheduling and SLA problems.

Normally we’d expect some overhead from a SQL trace due to the synchronous writes to the trace file, but over a 500% increase was far more than expected. From the developer’s viewpoint the report was essentially just executing a single query. The reality was slightly more complicated than that, as the top-level query accessed a view. Still, the view was not overly complex, and hence the developer believed that the report was not query intensive: just the original top-level call and the view SQL.

Again the issue is largely related to the sequence, recursive SQL from the sequence, and specifically statement 4m7m0t6fjcs5x.

Starting with AWR SQL reports of SQL_ID 4m7m0t6fjcs5x from two report executions, one with and one without SQL trace enabled, we saw:

Without tracing:
Elapsed Time (ms):      278,786
CPU Time (ms):          278,516
Executions:             753,956
Buffer Gets:          3,042,991
Disk Reads:                   0
Rows:                   753,956

With tracing:
Elapsed Time (ms):    2,362,227
CPU Time (ms):        2,360,111
Executions:             836,182
Buffer Gets:          3,376,096
Disk Reads:                   5
Rows:                   836,182

 

So when the report ran with tracing enabled it ran 4m7m0t6fjcs5x 836K times instead of 753K times during the previous non-traced run: a 10.9% increase due to underlying application data changes between the runs. Yet 2.36M ms vs 278K ms in both CPU and elapsed time: roughly 8.5 times more!

The question then was: could this really be due to the overhead of tracing, or something else? And should all of those recursive SQL update statements materialize as CPU time in the AWR reports? To confirm this and prove it to the developers, a simplified sequence performance test was performed on a test database.

The simplified test SQL was:

create sequence s;
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
/

 

From AWR SQL reports on SQL_ID 4m7m0t6fjcs5x:

Without tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            10,259            0.0     7.1
CPU Time (ms)                                 9,373            0.0     6.7
Executions                                  250,005            N/A     N/A
Buffer Gets                                 757,155            3.0    74.1
Disk Reads                                        0            0.0     0.0
Parse Calls                                       3            0.0     0.3
Rows                                        250,005            1.0     N/A


With tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            81,158            0.3    20.0
CPU Time (ms)                                71,812            0.3    17.9
Executions                                  250,001            N/A     N/A
Buffer Gets                                 757,171            3.0    74.4
Disk Reads                                        0            0.0     0.0
Parse Calls                                       1            0.0     0.1
Rows                                        250,001            1.0     N/A

 

The same number of executions and buffer gets as would be expected, but 7.66 times the CPU and 7.91 times the elapsed time just due to the SQL trace! (Similar to the roughly 8.5 times increase we saw with the actual production database report execution.)

And no surprise, the resulting trace file is extremely large. As we would expect, since the sequence was created with the default CACHE value of 20, it records each UPDATE with its set of binds followed by 20 NEXTVAL executions:

=====================
PARSING IN CURSOR #140264395012488 len=100 dep=0 uid=74 oct=47 lid=74 tim=1418680119565405 hv=152407152 ad='a52802e0' sqlid='dpymsgc4jb33h'
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
END OF STMT
PARSE #140264395012488:c=0,e=256,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1418680119565401
=====================
PARSING IN CURSOR #140264395008592 len=26 dep=1 uid=74 oct=3 lid=74 tim=1418680119565686 hv=575612948 ad='a541eed8' sqlid='0k4rn80j4ya0n'
Select S.NEXTVAL from dual
END OF STMT
PARSE #140264395008592:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565685
EXEC #140264395008592:c=0,e=50,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565807
=====================
PARSING IN CURSOR #140264395000552 len=129 dep=2 uid=0 oct=6 lid=0 tim=1418680119566005 hv=2635489469 ad='a575c3a0' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
PARSE #140264395000552:c=0,e=66,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1935744642,tim=1418680119566003
BINDS #140264395000552:
 Bind#0
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb120  bln=22  avl=02  flg=09
  value=1
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb132  bln=22  avl=02  flg=09
  value=1
 Bind#2
  oacdty=02 mxl=22(15) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb144  bln=22  avl=15  flg=09
  value=9999999999999999999999999999
 Bind#3
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca6b0  bln=22  avl=01  flg=05
  value=0
 Bind#4
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca6c8  bln=22  avl=01  flg=01
  value=0
 Bind#5
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb156  bln=22  avl=02  flg=09
  value=20
 Bind#6
  oacdty=02 mxl=22(05) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb168  bln=22  avl=05  flg=09
  value=5000021
 Bind#7
  oacdty=01 mxl=32(32) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=a52eb17a  bln=32  avl=32  flg=09
  value="--------------------------------"
 Bind#8
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca668  bln=22  avl=02  flg=05
  value=8
 Bind#9
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca680  bln=22  avl=04  flg=01
  value=86696
EXEC #140264395000552:c=1000,e=798,p=0,cr=1,cu=2,mis=0,r=1,dep=2,og=4,plh=1935744642,tim=1418680119566897
STAT #140264395000552 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=233 us)'
STAT #140264395000552 id=2 cnt=1 pid=1 pos=1 obj=79 op='INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=23 us cost=0 size=69 card=1)'
CLOSE #140264395000552:c=0,e=3,dep=2,type=3,tim=1418680119567042
FETCH #140264395008592:c=1000,e=1319,p=0,cr=1,cu=3,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567178
STAT #140264395008592 id=1 cnt=1 pid=0 pos=1 obj=86696 op='SEQUENCE  S (cr=1 pr=0 pw=0 time=1328 us)'
STAT #140264395008592 id=2 cnt=1 pid=1 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=1 us cost=2 size=0 card=1)'
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567330
EXEC #140264395008592:c=0,e=19,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119567378
FETCH #140264395008592:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567425
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567458
...
< Repeats #140264395008592 18 more times due to CACHE=20 >

 

From the trace, it’s apparent that not only is there the overhead of updating the SEQ$ table but maintaining the I_SEQ1 index as well. A tkprof on the test shows us the same information:

declare
   x int;
begin
   for i in 1..5000000 loop
      x := s.nextval;
   end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1    241.55     247.41          0     250003          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    241.56     247.41          0     250005          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                   1        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

SQL ID: 0k4rn80j4ya0n Plan Hash: 3499163060

Select S.NEXTVAL
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 5000000     35.37      30.49          0          0          0           0
Fetch   5000000     50.51      45.81          0          0     250000     5000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   10000001     85.88      76.30          0          0     250000     5000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SEQUENCE  S (cr=1 pr=0 pw=0 time=910 us)
         1          1          1   FAST DUAL  (cr=0 pr=0 pw=0 time=2 us cost=2 size=0 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch free                                      1        0.00          0.00
********************************************************************************

SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10
where
 obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute 250000     71.81      81.15          0     250003     507165      250000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   250000     71.81      81.15          0     250003     507165      250000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  log file switch (checkpoint incomplete)         1        0.19          0.19
  log file switch completion                      4        0.20          0.75
********************************************************************************

So clearly we can see a lot of additional overhead when performing a SQL trace of the many calls to the sequence NEXTVAL function. Of course the overhead is due to recursive SQL and the synchronous write of the trace file. It just wasn’t obvious that a simple query could generate that much recursive DML and trace data.

 

Combining the two issues

The next question is: what is the effect of the CACHE setting for the sequence, as well as the difference between a LEVEL 8 and a LEVEL 12 trace? Using a similar PL/SQL test block, but with only 100,000 executions on a lab database, showed the following results measuring CPU time (in seconds):

Cache Size   No Trace   10046 level 8   10046 level 12
         0      31.94           58.71            94.57
        20       7.53           15.29            20.13
       100       4.85           13.36            13.50
      1000       3.93           10.61            11.93
     10000       3.70           10.96            12.20

Hence we can see that even with an extremely high CACHE setting for the sequence, the 10046 trace adds roughly 3x to 4x overhead for this one particular statement. And the caching sweet spot seems to be around 100.
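Results like the matrix above can be reproduced with a sketch along these lines, recreating the sequence for each cache size (note that CACHE 0 is not valid syntax; use NOCACHE for that row):

drop sequence s;
create sequence s cache 100;   -- repeat with nocache, cache 20, cache 1000, cache 10000

declare
   x integer;
begin
   for i in 1 .. 100000
   loop
      x := s.nextval;
   end loop;
end;
/

-- session CPU, sampled before and after the block:
select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'CPU used by this session';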

 

Conclusions

We often take the Oracle sequence for granted and assume that it’s an optimized and efficient internal structure—and for the most part it is. But depending on how it’s implemented, it can be problematic.

If we ever see SQL_ID 4m7m0t6fjcs5x as one of our worst performing SQL statements, we should double check the sequence configuration and usage. Was the CACHE value set low by design, or inadvertently? Is the risk of a sequence gap after an instance crash worth the overhead of a low CACHE value? Perhaps the settings need to be reconsidered and changed?

And a caution about enabling a SQL trace: it’s something we expect to add some overhead, but not 3x to 10x, which may make tracing unreasonable. Of course, the tracing overhead will depend on the actual workload. But for workloads that are sequence NEXTVAL heavy, don’t underestimate the underlying recursive SQL: the overhead can be significant, and much more than one may think.

Categories: DBA Blogs

ORA-28043: Invalid Bind Credentials for DB-OID Connection

Thu, 2014-12-18 08:46

Have you ever encountered this error connecting to a DB using global authentication against OID? Was re-registration a temporary workaround, but the issue came back after some time? Check out this solution for ORA-28043: invalid bind credentials for DB-OID connection.

During a long project that included changing human accounts’ authentication method from local to global on several databases, users started to report ORA-28043 after a couple of days.

$ sqlplus rambo@orcl

SQL*Plus: Release 11.2.0.3.0 Production on Tue Nov 4 07:28:03 2014 

Copyright (c) 1982, 2011, Oracle. All rights reserved. 

Enter password: 

ERROR: 

ORA-28043: invalid bind credentials for DB-OID connection 

Since some of these were production assets, we tried to restore the service as soon as possible. The fastest workaround we found was to re-register the DBs using DBCA:

$ dbca -silent -configureDatabase -sourceDB orcl -unregisterWithDirService true -dirServiceUserName cn=orcladmin -dirServicePassword ****** -walletPassword ******

Preparing to Configure Database

6% complete

13% complete

66% complete

Completing Database Configuration

100% complete

Look at the log file "/e00/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

$ dbca -silent -configureDatabase -sourceDB orcl -registerWithDirService true -dirServiceUserName cn=orcladmin -dirServicePassword ****** -walletPassword ******

Preparing to Configure Database

6% complete

13% complete

66% complete

Completing Database Configuration

100% complete

Look at the log file "/e00/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

Good news: the service was restored quickly. Bad news: the issue came back after a couple of days. We started a deeper investigation, which included opening an SR with My Oracle Support. Luckily, we found the real culprit for this error very quickly: PASSWORD EXPIRATION. These were the commands they provided us to verify that the wallet couldn’t bind to the directory:

$ mkstore -wrl . -list 

Oracle Secret Store Tool : Version 11.2.0.3.0 - Production 

Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. 

Enter wallet password:xxx 

Oracle Secret Store entries: 

ORACLE.SECURITY.DN 

ORACLE.SECURITY.PASSWORD 

$ mkstore -wrl . -viewEntry ORACLE.SECURITY.DN -viewEntry ORACLE.SECURITY.PASSWORD 

Oracle Secret Store Tool : Version 11.2.0.3.0 - Production 

Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. 

Enter wallet password: xxx 

ORACLE.SECURITY.DN = cn=ORCL,cn=OracleContext,DC=ppl,DC=com 

ORACLE.SECURITY.PASSWORD = Z8p9a1j1 

$ ldapbind -h oidserver -p 3060 -D cn=ORCL,cn=OracleContext,DC=ppl,DC=com -w Z8p9a1j1 

ldap_bind: Invalid credentials 

ldap_bind: additional info: Password Policy Error :9000: GSL_PWDEXPIRED_EXCP :Your Password has expired. Please contact the Administrator to change your password. 

Oracle’s recommendation was to set the “pwdmaxage” attribute to 0. We achieved this by changing the value from the GUI, under Security/Password Policy/Password Expiry Time.

Note that for OID versions older than 10.0.4, changing the parameter’s value to zero doesn’t work due to Bug 3334767. Instead, you can set a very large value.
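If you’d rather script the change than use the GUI, an ldapmodify sketch along these lines should work; note that the password policy entry DN below is only illustrative and varies between OID versions, so verify the actual DN in your directory first:

$ ldapmodify -h oidserver -p 3060 -D cn=orcladmin -w ****** <<EOF
dn: cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext
changetype: modify
replace: pwdmaxage
pwdmaxage: 0
EOF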

Categories: DBA Blogs

Watch: Hadoop vs. Riak

Mon, 2014-12-15 09:14

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Riak and Hadoop are quite different data platforms,” Alex says. “Hadoop is actually the system that would process the data that Riak is collecting.” Learn how the two systems are complementary rather than competitive by watching Alex’s video Hadoop vs. Riak.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #401, A Carnival of the Vanities for DBAs

Fri, 2014-12-12 09:00

This Log Buffer Edition goes right through the fields of salient database blog posts and comes out with something worth reading.


Oracle:

Extract SQL full text from SQL Monitor html.

Disruption: Are Hot Brands Breaking the Rules?

Understanding Flash: Unpredictable Write Performance.

The caveats of running .sql scripts with GUI tools.

File Encoding in the Next Generation Outline Extractor.

SQL Server:

Arshad Ali discusses how to use CTE and the ranking function to access or query data from previous or subsequent rows.

SSRS – Report for Stored Procedure with Multiple Values Passed.

Continuous Delivery for Databases: Microservices, Team Structures, and Conway‘s Law.

Scripting SQL Server databases with SMO using EnforceScriptingOptions.

How to troubleshoot SSL encryption issues in SQL Server.

MySQL:

MySQL 5.7: only_full_group_by Improved, Recognizing Functional Dependencies, Enabled by Default!

MaxScale, manual control, external monitors and notification methods.

Recover MySQL root password without restarting MySQL (no downtime!)

Oracle DBAs have had the luxury of their V$ variables for a long time while we MySQL DBAs pretended we were not envious.

Categories: DBA Blogs

Call for Papers for the O’Reilly MySQL Conference

Tue, 2014-12-09 14:35

The call for papers for the O’Reilly MySQL Conference is now open, and closes October 25th.  Submit your proposal now at http://en.oreilly.com/mysql2011/user/proposal/propose/cfp/126!

Categories: DBA Blogs

Final Videos of Open DB Camp Online

Tue, 2014-12-09 12:11

The final videos from Open DB Camp, held back in May in Sardinia, Italy, are now online. The full matrix of sessions, videos and slides can be found on the schedule page.

Hands on JDBC by Sandro Pinna – video

“MySQL Plugins, What are They? How you can use them to do wonders” by Sergei Golubchik of MariaDB – video

The State of Open Source Databases by Kaj Arnö of SkySQL – video

Coming soon, videos from OSCon Data!

Categories: DBA Blogs

Postgresql 9.1 – Part 1: General Features

Tue, 2014-12-09 12:00
General scope

Postgresql 9.1 runs under the theme “features, innovation and extensibility”, and it really delivers. This version was born to overcome Postgresql 9.0’s limitations and known bugs in replication. If you are developing on 9.0, it’s time to think seriously about preparing your code for Postgresql 9.1.

The intent of this series of posts is not to be yet another release-features post. I offer a vision based on my personal experience, focusing on the features that I found exciting for most of the projects where I’m involved. If you want to read an excellent general article about the new features of this version, see [2].

At the moment of this post, the latest PostgreSQL version is 9.1.1. It includes 11 commits to fix GIST memory leaks, VACUUM improvements, catalog fixes and others. A description of the minor release can be checked at [3].

The main features included are:

  • Synchronous Replication
  • Foreign Data Support
  • Per-column collation support
  • True SSI (Serializable Snapshot Isolation)
  • Unlogged tables for ephemeral data
  • Writable Common Table Expressions
  • K-nearest-neighbor added to GIST indexes
  • SE-Linux integration with the SECURITY LABEL command
  • Updates to the PL/Python server-side language
  • To come: a PGXN client for installing extensions easily from the command line. More information: http://pgxn.org/faq/. The source will be on https://github.com/pgxn/pgxn-client

Some of these features could be considered minor, but many find them very cool when using 9.1 in their environments.

Considerations before migrating

If you are a long-time Pg user, you may already know the migration risks listed below. Still, I advise you to note and carefully learn about these risks. Many users freeze their development on older versions simply because they don’t know how to solve new issues. The most notable case was when 8.3 stopped using implicit casts for some datatypes and many queries stopped working as a result.

There are some important changes that could affect your queries, so take a pen and note:

  • The default value of standard_conforming_strings is now on. That means that backslashes are ordinary characters (which is the SQL standard behavior). So, if your SQL code relies on backslash escapes, you must use E'' strings. For example: E'Don\'t'
  • Function-style and attribute-style data type casts were disallowed for composite types. If you have code like value_composite.text or text(value_composite), you will need to use CAST or the :: operator.
  • Domains based on arrays now have their constraints rechecked when the values are updated; previously these checks were skipped.
  • The string_to_array function now returns an empty array for a zero-length string (it previously returned NULL). The same function splits the string into individual characters if you use a NULL separator.
  • The inclusion of the INSTEAD OF action for triggers will require you to recheck the logic of your triggers.
  • If you are a current 9.0 replication user, you may know that in 9.1 you can control the side effects of VACUUM operations during big query executions on a replica. This is a really important improvement: if you run a big query on the slave server and the master starts a VACUUM process, the slave can request that the master postpone the cleanup of dead rows that are still being used by the query (a minimal configuration sketch follows this list).
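A sketch of the standby-side setting behind this behavior, assuming (as the description suggests) that it refers to the hot_standby_feedback parameter introduced in 9.1:

# postgresql.conf on the standby (new in 9.1): the standby reports its
# oldest running query to the master, so VACUUM on the master postpones
# removing dead rows that the query still needs
hot_standby_feedback = on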
Brief description of main features

Don’t worry about the details, we’ll cover each feature in future posts.

  • Synchronous Replication
    • This feature enhances the durability of the data. Only one server can be synchronous with the master; the rest of the replicated servers will be asynchronous. If the current synchronous server goes down, another server becomes synchronous (using the list of servers in synchronous_standby_names). Failover is not automatic, so you must use external tools to activate the standby sync server; one of the most popular is pgpool [4].
  • Foreign Data Support
    • The Foreign Data Wrappers feature has been included since 8.4, but now it is possible to reach data from any database for which a plugin exists. Included in the contribs is a wrapper called file_fdw, which connects CSV files to a linked table. Basically, it provides an interface to connect to external data. In my opinion, this is perhaps one of the most useful features of this version, especially if you have different data sources in your environment.
  • Serializable Snapshot Isolation
    • This new level of serialization is the strictest. Postgres now supports READ COMMITTED, REPEATABLE READ (the old serializable) and SERIALIZABLE. It uses predicate locking to keep track of cases where a write would have an impact on the result. You will not need explicit locks to use this level, due to the automatic protection provided.
  • Unlogged tables
    • Postgres uses the WAL log to record all data changes, preventing data loss and guaranteeing consistency in the event of a crash, but it consumes resources and sometimes we have data that is ephemeral or recoverable from other sources. In these cases, creating unlogged tables allows the database to have tables that are not logged in the WAL, reducing the writes to disk. On the other hand, this data will not be replicated, due to the mechanism of replication used by Postgres (WAL record shipping).
  • Writable Common Table Expressions
    • CTEs were included in the 8.4 version, but in this version they were improved so that you can now use writes inside the CTE (WITH clause). This could save a lot of code in your functions (see the sketch after this list).
  • K-nearest-neighbor added to GIST indexes
    • Postgres supports multiple types of indexes; one of them is GiST (Generalized Search Tree). With 9.1, we can define a ‘distance’ for datatypes and use it with a GiST index. Right now, this feature is implemented for the point type, the pg_trgm contrib module and the btree_gist datatypes. The operator for distance is <->. Another feature you will enjoy is that the LIKE and ILIKE operators can use the trgm index without scanning the whole table.
  • SE-Linux integration
    • Postgres is now the first database to be fully integrated with military-grade security. SECURITY LABEL applies a security label to a database object. This facility is intended to allow integration with label-based mandatory access control (MAC) systems such as SE-Linux, instead of the more traditional discretionary access control (DAC) with users and groups.
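To make a couple of these features concrete, here is a minimal SQL sketch (the table names are illustrative) of an unlogged table and a writable CTE:

-- Unlogged table: no WAL writes, so faster, but not crash-safe and not replicated
CREATE UNLOGGED TABLE session_cache (id int, payload text);

-- Writable CTE: archive old rows and delete them in a single statement
WITH moved AS (
    DELETE FROM events
    WHERE  created < now() - interval '30 days'
    RETURNING *
)
INSERT INTO events_archive
SELECT * FROM moved;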

References:

[1] http://www.postgresql.org/docs/9.1/static/release-9-1.html
[2] http://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.1
[3] http://www.postgresql.org/docs/9.1/static/release-9-1-1.html
[4] http://pgpool.projects.postgresql.org/

Categories: DBA Blogs

PalominoDB Percona Live: London Slides are up!

Tue, 2014-12-09 11:35

Percona Live: London was a rousing success for PalominoDB.  I was sad that I could not attend, but I got a few people who sent “hellos” to me via my coworkers.  But on to the most important stuff — slides from our presentations are online!

René Cannao spoke about MySQL Backup and Recovery Tools and Techniques (description) – slides (PDF)

 

Jonathan delivered a 3-hour tutorial about Advanced MySQL Scaling Strategies for Developers (description) – slides (PDF)

Enjoy!

Categories: DBA Blogs

A (BIG) Trick Listing Windows Updates Using PowerShell

Mon, 2014-12-08 10:29

If you (like me) are using Microsoft.Update.Session for listing the installed Windows updates, you might be surprised by something I spent a lot of time on today.

Our dream team here is delivering a very cool project for a client where we automated all the installation and control of Windows updates in an environment. After some weeks, another script runs in a second environment, gets the differences between the two environments, and installs only those updates that differ, meaning updates that have already been tested and approved. We will blog about it.

Well, one of our functions gets the installed updates on the computer passed as a parameter, but on a specific server a few KBs were being listed twice. That would not be weird if the UpdateID and the RevisionNumber were not the same. How can the same update, with the same KB, UpdateID and RevisionNumber, be installed twice? Ha! It cannot.

First, let’s take a look at part of my code:

$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session",$ComputerName))
$us = $session.CreateUpdateSearcher()
# Pull the complete update history from the remote computer
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
foreach ($Upd in $hot) {
    $OutPut = New-Object -Type PSObject -Prop @{
        'ComputerName'=$computername
        'UpdateDate'=$Upd.date
        'KB'=[regex]::match($Upd.Title,'KB(\d+)')
        'UpdateTitle'=$Upd.title
        'UpdateDescription'=$Upd.Description
        'SupportUrl'=$Upd.SupportUrl
        'UpdateId'=$Upd.UpdateIdentity.UpdateId
        'RevisionNumber'=$Upd.UpdateIdentity.RevisionNumber
    }
    Write-Output $OutPut
}

and the output was:

ComputerName : DEATHSTAR
UpdateDate : 8/6/2014 2:15:36 PM
RevisionNumber : 200
SupportUrl : http://support.microsoft.com
UpdateTitle : Security Update for Windows Server 2012 R2 (KB2961072)
KB : KB2961072
UpdateDescription : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UpdateId : f9180040-e423-4fab-9a5b-78c46e9db72c

ComputerName : DEATHSTAR
UpdateDate : 8/6/2014 2:47:19 PM
RevisionNumber : 200
SupportUrl : http://support.microsoft.com
UpdateTitle : Security Update for Windows Server 2012 R2 (KB2961072)
KB : KB2961072
UpdateDescription : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UpdateId : f9180040-e423-4fab-9a5b-78c46e9db72c

As you can see, there is a difference in the time between them: a few minutes.

Well, as I am outputting a PSObject in the function, I decided to use the live object instead and changed the output to have more data to analyze:

$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session",$ComputerName))
$us = $session.CreateUpdateSearcher()
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
# Output the raw COM objects this time, with all of their properties
$hot

And then the force awakens: take a look at the ResultCode and HResult values in the output below:

Operation : 1

ResultCode : 2

HResult : 0
Date : 8/6/2014 2:15:36 PM
UpdateIdentity : System.__ComObject
Title : Security Update for Windows Server 2012 R2 (KB2961072)
Description : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UnmappedResultCode : 0
ClientApplicationID : AutomaticUpdatesWuApp
ServerSelection : 1
ServiceID :
UninstallationSteps : System.__ComObject
UninstallationNotes : This software update can be removed by selecting View installed updates in the Programs and Features Control Panel.
SupportUrl : http://support.microsoft.com

Operation : 1
ResultCode : 4
HResult : -2145099757
Date : 8/6/2014 2:47:19 PM
UpdateIdentity : System.__ComObject
Title : Security Update for Windows Server 2012 R2 (KB2961072)
Description : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UnmappedResultCode : -2145099757
ClientApplicationID : AutomaticUpdatesWuApp
ServerSelection : 1
ServiceID :
UninstallationSteps : System.__ComObject
UninstallationNotes : This software update can be removed by selecting View installed updates in the Programs and Features Control Panel.
SupportUrl : http://support.microsoft.com
Categories : System.__ComObject

Can you see the difference? Can you feel the force?

The second one was tried and failed to install (ResultCode 4). The possible result codes are:

0 = Not Started
1 = In Progress
2 = Succeeded
3 = Succeeded With Errors
4 = Failed
5 = Aborted

It means that the COM object also lists the updates that failed to install, or are in any of the situations described above. If the same update failed to install 6 times and then installed successfully once, it will show up 7 times in the list.

You need to filter on the ResultCode to get only the successfully installed updates, ResultCode = 2 (in my case):
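For a quick interactive check, the same filter can be applied directly to the history list (a sketch reusing the $hot variable from the code above):

# Operation 1 = Installation, ResultCode 2 = Succeeded
$hot | Where-Object { $_.Operation -eq 1 -and $_.ResultCode -eq 2 }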

So I changed my code to:

$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session",$ComputerName))
$us = $session.CreateUpdateSearcher()
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
foreach ($Upd in $hot) {
    # Keep only updates that were actually installed successfully
    if ($Upd.operation -eq 1 -and $Upd.resultcode -eq 2) {
        $OutPut = New-Object -Type PSObject -Prop @{…………………………….

Remember… If it is PowerCool, it is PowerShell!

 

Categories: DBA Blogs

Watch: Hadoop vs. HBase

Mon, 2014-12-08 09:58

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“…It’s actually not quite fair comparing them,” Alex says. “HBase is part of the Hadoop ecosystem… You could see them living with each other in the same cluster.” Learn how HBase and Hadoop can work together by watching Alex’s video Hadoop vs. HBase.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #400, A Carnival of the Vanities for DBAs

Fri, 2014-12-05 09:40

Another century mark achieved by the Log Buffer as it reaches edition 400. The freshness and uniqueness of the Log Buffer are still as youthful as they were with edition 1. Enjoy the gems of Oracle, SQL Server and MySQL.

Oracle:

What Cloud Infrastructure Will Best Deliver?

Adaptive Case Management 12c and ADF Human Tasks.

What Does “Backup Restore Throttle Speed” Wait Mean?

All You Need, and Ever Wanted to Know About the Dynamic Rolling Year.

Using grant connect through to manage database links.

The Future of Oracle Forms Straight From the Source’s Mouth.

SQL Server:

Create a repository of all your database devices and stay informed about changes in their size and usage.

When a hospital’s mission-critical database fails at Christmas, disaster for the hospital – and its hapless DBA – seems certain. With less than an hour to spare before catastrophe, can the DBA Team save the day?

How do you use SQL Server, and how do you expect this to change next year?

How can you get a list of columns that have changed within a trigger in T-SQL? How can you see what bits are set within a varbinary or integer? How would you pass a bitmap parameter to a system stored procedure?

Have you ever wanted to run a query across every database on a server with the convenience of a stored procedure? If so, Microsoft provided a stored procedure to do so. It’s unreliable, outdated, and somewhat obfuscated, though. Let’s improve on it!

MySQL:

Thanks, Oracle, for fixing the stupid and dangerous SET GLOBAL sql_log_bin!

Auto-bootstrapping an all-down cluster with Percona XtraDB Cluster.

Proposal to deprecate collation_database and character_set_database settings.

Puppet is a powerful automation tool that helps administrators manage complex server setups centrally. You can use Puppet to manage MariaDB.

Tips from the trenches for over-extended MySQL DBAs.

Categories: DBA Blogs

Join Us For a Networking Event at UKOUG

Fri, 2014-12-05 09:25
[UKOUG event photo]

Ask not what you can do for your data. Ask what your data can do for you!

Join us for an informal networking event alongside Rittman Mead on Monday December 8th during UKOUG. We will be discussing how to leverage data to drive your organization’s success. Come meet with peers and industry experts, Mark Rittman and Jon Mead of Rittman Mead, and Marc Fielding and Christo Kutrovsky of Pythian. The networking event will take place at PanAm Bar and Restaurant in Liverpool from 6-8 PM, and will include drinks and light refreshments.

Please be sure to RSVP to the event here—we hope to see you there! Find more information about Pythian’s speaking sessions here.

Questions? Please contact Elliot Zissman, Director of Sales at zissman@pythian.com.

Categories: DBA Blogs

What Does “Backup Restore Throttle Speed” Wait Mean?

Thu, 2014-12-04 11:29

After migrating a 10g database to 11g, I asked the application team to start their tests in order to validate that everything was working as expected. I decided to keep an eye on OEM’s Top Activity page while they were running the most important job. I already knew what kind of “colors” I would find because I had checked its behavior in the former version. Suddenly, a strange kind of wait appeared on my screen: it was my first encounter with Backup Restore Throttle Speed.

 

[OEM Top Activity graph]

I had never seen this wait before. It was listed in a user’s session, so its name really confused me. No RMAN operations were running at that time, and the FRA was almost empty. I checked Oracle’s documentation and My Oracle Support, and found nothing but one Community post from 24-SEP-2013 with no conclusions. In the meantime, the job ended and I got confirmation that everything was well, even faster than in the old version. Weird, very weird. It was time to review the PL/SQL code.

After reading lots of lines, a function inside the package caught my attention:

Sleep (l_master_rec.QY_FIRST_WAIT_MIN * 60);

Since the job was using a log table to keep track of its execution, I was able to match the wait time with this function pretty quickly. This code was inside the function’s DDL:

-- sleep in 600-second chunks, then sleep the remainder
for i in 1 .. trunc( seconds_to_sleep/600 )
loop
   sys.DBMS_BACKUP_RESTORE.SLEEP( 600 );
end loop;
sys.DBMS_BACKUP_RESTORE.SLEEP( seconds_to_sleep - trunc(seconds_to_sleep/600)*600 );

Finally I found the reason for this wait (and the explanation for its backup/restore-related name): DBMS_BACKUP_RESTORE.SLEEP. As described in MOS note “How to Suspend Code Execution In a PL/SQL Application (Doc ID 1296382.1)”, the package was being used to pause the job’s execution while waiting for another task to finish.
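The wait is easy to reproduce (a hedged sketch): call the sleep from one session and watch for the event from another:

-- Session 1: sleep for 60 seconds via the backup/restore package
exec sys.DBMS_BACKUP_RESTORE.SLEEP( 60 );

-- Session 2: the sleeping session shows up waiting on the event
select sid, event, seconds_in_wait
  from v$session
 where event like 'Backup Restore Throttle%';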

Lastly, it’s worth noting that OEM did not graph this wait on the 10g database, but it was always there.

Categories: DBA Blogs

Watch: Hadoop vs. Cassandra

Mon, 2014-12-01 10:53

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Hadoop is generally deployed in a single data center, multi-RAC deployment, but they’re all reasonably geographically co-located with each other,” Alex explains. Cassandra on the other hand, “…is frequently deployed in a very distributed fashion… Somewhere in Asia, Europe, North America… So you end up with a very fault-tolerant environment.” Learn how the two platforms compare by watching Alex’s video Hadoop vs. Cassandra.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #399, A Carnival of the Vanities for DBAs

Fri, 2014-11-28 09:52

This Log Buffer Edition brings some freshly picked blog posts right out of the gardens of Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a process A that you want to run faster. This process doesn’t commit (much), so it doesn’t wait on log file sync. However, there is another multi-threaded process, B, that commits very frequently, and spends a lot of time on “log file sync”.

New Solaris 11 CPU package to install and track CVE security fixes.

Upgrade/Migrate/Consolidate to Oracle 12c and Parallel Multitenant Upgrade Internals.

CVE metadata in Solaris IPS packages for improved Compliance reporting.

Why Your Supply Chain Needs Science?

SQL Server:

SQL Server expert David Poole discusses how teams can work together and share templates in Management Studio.

Explaining The SQL Server 2014 Analysis Services Data Mining Model Lift Chart.

By this stage, you should be familiar with the basics of SQL Server indexes. We’ve discussed what an Index actually is, as well as some of the most common types you’re likely to encounter. Now that we’ve seen some simple examples of how Indexes can be useful, we’re going to delve deeper into nonclustered indexes, as we’ll see how they can improve the performance of more complex queries.

The Project Deployment Model introduced in SSIS 2012 speeds up the deployment of database projects in which there may be hundreds of SSIS packages per project. Not only that, but deployments can be configured differently for each environments such as test and staging, and there are now ways of monitoring the status and performance of packages and of versioning the SSIS Catalog.

A technique to deal with lack of metadata for stored procedures when used with SSIS.

MySQL:

How to mess up your data using ONE command in MySQL/Galera.

High season is coming, how do you make sure that MySQL will handle the increased load? Stress tests could help with that, but it’s not a good idea to run them in a production environment. In this case Select_scan, Select_full_join and other MySQL counters could quickly give you an idea of how many queries are not performing well and could cause a performance degradation as the load goes up.

Nasty MySQL Replication Bugs that Affect Upgrade to 5.6.

(More) Secure local passwords in MySQL 5.6 and up.

Alternatives for chunking bulk deletes in common_schema.

Categories: DBA Blogs

Pythian at UKOUG 14

Tue, 2014-11-25 13:39

Will you be joining us at the UKOUG Conference and Exhibition in Liverpool, UK? Over 200 world-class speakers and industry experts will be in attendance, including some of our very own.

Michael Abbey, Oracle ACE and Team Lead at Pythian, notes that Pythian’s attendance is important, not only for the company and its employees, but for the Oracle community as a whole. “Pythian’s presence at UKOUG this year is the next chapter in an ongoing participation in database and EIS technical events around the world,” Michael explains. “We pride ourselves on presence on all seven continents, and our appearances in Liverpool are strategic to the worldwide user community, as are all the locations we frequent every year. The user group community is one of the fundamental building blocks for technical resources as they hone their skills to better serve their clients and the masses in general. Since its founding, Pythian has been a strategic and financial support organization feeding talent to many of the world’s largest technical shows.”

“You will see a number of presenters from the Pythian suite of experts including Oracle ACEs, Oracle ACE Directors, and members of the OakTable Network. We look forward to catching up with new acquaintances and rekindling existing relationships.” In the meantime, you can find their speaking sessions down below.

UPDATE: Join us for an informal networking event alongside Rittman Mead on Monday December 8th during UKOUG. We will be discussing how to leverage data to drive your organization’s success. Come meet with peers and industry experts, Mark Rittman and Jon Mead of Rittman Mead, and Marc Fielding and Christo Kutrovsky of Pythian. The networking event will take place at PanAm Bar and Restaurant in Liverpool from 6-8 PM, and will include drinks and light refreshments.

Please be sure to RSVP to the event—we hope to see you there!

 

RMAN: The Necessary Basics

Presented by Michael Abbey | Monday December 8, 2014 — 9:00-9:50 AM

This session, best suited for attendees just getting started with RMAN, covers the basic skills needed to write consistent backups and perform recovery activities from day one. Highlighted in this masterclass will be the following: RMAN architecture, using a catalog, backups to disk, backups to tape, recovery (complete and incomplete), database duplication, and tips and tricks.

About Michael: Michael Abbey is a seasoned and experienced presenter on the Oracle Database CORE technology. He first cut his teeth on V3 in 1986 and it has been a whirlwind of Oracle since then. Michael co-authored the first work in the Oracle Press series in 1994.

 

Why Use OVM for Oracle Database?
Presented by Francisco Munoz Alvarez | Monday December 8, 2014 — 5:00-5:50 PM

A vibrant session about OVM that will explain how, when, and why to use this product for virtualization. It will also give an overview of how Revera is currently using this product in NZ (the biggest OVM farm in the ANZ region) and show benchmark results between bare metal, OVM, and ESX, concluding with some tips and showing the scalability and load break point of the virtualization solutions.

Come and discover the answers for the following questions:

  • Does an Oracle Database perform well on a virtualized environment?
  • What virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between using a bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

About Francisco:  Working out of Pythian’s Australian office in Macquarie Park, Francisco Munoz Alvarez is Vice President and Managing Director of Service Delivery in Asia Pacific, overseeing its regional expansion effort. Francisco previously served as Chief Technology Officer at Database Integrated Solutions Ltd and has more than two decades’ experience in Oracle databases. As President of the Chilean Oracle Users Group and founder of the Oracle Technology Network (OTN) tours in Latin America and Asia Pacific, he is best known for his evangelist work with the Oracle community.

 

Big Data with Exadata
Presented by Christo Kutrovsky | Tuesday December 9, 2014 — 5:30-6:20 PM

In this presentation, Oracle ACE Christo Kutrovsky will discuss common big data use cases and how they can be implemented efficiently with Exadata. Attendees will learn how Exadata actually delivers most of the benefits touted by newer big data technologies, and can often be the right platform for data scalability.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Measuring Performance in Oracle Solaris and Oracle Linux
Presented by Christo Kutrovsky | Wednesday December 10, 2014 —9:00-9:50 AM

You can’t improve what you can’t measure. If you want to get the most value from your database, you need to start with the basics: are you using your hardware and operating systems efficiently? Attend this session to learn how to measure system utilization in the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Lessons Learned in Implementing Oracle Access Manager 11g with Forms, Reports, and Discoverer
Presented by Sudeep Raj and Maris Elsins | Wednesday December 10, 2014 — 12:30-1:20 PM

Support for SSO ended in December 2011. To take advantage of the latest security enhancements, it is recommended that customers upgrade to the latest and greatest version of the product, i.e. OAM/OID 11g; it is very important to stay on supported configurations. This session will give you an opportunity to understand how OAM 11g can be configured with Forms/Reports/Discoverer 11g and integrated with an external directory service like Microsoft AD, and will discuss upgrade considerations for customers planning to upgrade from 10g SSO/OID to OAM 11g and OID 11g.

About Sudeep: Sudeep Raj is a Team Lead/Oracle Applications Database Consultant at Pythian, managing a group of expert DBAs spread across the globe. With nearly a decade of experience as an Apps DBA, he has been involved in and led multiple Oracle E-Business Suite 11i/R12 implementation, maintenance, migration and upgrade projects. Sudeep is an OCP certified professional and holds a Bachelor of Engineering degree in Computer Science.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle-related conferences like Collaborate, UKOUG, and LVOUG, where he is a board member. Maris is an exceptional troubleshooter and enjoys learning why things behave the way they do.

 

Optimizing and Simplifying Complex SQL with Advanced Grouping
Presented by Jared Still | Wednesday December 10, 2014 — 3:30-4:20 PM

This presentation will show how advanced grouping features such as ROLLUP, CUBE, and GROUPING SETS can simplify SQL that was previously quite complex, reducing the amount of code needed, improving readability, and, perhaps most importantly, greatly improving the performance of SQL statements.
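To give a flavor of the kind of simplification involved (a minimal sketch using a hypothetical sales table with region, product, and amount columns, not an example taken from the session itself), a single GROUPING SETS query can replace several aggregations that would otherwise be stitched together with UNION ALL:

-- One pass over the data produces per-region totals, per-product totals,
-- and a grand total; without GROUPING SETS this takes three separate
-- queries combined with UNION ALL.
SELECT region, product, SUM(amount) AS total_sales
FROM sales
GROUP BY GROUPING SETS ((region), (product), ())
ORDER BY region, product;

Because the table is scanned once rather than once per grouping, this form tends to be both shorter and faster.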

About Jared: Jared Still is a Senior Database Consultant at Pythian. His experience includes working with Oracle databases beginning with version 7.0. While Oracle has expanded to encompass many aspects of the application environment, Jared’s focus has been on the database itself and related infrastructure.

 

Oracle RAC — Designing Applications for Scalability
Presented by Christo Kutrovsky | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Real Application Clusters (RAC) promises 100% transparent active-active clustering technology and true horizontal scaling, but does it work in all cases? This presentation explores the challenges with Oracle’s active-active solution and how to solve them from both the database side and the application side. Both conceptual design and highly practical solutions are explored.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Database as a Service on the Oracle Database Appliance Platform
Presented by Marc Fielding and Maris Elsins | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Database Appliance provides a robust, highly available, cost-effective, and surprisingly scalable platform for database-as-a-service environments. By leveraging Oracle Enterprise Manager’s self-service features, databases can be provisioned on a self-service basis to a cluster of Oracle Database Appliance machines. Discover how multiple ODA devices can be managed together to provide both high availability and incremental, cost-effective scalability. Hear real-world lessons learned from successful database consolidation implementations.

About Marc: Marc Fielding is a passionate and creative problem solver, drawing on deep understanding of the full enterprise application stack to identify the root cause of problems, and to implement effective and sustainable solutions. He has extensive experience implementing Oracle’s engineered system portfolio, including leading one of the first enterprise Oracle Exadata implementations. Marc has a strong background in performance tuning and high availability.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle-related conferences such as Collaborate, UKOUG, and LVOUG, where he is a board member. Maris is an exceptional troubleshooter and enjoys learning why things behave the way they do.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Oracle expertise or read some of our Oracle-related blog posts.

Categories: DBA Blogs

Log Buffer #398, A Carnival of the Vanities for DBAs

Mon, 2014-11-24 09:10

This Log Buffer Edition covers some informative and interesting posts from Oracle, SQL Server, and MySQL.

Oracle:

If you’re not using Toad DBA Suite, it’s sometimes hard to find solutions. Somebody wanted to know how to find indexes that aren’t indirect. Indirect indexes are those created for a primary key, because a primary key column or set of columns is both not null and uniquely constrained. You can’t drop a unique index for a primary key without dropping the primary key constraint that indirectly created it.
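For a quick way to see which indexes fall into that category (a minimal sketch against the standard dictionary views, not taken from the post itself), the INDEX_NAME column of USER_CONSTRAINTS ties each primary key to the index that enforces it:

-- List each primary-key constraint in the current schema together with
-- the index, possibly indirectly created, that enforces it.
SELECT table_name, constraint_name, index_name
FROM user_constraints
WHERE constraint_type = 'P';

Indexes in USER_INDEXES whose names do not appear in that list are then candidates for the “not indirect” set.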

At the NoCOUG fall conference at the eBay town hall in San Jose, we got a first-hand look at the workings of the most complex database environments in the world.

Is there a feature you would like added to Inventory or Inventory Items? Tired of logging Service Requests with Support to request product enhancements? Well, those days are over. You can now submit Enhancement Requests (ERs) for Logistics (Inventory) and/or Inventory Items (APC/PLM/PIM) directly in their respective Communities.

Oracle Database 12c : EXPAND_SQL_TEXT, APPROX_COUNT_DISTINCT, Session Sequences and Temporary Undo.

Integrating Cordova Plugin with Oracle MAF – iOS Calendar Plugin by Chris Muir.

SQL Server:

Learn how to invoke SSRS reports from an SSIS package after the data load is completed.

Questions About Using TSQL to Import Excel Data You Were Too Shy to Ask.

This article shows a step-by-step guide to move the distribution database to a new SQL Server instance.

Monitoring Azure SQL Database.

Stairway to SQL Server Agent – Level 2: Job Steps and Subsystems.

Where in the Application Should Data Validation be Done?

MySQL:

Creating JSON documents with MariaDB.

Geographic replication with MySQL and Galera.

Sys Schema for MySQL 5.6 and MySQL 5.7.

Logging with MySQL: Error-Logging to Syslog & EventLog.

Multi-source Replication with Galera Cluster for MySQL.

Categories: DBA Blogs

Watch: Hadoop vs. Oracle Exadata

Mon, 2014-11-24 09:05

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky. Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presenting use cases to help you identify which ones will best suit your needs.

“Obviously there’s a big difference in cost,” Alex explains. “However, you would require significantly more engineering effort invested in Hadoop—so it’s a matter of scale when a solution like Hadoop becomes cost efficient.” Learn more key differentiators by watching Alex’s video Hadoop vs. Oracle Exadata.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

New leafmisscount Option in Flex Cluster 12c

Fri, 2014-11-21 09:40

In previous versions of Oracle RAC we worked with the crsctl option “misscount”, which applied to all servers because there was only one kind of server. The new version of Oracle (12c), however, introduces a new kind of RAC configuration: the Flex Cluster. With a Flex Cluster we have two kinds of servers, Hub Nodes and Leaf Nodes. Hub Nodes follow the same concept as normal nodes in previous versions, but Leaf Nodes are different. I will let the Oracle documentation define these servers:

Hub Nodes are similar to Oracle Grid Infrastructure nodes in an Oracle Clusterware standard Cluster configuration: they are tightly connected, and have direct access to shared storage. In an Oracle Flex Cluster configuration, shared storage can be provisioned to Leaf Nodes independent of the Oracle Grid Infrastructure.

Leaf Nodes are different from standard Oracle Grid Infrastructure nodes, in that they do not require direct access to shared storage, but instead request data through Hub Nodes. Hub Nodes can run in an Oracle Flex Cluster configuration without having any Leaf Nodes as cluster member nodes, but Leaf Nodes must be members of a cluster that includes at least one Hub Node.

Reference: https://docs.oracle.com/database/121/CWADD/bigcluster.htm#CWADD92560

With this new kind of server we also have a new option to configure the maximum time that a Leaf Node can be tolerated with heartbeat issues. This crsctl option is “leafmisscount”, and it works exactly like “misscount” does for Hub Nodes, but applies to Leaf Nodes.

Let’s see how it works:

My Flex Cluster Configuration

[root@dg1 ~]# crsctl check cluster -all
**************************************************************
dg1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dg2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dg3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@dg1 ~]#

[root@dg1 ~]# crsctl get node role status -all
Node 'dg1' active role is 'hub'
Node 'dg2' active role is 'hub'
Node 'dg3' active role is 'leaf'
[root@dg1 ~]#

Checking the current value:

[grid@dg1 ~]$ crsctl get css leafmisscount
CRS-4678: Successful get leafmisscount 30 for Cluster Synchronization Services.
[grid@dg1 ~]$

How it works with 30 seconds:

[grid@dg2 ~]$ date
Fri Nov 14 16:03:54 EST 2014

-- Stop connectivity between Leaf Node and Hub Nodes

[root@dg3 ~]# date
Fri Nov 14 16:04:01 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:04:29 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[root@dg3 ~]#

As the timestamps show, the Leaf Node loses contact with the cluster within roughly 30 seconds of the connectivity loss.

Now let’s increase the value to 600 seconds:

[grid@dg1 ~]$ crsctl set css leafmisscount 600
CRS-4684: Successful set of parameter leafmisscount to 600 for Cluster Synchronization Services.
[grid@dg1 ~]$

Let’s confirm the value:

[grid@dg2 ~]$ crsctl get css leafmisscount
CRS-4678: Successful get leafmisscount 600 for Cluster Synchronization Services.
[grid@dg2 ~]$

The maximum value we can specify is 600:

[grid@dg1 ~]$ crsctl set css leafmisscount 3600
CRS-1671: The value for parameter leafmisscount is outside the allowed range of 1 to 600
[grid@dg1 ~]$

Let’s see how it works with 600:

[root@dg3 ~]# date
Fri Nov 14 16:19:54 EST 2014

-- Stop connectivity between Leaf Node and Hub Nodes

[root@dg3 ~]# date
Fri Nov 14 16:24:01 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:25:18 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:27:30 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:28:08 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:29:04 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:29:50 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@dg3 ~]# date
Fri Nov 14 16:30:10 EST 2014
[root@dg3 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[root@dg3 ~]#

With leafmisscount set to 600, the Leaf Node tolerates the lost connectivity for almost ten minutes (16:19:54 to 16:29:50) before Cluster Ready Services drops, and shortly afterwards it loses contact with the cluster entirely.

Categories: DBA Blogs