
DBA Blogs

Inserting into a table with potentially long rows

Hemant K Chitale - Fri, 2015-01-09 09:25
Note :  This post builds on blog posts by Nikolay Savvinov and Jonathan Lewis.


Nikolay Savvinov and Jonathan Lewis have identified that when you have a table with a potentially large row size -- where the theoretical maximum row length exceeds the block size -- redo overheads are significantly greater for multi-row inserts.

First, I demonstrate with a table with a small potential row size: a table with 1 number column and 3 VARCHAR2 columns with a total maximum length of 30 characters.  The actual data inserted is also very small.  The first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2') and so on to (1000,'1000','1000','1000') for a total of 1,000 rows:

SQL> drop table hkc_test_small_row_size purge;
drop table hkc_test_small_row_size purge
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL> create table hkc_test_small_row_size
2 (id_column number, data_col_1 varchar2(10), data_col_2 varchar2(10), data_col_3 varchar2(10));

Table created.

SQL> create unique index hkc_tlrs_undx on hkc_test_small_row_size(id_column);

Index created.

SQL>
SQL>
SQL>
SQL> select n.name, s.value At_Beginning
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME AT_BEGINNING
---------------------------------------------------------------- ------------
redo entries 102
redo size 23896

SQL>
SQL> insert into hkc_test_small_row_size
2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value Normal_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME NORMAL_INSERT
---------------------------------------------------------------- -------------
redo entries 154
redo size 92488

SQL>
SQL>
SQL>
SQL> insert /*+ APPEND */ into hkc_test_small_row_size
2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value APPEND_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME APPEND_INSERT
---------------------------------------------------------------- -------------
redo entries 252
redo size 193396

SQL>
SQL>
SQL> drop table hkc_test_small_row_size purge;

Table dropped.

SQL>

Thus, we can see that, for the "SMALL_ROW_SIZE" table, the redo entries for 1000 of these small rows  :
Simple Insert of 1,000 rows :   52 redo entries and 68,592 bytes.
Direct Path (APPEND) of 1,000 rows : 98 redo entries and 100,908 bytes.


Next, I demonstrate with a table with a large potential row size -- exceeding the database block size.  Exactly the same data set is inserted -- the first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2') and so on to (1000,'1000','1000','1000') for a total of 1,000 rows.

SQL> drop table hkc_test_large_row_size purge;
drop table hkc_test_large_row_size purge
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL> create table hkc_test_large_row_size
2 (id_column number, data_col_1 varchar2(4000), data_col_2 varchar2(4000), data_col_3 varchar2(4000));

Table created.

SQL> create unique index hkc_tlrs_undx on hkc_test_large_row_size(id_column);

Index created.

SQL>
SQL>
SQL>
SQL> select n.name, s.value At_Beginning
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME AT_BEGINNING
---------------------------------------------------------------- ------------
redo entries 102
redo size 23900

SQL>
SQL>
SQL> insert into hkc_test_large_row_size
2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value Normal_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME NORMAL_INSERT
---------------------------------------------------------------- -------------
redo entries 2145
redo size 526320

SQL>
SQL>
SQL> insert /*+ APPEND */ into hkc_test_large_row_size
2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value APPEND_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME APPEND_INSERT
---------------------------------------------------------------- -------------
redo entries 2243
redo size 627228

SQL>
SQL>
SQL> drop table hkc_test_large_row_size purge;

Table dropped.

SQL>

Thus, we can see that, for the "LARGE_ROW_SIZE" table, the redo entries for 1000 of these actually small rows :
Simple Insert of 1,000 rows :   2,043 redo entries and 502,420 bytes.
Direct Path (APPEND) of 1,000 rows : 98 redo entries and 100,908 bytes.

Therefore, a simple (regular) Insert into a table where the *potential* row size (not necessarily the actual row size) exceeds the block size is very expensive.  Apparently, the optimization to insert multiple rows into a single block with minimal redo is not invoked when Oracle thinks that a row may exceed the block size; it switches to a row-by-row insert.  This is evident in the LARGE_ROW_SIZE case, where 2,043 redo entries is more than 1,000 rows plus 1,000 index entries.  The SMALL_ROW_SIZE case had only 52 redo entries.
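To check whether a table is exposed to this behaviour, its potential row size can be estimated from the data dictionary.  A quick sketch (DATA_LENGTH is the maximum byte length of each column):

select table_name, sum(data_length) potential_row_size
from user_tab_columns
where table_name in ('HKC_TEST_SMALL_ROW_SIZE','HKC_TEST_LARGE_ROW_SIZE')
group by table_name;

If the total (plus row overhead) exceeds the block size, the table is a candidate for this row-by-row behaviour.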
Remember: This happens when
(a) the potential row size exceeds the block size (irrespective of the actual row size, which may be very few bytes)
AND
(b) a regular (non-Direct) Insert is used.
In such cases, a Direct Path Insert works out better.  There are, however, obvious downsides to Direct Path Inserts -- the HighWaterMark keeps extending, irrespective of any Deletes that may have been issued.


Categories: DBA Blogs

Histograms Tidbits

Pakistan's First Oracle Blog - Fri, 2015-01-09 03:00
Make sure histograms exist on columns with uneven data distributions to ensure that the optimizer makes the best choice between indexes and table scans.




For range scans on data that is not uniformly distributed, the optimizer's decisions will be improved by the presence of a histogram.

Histograms increase the accuracy of the optimizer’s cost calculations but increase the overhead of statistics collections. It’s usually worth creating histograms for columns where you believe the data will have an irregular distribution, and where the column is involved in WHERE or JOIN expressions.

CREATING HISTOGRAMS WITH METHOD_OPT

The METHOD_OPT option controls how column-level statistics, in particular histograms, are created. The default value is 'FOR ALL COLUMNS SIZE AUTO', which enables Oracle to choose the columns that will have a histogram collected and to set the appropriate histogram bucket size.
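As a sketch (the SALES table and STATUS column here are hypothetical), a histogram can also be requested explicitly instead of relying on AUTO:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'SALES',
    method_opt => 'FOR COLUMNS STATUS SIZE 254'  -- up to 254 buckets on the skewed column
  );
END;
/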
Categories: DBA Blogs

Oracle GoldenGate Processes – Part 4 – Replicat

DBASolved - Thu, 2015-01-08 15:00

The replicat process is the apply process within the Oracle GoldenGate environment.  The replicat is responsible for reading the remote trail files and applying the data found in chronological order.  This ensures that the data is applied in the same order it was captured.

Until recently there was only one version of a replicat: the classic version.  As of 12.1.2.0, there are now three distinct versions of a replicat.  These replicat types are:

  • Classic Mode
  • Coordinated Mode
  • Integrated Mode

Each of these modes provides some benefit depending on the database being applied to.  Oracle is pushing everyone to a more integrated approach; however, you have to be on database version 11.2.0.4 at a minimum.
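For example, an integrated replicat can be added from GGSCI with the INTEGRATED keyword (a sketch, reusing the trail from the classic example below):

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add replicat REP, integrated, exttrail ./dirdat/rt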

Configuring the replicat process is similar to configuring the extract and data pump processes.

Adding a Replicat:

From GGSCI (classic):

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add replicat REP, exttrail ./dirdat/rt

Note:  The add command assumes that you already have a checkpoint table configured in the target environment.
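If a checkpoint table is not yet in place, here is a sketch of creating one from GGSCI (the ggate schema and table name are just examples):

GGSCI> dblogin userid ggate, password ggate
GGSCI> add checkpointtable ggate.checkpoints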

Edit Replicat parameter file:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> edit params REP

From Command Line:

$ cd $OGG_HOME
$ cd ./dirprm
$ vi REP.prm

Example of Replicat Parameter file:

REPLICAT REP
SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_3")
SETENV (ORACLE_SID="orcl")
USERID ggate, PASSWORD ggate
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 500
WILDCARDRESOLVE IMMEDIATE
MAP SCOTT.*, TARGET SCOTT.*;

Start the Replicat process:

Depending on whether or not you are starting the replicat for the first time, how you start it will be similar yet slightly different.

To start the Replicat after an initial load:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start replicat REP, atcsn [ current_scn ]

Note: The current_scn needs to be obtained from the source database prior to doing the initial load of data to the target.  This ensures the consistency of the data and provides a starting point for the replicat to begin applying data from.
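A sketch of capturing the SCN on the source before the initial load is taken:

SQL> select current_scn from v$database;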

To start Replicat normally:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start replicat REP

Stop the Replicat process:

Stop replicat normally:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop replicat REP

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle GoldenGate Processes – Part 3 – Data Pump

DBASolved - Thu, 2015-01-08 14:00

The data pump group is a secondary extract group that is used to help send data over a network.  Although a data pump is another extract group, don't confuse it with the primary extract group. The main purpose of the data pump extract is to write the captured data over the network to the remote trail files on the target system.

Note: if the data pump is not configured, then the primary extract group will write directly to the remote trail file.

Why would you want to use a data pump process?  Two advantages of using a data pump are:

  • Protection against network failures
  • Consolidation of data from multiple sources into a single remote trail file

The data pump process, just like the extract process, uses a parameter file to run.  The parameter file can be edited either before or after adding the process to the Oracle GoldenGate environment.

Adding a Data Pump:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add extract PMP, exttrailsource ./dirdat/lt
GGSCI> add rmttrail ./dirdat/rt, extract PMP, megabytes 200

Note: In the example above, the data pump reads from ./dirdat/lt and writes to ./dirdat/rt on the remote server.

Edit Data Pump parameter file:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> edit params PMP

From Command Line:

$ cd $OGG_HOME
$ cd ./dirprm
$ vi ./PMP.prm

Example of Data Pump parameter file:

EXTRACT PMP
PASSTHRU
RMTHOST 172.15.10.10, MGRPORT 15000, COMPRESS
RMTTRAIL ./dirdat/rt
TABLE SCOTT.*;

Start/Stop the Data Pump:

Start the Data Pump:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start extract PMP

Stop the Data Pump:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop extract PMP

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle GoldenGate Processes – Part 2 – Extract

DBASolved - Thu, 2015-01-08 13:00

The extract process of Oracle GoldenGate is used to perform change data capture from the source database.  The extract can be used to read the online transaction log (in Oracle, the online redo logs) or the associated archive logs.  The data that is extracted from the source database is then placed into a trail file (another topic for a post) for shipping to the apply side.

To configure an extract process, there needs to be a parameter file associated with it.  The parameter file can be edited after adding the extract to the Oracle GoldenGate environment if needed.  In order to configure an extract, it needs to be added to the environment and assigned a local trail file location.

Adding an Extract and Local Trail File:

Using GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add extract EXT, tranlog, begin now
GGSCI> add exttrail ./dirdat/lt, extract EXT, megabytes 200

Note: The trail file is required to be specified with a two-letter prefix; Oracle GoldenGate appends a sequence number to it (lt000000, lt000001, and so on).

Edit Extract Parameter File:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> edit params EXT

Edit Extract Parameter File from Command Line:

$ cd $OGG_HOME
$ cd ./dirprm
$ vi ./ext.prm

Example of Extract Parameter File:

EXTRACT EXT
USERID ggate, PASSWORD ggate 
TRANLOGOPTIONS DBLOGREADER
SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_3")
SETENV (ORACLE_SID="orcl")
WARNLONGTRANS 1h, CHECKINTERVAL 30m
EXTTRAIL ./dirdat/lt
WILDCARDRESOLVE IMMEDIATE
TABLE SCOTT.*;

Start/Stop the Extract:

Start Extract:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start extract EXT

Stop Extract:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop extract EXT

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle GoldenGate Processes – Part 1 – Manager

DBASolved - Thu, 2015-01-08 12:00

Oracle GoldenGate is made up of processes that are used to ensure replication.  These processes include a manager process, an extract process, and a replicat process. The premise of this post is to focus on the manager process.

In order to configure and run an Oracle GoldenGate environment, a manager process must be running on all of the source, target, and intermediary servers in the configuration.  This is because the manager process is the controller of the Oracle GoldenGate processes: it allocates ports and performs file maintenance.  The manager process performs the following functions within the Oracle GoldenGate instance:

  • Starts processes
  • Starts dynamic processes
  • Starts the Controller process
  • Manages the port numbers for the processes
  • Performs trail file management
  • Creates reports for events, errors, and thresholds

Note: There is only one manager process per Oracle GoldenGate instance.

Configure Manager Process
Before a manager process can be started, it needs to be configured.  There are many different parameters that can be used in the manager parameter file, but the only required one is the PORT parameter.  The default port for a manager is 7809.  The manager parameter file can be edited either from the command line or from within the GGSCI utility.

Edit via command line:

$ cd $OGG_HOME/dirprm
$ vi mgr.prm

Edit via GGSCI:

GGSCI> edit params mgr

Example of a Manager parameter file:

PORT 15000
AUTOSTART ER *
AUTORESTART ER * , RETRIES 5, WAITMINUTES 5
PURGEOLDEXTRACTS ./dirdat/lt*, USECHECKPOINTS, MINKEEPHOURS 2

Start the Manager Process

Starting the manager process is pretty simple.  The process can be started either from the command line or from the GGSCI utility.

Start from command line:

$ cd $OGG_HOME
$ mgr paramfile ./dirprm/mgr.prm reportfile ./dirrpt/mgr.rpt

Start from GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start manager  
or
GGSCI> start mgr

Stop the Manager Process

Stopping the manager process is pretty simple as well (from GGSCI); the optional ! suppresses the confirmation prompt.

Stop from GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop manager [ ! ]

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle University Leadership Circle 2015

The Oracle Instructor - Thu, 2015-01-08 04:56

I’m in the Leadership Circle again :-) That is a quarterly Corporate Award for the best instructors worldwide according to customer feedback.

Oracle University generally has high quality standards, so it is hard to stand out individually. That makes me all the more proud to be listed together with these great colleagues:

Oracle University Leadership Circle 2015


Categories: DBA Blogs

Series of SaaS Implementation Workshops for EMEA Partners

We are pleased to announce a series of different SaaS/Cloud Implementation Workshops. Oracle will organize several Workshops between January and June 2015. It will be possible to join the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Career Day – Oracle Database Administrator

Bobby Durrett's DBA Blog - Tue, 2015-01-06 16:10

I will be talking at my daughter’s high school for career day on Monday, explaining my job as an Oracle Database Administrator.  Wish me luck!

The funny thing is that no one understands what Oracle DBAs do, unless they are one or work closely with one.  I have a feeling that my talk is going to fall flat, but if it helps one of the students in any way it will be worth it.

To me the best thing about being an Oracle DBA is that you can do a pretty interesting and technically challenging job and companies that are not technology centric will still hire you to do it.  I’ve always been interested in computer technology but have worked in non-technical companies my entire career – mainly a non-profit ministry and a food distribution company.  Neither company makes computers or sells software!

My other thought is how available computer technology is to students today.  Oracle, in one of the company’s more brilliant moves, made all of its software available for download so students can try out the very expensive software for free.  Plus all the manuals are available online.  What is it like to grow up as a student interested in computer technology in the age of the internet?  I can’t begin to compare it to my days in the 1980s when I was in high school and college.  Did we even have email?  I guess we must have but I can’t remember using it much.  Today a student who owns a laptop and has an internet connection has a world of technology at their fingertips far beyond what I had at their age.

Hopefully I won’t bore the students to tears talking about being an Oracle DBA.  They probably still won’t know what it really is after I’m done.  But at least they will know that such a job exists, and maybe that will be helpful to them.

– Bobby

P.S.  There were over 100 students there.  They were pretty polite, with only a little talking.  Here is a picture with me on the left, my daughter in the center, and a coworker who also spoke at the career day on the right.






Categories: DBA Blogs

Performance Problems with Dynamic Statistics in Oracle 12c

Pythian Group - Tue, 2015-01-06 09:55

I’ve been making some tests recently with the new Oracle 12.1.0.2 In-Memory option and have been faced with an unexpected performance problem.  Here is a test case:

create table tst_1 as
with q as (select 1 from dual connect by level <= 100000)
select rownum id, 12345 val, mod(rownum,1000) ref_id  from q,q
where rownum <= 200000000;

Table created.

create table tst_2 as select rownum ref_id, lpad(rownum,10, 'a') name, rownum || 'a' name2
from dual connect by level <= 1000;

Table created.

begin
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_1',
method_opt       => 'for all columns size 1',
degree => 8
);
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_2',
method_opt       => 'for all columns size 1'
);
end;
/
PL/SQL procedure successfully completed.

alter table tst_1 inmemory;

Table altered.

select count(*) from tst_1;

COUNT(*)
----------
200000000

Waiting for in-memory segment population:

select segment_name, bytes, inmemory_size from v$im_segments;

SEGMENT_NAME         BYTES INMEMORY_SIZE
--------------- ---------- -------------
TST_1           4629463040    3533963264

Now let’s make a simple two table join:

select name, sum(val) from tst_1 a, tst_2 b where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:00:00.17

The query runs pretty fast. The execution plan has the brand new vector transformation:

Execution Plan
----------------------------------------------------------
Plan hash value: 213128033

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                          |     1 |    54 |  7756  (21)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION        |                          |       |       |            |          |
|   2 |   LOAD AS SELECT                  | SYS_TEMP_0FD9D66FA_57B2B |       |       |            |          |
|   3 |    VECTOR GROUP BY                |                          |     1 |    24 |     5  (20)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED    | :KV0000                  |     1 |    24 |     5  (20)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL            | TST_2                    |     1 |    20 |     4   (0)| 00:00:01 |
|   6 |   HASH GROUP BY                   |                          |     1 |    54 |  7751  (21)| 00:00:01 |
|*  7 |    HASH JOIN                      |                          |     1 |    54 |  7750  (21)| 00:00:01 |
|   8 |     VIEW                          | VW_VT_377C5901           |     1 |    30 |  7748  (21)| 00:00:01 |
|   9 |      VECTOR GROUP BY              |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  10 |       HASH GROUP BY               |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  11 |        KEY VECTOR USE             | :KV0000                  |   200K|  2539K|  7748  (21)| 00:00:01 |
|* 12 |         TABLE ACCESS INMEMORY FULL| TST_1                    |   200M|  1716M|  7697  (21)| 00:00:01 |
|  13 |     TABLE ACCESS FULL             | SYS_TEMP_0FD9D66FA_57B2B |     1 |    24 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("NAME2"='50a')
   7 - access("ITEM_5"=INTERNAL_FUNCTION("C0") AND "ITEM_6"="C2")
  12 - inmemory(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))
       filter(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))

Note
-----
   - vector transformation used for this statement

After seeing such impressive performance, I decided to run the query in parallel:

select /*+ parallel(8) */ name, sum(val) from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:01:02.55

Query elapsed time suddenly jumped from 0.17 seconds to almost 1 minute and 3 seconds. The second execution, however, runs in 0.6 seconds.
The new plan is:

Execution Plan
----------------------------------------------------------
Plan hash value: 3623951262

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     1 |    29 |  1143  (26)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)               | :TQ10001 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                    |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                   | :TQ10000 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY                 |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |   200K|  5664K|  1142  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         JOIN FILTER CREATE          | :BF0000  |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  9 |          TABLE ACCESS FULL          | TST_2    |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         JOIN FILTER USE             | :BF0000  |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |          PX BLOCK ITERATOR          |          |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|* 12 |           TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."REF_ID"="B"."REF_ID")
   9 - filter("NAME2"='50a')
  12 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 8 because of hint

We can see a Bloom filter instead of a key vector, but this is not the issue. The problem comes from the “dynamic statistics used: dynamic sampling (level=AUTO)” note.
In the 10046 trace file I found nine dynamic sampling queries, and one of them was this one:

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
  optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
  */ SUM(C1)
FROM
 (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT /*+
  NO_VECTOR_TRANSFORM ORDERED */ "A"."VAL" "ITEM_1","A"."REF_ID" "ITEM_2"
  FROM "TST_1" "A") "VW_VTN_377C5901#0", (SELECT /*+ NO_VECTOR_TRANSFORM
  ORDERED */ "B"."NAME" "ITEM_3","B"."REF_ID" "ITEM_4" FROM "TST_2" "B" WHERE
  "B"."NAME2"='50a') "VW_VTN_EE607F02#1" WHERE ("VW_VTN_377C5901#0"."ITEM_2"=
  "VW_VTN_EE607F02#1"."ITEM_4")) innerQuery

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     43.92      76.33          0          5          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     43.92      76.33          0          5          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  RESULT CACHE  56bn7fg7qvrrw1w8cmanyn3mxr (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT AGGREGATE (cr=0 pr=0 pw=0 time=8 us)
         0          0          0    HASH JOIN  (cr=0 pr=0 pw=0 time=4 us cost=159242 size=2600000 card=200000)
 200000000  200000000  200000000     TABLE ACCESS INMEMORY FULL TST_1 (cr=3 pr=0 pw=0 time=53944537 us cost=7132 size=800000000 card=200000000)
         0          0          0     TABLE ACCESS FULL TST_2 (cr=0 pr=0 pw=0 time=3 us cost=4 size=9 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  asynch descriptor resize                        1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  CSS initialization                              1        0.00          0.00
  CSS operation: action                           1        0.00          0.00
  direct path write temp                       6267        0.02         30.37
********************************************************************************

Vector transformation is disabled, an inefficient table order is forced by the ORDERED hint, and we are waiting for hash table creation based on the huge TST_1 table.
The dynamic statistics feature has been greatly improved in Oracle 12c, with support for joins and group by predicates. This is why we see such a join during parse time. The new functionality is described in the “Dynamic Statistics (previously known as dynamic sampling)” section of the document Understanding Optimizer Statistics with Oracle Database 12c.

Let’s make a simpler test:

select /*+ parallel(2) */ ref_id, sum(val) from tst_1 a group by ref_id;

Execution Plan
----------------------------------------------------------
Plan hash value: 2527371111

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                   |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                    |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                 | :TQ10000 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY               |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR          |          |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of hint

We can see a “dynamic statistics used” note again. It’s a simple query without predicates on a single table with pretty accurate statistics. From my point of view, there is no reason for dynamic sampling here at all.
Automatic dynamic sampling was introduced in 11g Release 2. A description of this feature can be found in the document Dynamic sampling and its impact on the Optimizer:
“From Oracle Database 11g Release 2 onwards the optimizer will automatically decide if dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on size of the tables in the statement and the complexity of the predicates”.
It looks like the algorithm has changed in 12c and dynamic sampling is triggered in a broader set of use cases.
This behavior can be disabled at the statement, session, or system level using the fix control for bug 7452863. For example:

ALTER SESSION SET "_fix_control"='7452863:0';
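For statement-level control, the same fix control can be set with the OPT_PARAM hint; a sketch against the test query:

select /*+ parallel(8) opt_param('_fix_control' '7452863:0') */ name, sum(val)
from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;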

Summary

Dynamic statistics has been enhanced in Oracle 12c, but this can lead to longer parse times. Automatic dynamic statistics is also used more often in 12c, which can cause a parse time increase in more cases than before.

Categories: DBA Blogs

Troubleshooting a Multipath Issue

Pythian Group - Tue, 2015-01-06 09:37

Multipathing allows you to configure multiple paths from servers to storage arrays. It provides I/O failover and load balancing. Linux uses the device mapper kernel framework to support multipathing.

In this post I will explain the steps taken to troubleshoot a multipath issue; this should provide a glimpse into the tools and technology involved. The problem was reported on a RHEL6 system in which the backup software was complaining that the device from which /boot is mounted does not exist.

Following is the device. You can see the device name is a wwid.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1
198337 61002 127095 33% /boot

File /dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1 is missing under /dev/mapper.

# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 9 2013 control
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpatha -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathap1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathb -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathc -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp2 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvroot -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvswap -> ../dm-7

From /etc/fstab, it is found that the UUID of the device is specified.

UUID=6dfd9f97-7038-4469-8841-07a991d64026 /boot ext4 defaults 1 2

From blkid, we can see the device associated with the UUID. The blkid command prints the attributes of all block devices in the system.

# blkid
/dev/mapper/mpathcp1: UUID="6dfd9f97-7038-4469-8841-07a991d64026" TYPE="ext4"

Remounting the /boot mount point shows the user friendly name /dev/mapper/mpathcp1.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/mpathcp1 198337 61002 127095 33% /boot

So far, we can see that the system boots with the wwid as the device name, but that the device name is later converted into a user friendly name. In the multipath configuration, user_friendly_names is enabled.

# grep user_friendly_names /etc/multipath.conf
user_friendly_names yes

As per Red Hat documentation,

“When the user_friendly_names option in the multipath configuration file is set to yes, the name of a multipath device is of the form mpathn. For the Red Hat Enterprise Linux 6 release, n is an alphabetic character, so that the name of a multipath device might be mpatha or mpathb. In previous releases, n was an integer.”

As the system mounts the right disk after booting up, the problem should be with the user friendly name configuration in the initramfs. Extracting the initramfs file and checking the multipath configuration shows that the user_friendly_names parameter is enabled.
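A sketch of how the initramfs can be extracted for inspection (the work directory name is arbitrary; on RHEL6 the image is a gzipped cpio archive):

# mkdir initramfs
# cd initramfs && zcat /boot/initramfs-$(uname -r).img | cpio -idmv
# cd ..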

# cat initramfs/etc/multipath.conf
defaults {
user_friendly_names yes

Now the interesting point is that /etc/multipath/bindings is missing in the initramfs, but the file is present on the system. The /etc/multipath/bindings file is used to map each wwid to an alias.

# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpathc 3600508b1001c725ab3a5a49b0ad9848e
mpatha 36782bcb0005dd607000003b34ef072be
mpathb 36782bcb000627385000003ab4ef14636

An initramfs can be created using the dracut command.

# dracut -v -f test.img 2.6.32-131.0.15.el6.x86_64 2> /tmp/test.out

Building a test initramfs file shows that a newly created initramfs includes /etc/multipath/bindings.

# grep -ri bindings /tmp/test.out
I: Installing /etc/multipath/bindings

So this is what is happening:
When the system boots up, initramfs looks in /etc/multipath/bindings (inside the initramfs) for the aliases to use as user friendly names. It cannot find the file and falls back to the wwid. After the system boots up, /etc/multipath/bindings is present and the device names are changed to user friendly names.

It looks like the /etc/multipath/bindings file was created after the kernel installation and initrd generation. This might have happened because the multipath configuration was done after the kernel installation. Even if the system root device is not on multipath, it is possible for multipath to be included in the initrd; for example, this can happen if the system root device is on LVM. This should be the reason why multipath.conf was included in the initramfs but /etc/multipath/bindings was not.

To solve the issue we can rebuild the initrd and restart the system. Re-installing the existing kernel or installing a new kernel would also fix the issue, as the initrd would be rebuilt in both cases.

# dracut -v -f 2.6.32-131.0.15.el6.x86_64
Categories: DBA Blogs

Access Oracle GoldenGate JAgent XML from browser

DBASolved - Tue, 2015-01-06 09:26

There are many different ways of monitoring Oracle GoldenGate; I have posted about many of these in earlier blog posts.  Additionally, I have talked about the different ways of monitoring Oracle GoldenGate at a few conferences as well.  (The slides can be found on my slideshare site if wanted.)  In both my blog and presentations I highlight many different approaches; yet I forgot one that I think is really cool!  This one was shown to me by an Oracle Product Manager before Oracle Open World 2014 back in October (yes, I’m just now getting around to writing about it).

This approach uses the Oracle GoldenGate Manager port to view a user friendly version of the XML that is passed by the Oracle Monitor Agent (JAgent) to monitoring tools like Oracle Enterprise Manager or Oracle GoldenGate Director.  This approach will not work with older versions of the JAgent.

Note: The Oracle Monitor Agent (JAgent) used in this approach is version 12.1.3.0.  It can be found here.  

Note: There is a license requirement to use this approach since this is part of the Management Pack for Oracle GoldenGate.  Contact your local sales rep for more info.

After the Oracle Monitor Agent (JAgent) is configured for your environment, the XML can be accessed via any web browser.  Within my test environment, I have servers named OEL and FRED.  The URLs needed to view this cool feature are:

OEL:
http://oel.acme.com:15000/groups

FRED:
http://fred.acme.com:15000/groups

As you can see, by using the port number (15000) of the Manager process, I can tap directly into the information being fed to the management tools for monitoring.  The “groups” directory places you at the top level of the monitoring stack.  Clicking on a process group takes you down into that group and shows the additional items being monitored by the JAgent.

In this example, you are looking at the next level down for the process EXT on OEL.  At this point, you can see what is available: monitoring points, messages, status changes and associated files for the extract process.

OEL:
http://oel.acme.com:15000/groups/EXT


Digging further into the stack, you can see what files are associated with the process.  (This is an easy way to identify parameter files without having to go directly to the command line).

OEL:
http://oel.acme.com:15000/groups/EXT/files

OEL:
http://oel.acme.com:15000/groups/EXT/files/dirprm



As you can see, the new Oracle Monitor Agent (JAgent) provides another way of viewing your Oracle GoldenGate environment without needing direct access to the server.  Although this is a cool way of looking at an Oracle GoldenGate environment, it does not replace traditional monitoring approaches.

Cool Tip: The OS tool “curl” can be used to dump similar XML output to a file (shown to me by the product team).

$ curl --silent http://oel.acme.com:15000/registry | xmllint --format -

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/style/registry.xsl"?>
<registry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://oel.acme.com:15000/schema/registry.xsd">
<process name="PMP" type="4" status="3"/>
<process name="EXT" type="2" mode="1" status="3"/>
<process name="MGR" type="1" status="3"/>
</registry>

In my opinion, many of the complaints about the original version of the JAgent have been addressed with the latest release of the Oracle Monitor Agent (JAgent).  Give it a try!
 
Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Who is a DBA Leader?

Pakistan's First Oracle Blog - Tue, 2015-01-06 06:00
Sitting behind a big mahogany table, smoking a Cuban cigar, glaring at the person sitting across, one hand holding the receiver of a black phone to the right ear and the other hand holding a mobile phone to the left ear: that can be the image of a DBA boss in some white elephant government outfit, but it certainly cannot work in an organization made up of professionals like database administrators. And if such an image, or a similar one, is at work in any company, then that company is not great. It's as simple as that.






So who is a DBA leader? The obvious answer is the person who leads a team of database administrators. Sounds simple enough, but it takes a lot to be a true leader. There are many DBA bosses at various layers, and DBA managers at various layers, but being a DBA leader is very different. If you are a DBA leader, then you should be kinda worshiped. If you work in a team which has a DBA leader, then you are a very lucky person.

A DBA leader is one who leads by example. He walks the talk. He is a doer and not just a talker. He inspires, motivates, and energizes the team members to follow him and then exceed his example. For instance, when a client asks to fix a performance issue in a RAC cluster, the DBA leader first jumps in at the problem and starts collaborating with the team. He analyzes the problem and presents his potential solutions, or at least a line of action. He engages the team and listens to them. He won't just assign the problem to somebody, then disappear, and come back at 5pm asking about status. A DBA leader is not superhuman, so he will get problems of which he has no knowledge. He will research the problem with the team and will learn and grow with them. That approach electrifies the team.

A DBA leader is a grateful person. He cannot thank his team enough for doing a great job. When, under his able leadership, the team reaches a solution, then regardless of his own contribution, a DBA leader makes his team look awesome. That generates immense prestige for the DBA leader while making the team look great. The team will cherish the fact that the solution was reached thanks to the deep insights of the DBA leader, and yet the leader gave the credit to them.

A DBA leader is always there. He falls before the team falls, and doesn't become aloof when things don't go well. Things will go wrong and crises will come. In such situations, responsibility is shared, and a DBA leader doesn't shirk from it. In the team of a DBA leader, there are no scapegoats.

A leader of DBAs keeps both the big picture and the details in perspective at the same time. He provides the vision and lives the vision from the front. He learns and then he leads. He does all of this and does it superbly, and that is why he is the star and such a rare commodity, and that is why he is the DBA LEADER.

Categories: DBA Blogs

MySQL Locking

Kubilay Çilkara - Tue, 2015-01-06 03:53
MySQL and Row Level Locking? Or why are you getting the error:

ERROR 1205 (HY000) : Lock wait timeout exceeded; try restarting transaction

You get this error because your transaction has been waiting for a DML row lock, held by another transaction, for longer than the set limit. The limit to wait for a row lock is set by the innodb_lock_wait_timeout parameter and defaults to 50 seconds. If the lock is not acquired within 50 seconds, your statement fails with this error. We don't want to wait on locks for longer than 50 seconds anyway; throughput would be affected.
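A minimal sketch of checking the limit and reproducing the error (the accounts table and its values are hypothetical):

-- Check the current limit (defaults to 50 seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Session 1: take a row lock and keep the transaction open
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

-- Session 2: this blocks, and fails with ERROR 1205 once the timeout is exceeded
UPDATE accounts SET balance = balance - 10 WHERE id = 1;

-- The limit can be raised for the current session only
SET SESSION innodb_lock_wait_timeout = 120;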

And yes, MySQL uses row level locking in InnoDB. Since MySQL 5.1+ (around the time Oracle took over), it does row level locking in its InnoDB tables. That means only the rows which are selected ... FOR UPDATE ... are locked, and not the whole table. To see the threads (sessions) which are blocking other threads, and which queries are doing the blocking, use the following INFORMATION_SCHEMA dictionary SQL query as a DBA. You will be able to see the blockers and waiters of transactions waiting on locks. Run it as-is against INFORMATION_SCHEMA, with no modifications.

Use this SQL query to monitor locks and transactions, and note that the query will return data only when there are locks!


SELECT 
    r.trx_id AS waiting_trx_id,
    r.trx_mysql_thread_id AS waiting_thread,
    TIMESTAMPDIFF(SECOND,
        r.trx_wait_started,
        CURRENT_TIMESTAMP) AS wait_time,
    r.trx_query AS waiting_query,
    l.lock_table AS waiting_table_lock,
    b.trx_id AS blocking_trx_id,
    b.trx_mysql_thread_id AS blocking_thread,
    SUBSTRING(p.host,
        1,
        INSTR(p.host, ':') - 1) AS blocking_host,
    SUBSTRING(p.host,
        INSTR(p.host, ':') + 1) AS blocking_port,
    IF(p.command = 'Sleep', p.time, 0) AS idle_in_trx,
    b.trx_query AS blocking_query
FROM
    information_schema.innodb_lock_waits AS w
        INNER JOIN
    information_schema.innodb_trx AS b ON b.trx_id = w.blocking_trx_id
        INNER JOIN
    information_schema.innodb_trx AS r ON r.trx_id = w.requesting_trx_id
        INNER JOIN
    information_schema.innodb_locks AS l ON w.requested_lock_id = l.lock_id
        LEFT JOIN
    information_schema.PROCESSLIST AS p ON p.id = b.trx_mysql_thread_id
ORDER BY wait_time DESC;

Categories: DBA Blogs

WARNING!!! Maximum Load 800 lbs

Pythian Group - Mon, 2015-01-05 13:45

Is it just my recently discovered dumbness, or am I really on to something here? I just returned home from a ritzy shopping mall and had to ride down an elevator with a brimming shopping trolley filled with items which my companion miraculously managed to buy from every sale in every corner of that mall. Anyway, before I recall that shopping bill and weep copiously, I just wanted to share something which always bugs me when I use elevators.

Inside an elevator, there are only two things to do: either stare at the faces of the other riders and enjoy the embarrassment of shifting eyes from face to face, or look at the walls. Like others, I always select the latter option. One thing which every elevator says to me is something like ‘Maximum Load 800 lbs’, or some other number.

Now that not only baffles me but also creates a passive panic. I quickly calculate my weight, then frantically and stealthily glance at the other bodies inside, and then guess the total weight in there. More often than not, it turns out that the weight of the bodies plus their heavyset trollies exceeds or teeters on the edge of the warning lbs.

At this very moment, the other elevator hobbits notice my ashen face, violently trembling body, and sunken eyes, and follow them to the warning. Some get the point and follow the ritual of getting panicked, some try to recall the emergency ambulance number after watching us, and some just don’t give a flying heck.

My question here is why on Earth they write it inside the elevator, when it’s too late to do anything about it. Do they really mean it, or have they just assumed that the weight will never be reached? I personally know a group of people, including me, who for sure can easily break that weight record. They all live nearby and use the same shopping center, and the same elevator of course. There is every chance that one day they will all arrive at the same time and use that elevator. What happens then? Or am I just paranoid beyond cure? Maybe the elevator guys should write the warning outside instead, and place a weighing machine at the entry of the elevator. The machine would tally the weight as people move in, and close the door as soon as the limit is neared.

Interestingly enough, I have seen databases being used like elevators by applications and data loaders. Applications and data loaders, without any consideration of design, scalability, performance, or capacity management, just throng the database. Unlike elevators, I have seen databases break down or come to a grinding halt due to runaway, or in other words overweight, applications.

That panics me in the same way as elevators do.

Categories: DBA Blogs

Partner Webcast – Oracle Enterprise Metadata Management and its impact on Oracle Partner business

On the 20th of October 2014, Oracle announced the general availability of Oracle Enterprise Metadata Management (OEMM), Oracle's comprehensive Metadata Management technology for Data Governance....

We share our skills to maximize your revenue!
Categories: DBA Blogs

Script for Exadata I/O Report

Pakistan's First Oracle Blog - Mon, 2015-01-05 00:02
select function_name,sum(SMALL_READ_MEGABYTES)/1024 SM_Read_GB,
sum(SMALL_WRITE_MEGABYTES)/1024 SM_Write_GB,
sum(LARGE_READ_MEGABYTES)/1024 LG_Read_GB, sum(LARGE_WRITE_MEGABYTES)/1024 LG_Write_GB,
sum(LARGE_READ_REQS) LG_Read_Requests,
sum(LARGE_Write_REQS) LG_Write_Requests
from v$iostat_function_detail
group by function_name;
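In SQL*Plus, a couple of formatting commands keep the output readable (a minimal sketch):

set lines 200 pages 100
col function_name format a22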
Categories: DBA Blogs

Log Buffer #404, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Mon, 2015-01-05 00:01

With the new year already in high gear, bloggers are leaving no stone unturned to come up with innovative ideas. This Log Buffer edition is keeping pace with them, as always.
Oracle:

While playing with 12c Scott tried the upgrade to the DEFAULT column syntax that now allows sequences.
This is an age-old question and of course the answer depends on how you say “SQL”.
Happy New Year! Upgraded 12.1.0.1 Grid Infrastructure to 12.1.0.2 and applied the Oct 2014 PSU. Had an error during rootupgrade.sh as well, due to the ASM spfile being on disk instead of on ASM diskgroup.
If you (already) created your first Oracle Service Bus 12c application/project with SOAP webservices and tried to deploy it to your IntegratedWeblogic server you might be familiar with this error.
Using Drag-Drop functionality in af:treeTable to move data between nodes.

SQL Server:

Hadoop has been making a lot of noise in the Big Data world.
Let’s look at two different ways of creating an HDInsight Cluster: creating an HDInsight Cluster through the Azure Management Portal, and creating one through Windows Azure PowerShell.
Why you need test driven development.
SQL Server Data Import System to Alert For Missed Imports.
Create stunning visualizations with Power View in 20 minutes or less!

MySQL:

So assume you just uploaded the certificate you use to identify yourself to the MySQL server to Github or some other place it doesn’t belong…and there is no undelete.
MySQL Plugin for Oracle Enterprise Manager on VirtualBox: installation gotchas.
MariaDB slave restore using GTID & xtrabackup bug.
How small changes impact complex systems – MySQL example.
In this post, Louis talks about MHA GTID behavior; we test different cases and find something different from previous versions.

Also Published at Pythian Blog.
Categories: DBA Blogs

Partner Webcast – Extending Oracle Applications with Oracle Fusion Middleware Platform

Oracle Fusion Middleware technologies can help strengthen your investments in Oracle Applications, as we’ve discussed on the Oracle AppAdvantage program, by delivering a superior experience,...

We share our skills to maximize your revenue!
Categories: DBA Blogs
