
DBA Blogs

Trace Files -- 11b : Using DBMS_SQLDIAG to trace the Optimization of an SQL Statement

Hemant K Chitale - Sun, 2016-02-07 07:46
My previous blogpost covered using ALTER SYSTEM/SESSION to set tracing for a specific SQL_ID that has been determined in advance, so that the trace is captured when that SQL is executed at some time after the ALTER SYSTEM/SESSION.
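
For reference, that earlier approach was along these lines -- a minimal sketch using the 11g unified trace event syntax, shown here only for illustration and not as the exact text of that post :

alter system set events 'sql_trace[sql:b086mzzp82x7w]';
-- ... the statement runs at some later time and gets traced ...
alter system set events 'sql_trace[sql:b086mzzp82x7w] off';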

Here is a method for an SQL that has already been executed.

SQL> select count(*) from all_objects_many_list
2 where created > sysdate-365;

COUNT(*)
----------
25548

SQL> begin
2 dbms_sqldiag.dump_trace(p_sql_id=>'b086mzzp82x7w',
3 p_component=>'Optimizer',
4 p_file_id=>'OPT_TRACE_b086mzzp82x7w');
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2987_OPT_TRACE_b086mzzp82x7w.trc

SQL>


Let's review the trace file.

Registered qb: SEL$1 0x99b9000 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

SPM: statement not found in SMB
SPM: statement not a candidate for auto-capture

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=0b1t991khf449) -----
/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x6be0d530 145 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be0d530 12098 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be6b738 1229 package body SYS.DBMS_SQLDIAG
0x6becaea8 2 anonymous block
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
CBY - connect by
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
*************************************
PARAMETERS WITH ALTERED VALUES
******************************
Compilation Environment Dump
is_recur_flags = 8
Bug Fix Control Environment


*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 11.2.0.4
_optimizer_search_limit = 5
.... continued to a long list of parameters ........
....................................................
Bug Fix Control Environment
fix 3834770 = 1
fix 3746511 = enabled
fix 4519016 = enabled
.... continued to a long list of bug fixes ........
...................................................
***************************************
PARAMETERS IN OPT_PARAM HINT
****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 0b1t991khf449.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Begin: find best directive for query block SEL$1 (#0)
OJE: End: finding best directive for query block SEL$1 (#0)
CNT: Considering count(col) to count(*) on query block SEL$1 (#0)
*************************
Count(col) to Count(*) (CNT)
*************************
CNT: COUNT() to COUNT(*) not done.
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 0b1t991khf449.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365

apadrv-start sqlid=398332482954924169
:
call(in-use=2008, alloc=16344), compile(in-use=56240, alloc=59416), execution(in-use=2456, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "HEMANT"."ALL_OBJECTS_MANY_LIST" "ALL_OBJECTS_MANY_LIST" WHERE "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
kkoqbc: optimizing query block SEL$1 (#0)

:
call(in-use=2056, alloc=16344), compile(in-use=57320, alloc=59416), execution(in-use=2456, alloc=4032)

kkoqbc-subheap (create addr=0x7f4409c4fb18)
****************
QUERY BLOCK TEXT
****************
select count(*) from all_objects_many_list
where created > sysdate-365
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using NOWORKLOAD Stats
CPUSPEEDNW: 937 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: NO VALUE blocks (default is 8)


And here is the actual information about how the costing is done and the Execution Plan is determined.

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
#Rows: 7197952 #Blks: 98279 AvgRowLen: 93.00 ChainCnt: 0.00
Index Stats::
Index: ALL_OBJ_M_L_CRTD_NDX Col#: 7
LVLS: 2 #LB: 19093 #DK: 1232 LB/K: 15.00 DB/K: 351.00 CLUF: 432893.00
Access path analysis for ALL_OBJECTS_MANY_LIST
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1313133 Computed: 1313133.42 Non Adjusted: 1313133.42
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3486.00 resc_cpu: 287452140
ix_sel: 0.182432 ix_sel_with_filters: 0.182432
Cost: 3511.56 Resp: 3511.56 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3511.56 Degree: 1 Resp: 3511.56 Card: 1313133.42 Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3511.5623 card: 1313133.4203 bytes: 10505064
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3511.5623 Degree: 1 Card: 1313133.0000 Bytes: 10505064
Resc: 3511.5623 Resc_io: 3486.0000 Resc_cpu: 287452140
Resp: 3511.5623 Resp_io: 3486.0000 Resc_cpu: 287452140
kkoqbc-subheap (delete addr=0x7f4409c4fb18, in-use=26384, alloc=32840)
kkoqbc-end:
:
call(in-use=8664, alloc=49288), compile(in-use=59856, alloc=63560), execution(in-use=2456, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
:
call(in-use=8664, alloc=49288), compile(in-use=60768, alloc=63560), execution(in-use=2456, alloc=4032)


Starting SQL statement dump

user_id=87 user_name=HEMANT module=SQL*Plus action=
sql_id=0b1t991khf449 plan_hash_value=1689651126 problem_type=3
----- Current SQL Statement for this session (sql_id=0b1t991khf449) -----
/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x6be0d530 145 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be0d530 12098 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be6b738 1229 package body SYS.DBMS_SQLDIAG
0x6becaea8 2 anonymous block
sql_text_length=96
sql=/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3512 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1282K | 10M | 3512 | 00:00:43 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA

*** 2016-02-07 21:29:15.838
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

Optimizer state dump:
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 11.2.0.4
_optimizer_search_limit = 5
...... long list of optimizer parameters ...........
.....................................................
Bug Fix Control Environment
fix 3834770 = 1
fix 3746511 = enabled
fix 4519016 = enabled
...... long list of Bug Fixes .......................
.....................................................

Query Block Registry:
SEL$1 0x99b9000 (PARSER) [FINAL]

:
call(in-use=11728, alloc=49288), compile(in-use=90704, alloc=155568), execution(in-use=6408, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================


So, this is also a supported method.  The DBMS_SQLDIAG package is available from 11.2.
Unfortunately, however, DUMP_TRACE is not documented!  (See the 11.2 documentation on DBMS_SQLDIAG.)

Note : If the SQL statement and/or plan has already been aged out or purged from the Shared Pool, DUMP_TRACE would, obviously, not be able to print anything.
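
A quick way to confirm that the statement is still present in the Shared Pool before calling DUMP_TRACE is to look it up in V$SQL -- a simple illustrative query I am adding here, not part of the original demo :

select sql_id, child_number, plan_hash_value, last_active_time
from v$sql
where sql_id = 'b086mzzp82x7w';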

Categories: DBA Blogs

SQL On The Edge #8 – SQL Server Mobile Report Publisher

Pythian Group - Fri, 2016-02-05 14:07

One of the cool things about SQL Server is that it comes bundled with all the Business Intelligence services with the core database engine license. Reporting Services (which includes the Mobile Report Publisher), Analysis Services, and Integration Services are all integrated, and are ready to get going as soon as you install SQL Server. This has made it not only cost-efficient for many organizations to deploy BI, but it has also contributed to a wide adoption among the SQL Server customer base.

 

What is the Mobile Report Publisher?

Currently in preview, the Mobile Report Publisher is a new report and dashboard editor that publishes reports to Reporting Services, and it’s part of the bigger road map that Microsoft has for their Business Intelligence On-Premises story. We all know that in the cloud, Power BI has been getting a large amount of investment, but with on-premises there was a big gap that was getting wider and wider, until now.

With this upcoming SQL 2016 release, the Microsoft team is focusing on bringing Reporting Services into the future as a one-stop integrated solution for BI deployment so that cloud BI or a competitor’s product (*cough* Tableau *cough*) are not the only modern alternatives.

This Reporting Services refactor is the biggest change made to the product since SQL Server 2005 was released over 10 years ago. Leveraging the best parts of the Datazen acquisition, the Microsoft team is looking to provide a cohesive BI story that integrates web-based and mobile reports, Report Builder reports, and Power View style of modern visuals.


How is it different?

You’re probably used to working with SSRS Report Builder and are wondering what the idea is with Mobile Report Publisher. The demo below will make it very clear, but let’s just say that reports have now been split into two types:

1. Paginated reports: this is the ‘legacy’ style report that is built by Report Builder. It looks more flat, has the same controls as before, and is fully compatible with all the existing reports you have already deployed.

2. Mobile reports: Even though the name says ‘mobile’ these reports work just as well on desktop, tablet and mobile. If you’re familiar with web design, then the appropriate term would be that these reports are “responsive”. They can be done once, and will display nicely across devices. This is the new experience that the Mobile Report Publisher is targeting.

 

Where do we go from here?

Right now the current version is SQL Server 2016 CTP 3.2. We still have several different versions that will be released before SQL 2016 goes RTM later on this year. Currently, a lot of the planned functionality is not entirely there for the main SSRS interface, and you will be asked to switch to the “classic” view often.

The Mobile Report Publisher experience is also very much targeted towards creating a visualization, and there’s no data modeling to be found. You pretty much need to have your datasets pre-made and in the format that the tool will expect, or you won’t get very far. Hopefully, at some point the team will add modeling capabilities like the robust experience we already have in the Power BI Desktop tool.

If you want to practice and get your feet wet with this release, you can do it now, and for free, by visiting SQL Server 2016 CTP 3.2. You can also preview the Mobile Report Publisher. Like I said before, this is a big departure, so most people will want to train up to be ready when the new versions go live.

 

Demo

In the demo below I’m going to walk through the new SSRS interface preview in CTP 3.2, and then walk through creating a quick dashboard for both desktop and mobile consumption using the publisher. Let’s check it out!

Discover more about our expertise with SQL Server in the Cloud.

Categories: DBA Blogs

General troubleshooting lessons from recent Delphix issue

Bobby Durrett's DBA Blog - Fri, 2016-02-05 11:25

Delphix support helped me resolve an issue yesterday and the experience gave me the idea of writing this post about several general computer issue troubleshooting tips that I have learned over the years. Never mind that I ignored these lessons during this particular problem. This is more of a “do as I say” and not a “do as I do” story.  Actually, sometimes I remember these lessons. I didn’t do so well this week. But the several mistakes that I made resolving this recent Delphix issue motivated me to write this post and, if nothing else, to remind myself of the lessons I’ve learned in the past about how to resolve a computer problem.

Don’t panic!

I’m reminded of the friendly advice on the cover of the Hitchhiker’s Guide to the Galaxy: “Don’t panic!”. So, yesterday it was 4:30 pm. I had rebooted the Delphix virtual machine and then in a panic had the Unix team reboot the HP Unix target server. But, still I could not bring up any of the Delphix VDBs.  We had people coming over to our house for dinner that night and I was starting to worry that I would be working on this issue all night. I ended up getting out of the office by 5:30 pm and had a great dinner with friends. What was I so stressed about? Even the times that I have been up all night it didn’t kill me. Usually the all night issues lead to me learning things anyway.

Trust support

The primary mistake that I made was to get my mind fixed on a solution to the problem instead of working with Delphix support and trusting them to guide us to the solution. We had a number of system issues due to a recent network issue and I got my mind set on the idea that my Delphix issue was due to some network hangup. I feel sorry for our network team because it seems like the first thought people have any time there is some issue is that it is a “network issue”. I should know better. How many times have I been working on issues when everyone says it is a “database issue” and I’m annoyed because I know that the issue is somewhere else and they are not believing me when I point to things outside the database. Anyway, I opened a case with Delphix on Monday when I couldn’t get a VDB to come down. It just hung for 5 minutes until it gave me an error. I assumed that it was a network hangup and got fixated on rebooting the Delphix VM. Ack! Ultimately, I ended up working with two helpful and capable people in Delphix support and they resolved the issue which was not what I thought at all. There are times to disagree with support and push for your own solution but I did this too early in this case and I was dead wrong.

Keep it simple

I’ve heard people refer to Occam’s razor which I translate in computer terms to mean “look for simple problems first”. Instead of fixing my mind on some vague network issue where the hardware is not working properly, how about assuming that all the hardware and software is working normally and then thinking about what problems might cause my symptoms? I can’t remember how many times this has bit me. There is almost always some simple explanation.  In this case I had made a change to a Unix shell script that runs when someone logs in as the oracle user. This caused Delphix to no longer be able to do anything with the VDBs on that server. Oops! It was a simple blunder, no big deal. But I’m kicking myself for not first thinking about a simple problem like a script change instead of focusing on something more exotic.
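
I won’t reproduce the actual script here, but as a general illustration, a common way to keep a login script from breaking non-interactive connections -- the kind an automation tool makes over ssh -- is to bail out before anything interactive runs. This is a hedged sketch, not the script that was actually involved:

# at the top of the oracle user's .profile / .bash_profile
if [ ! -t 0 ]; then
  # no terminal attached (scp, ssh command, automation): skip the interactive bits
  return 0 2>/dev/null || exit 0
fi
# anything chatty or prompting stays below this guard
echo "Welcome $LOGNAME"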

What changed?

I found myself saying the same dumb thing that I’ve heard people say to me all the time: nothing changed. In this case I said something like “this has worked fine for 3 years now and nothing has changed”. The long-suffering and patient Delphix support folks never called me on this, but I was dead wrong. Something had to have changed for something that was working to stop working. I should have spent time looking at the various parts of our Delphix setup to see if anything had changed before I contacted support. All I had to do was see the timestamp on our login script and I would see that something had recently changed.

Understand how it all works

I think my Delphix skills are a little rusty. We just started a new expansion project to add new database sources to Delphix. It has been a couple of years since I’ve done any heavy configuration and troubleshooting. But I used to have a better feel for how all the pieces fit together. I should have thought about what must have gone on behind the scenes when I asked Delphix to stop a VDB and it hung for 5 minutes. What steps was it doing? Where in the process could the breakdown be occurring? Delphix support did follow this type of reasoning to find the issue. They manually tried some of the steps that the Delphix software would do automatically until they found the problem. If I had stopped to think about the pieces of the process I could have done the same. This has been a powerful approach to solving problems all through my career. I think about resolving PeopleSoft issues. It just helps to understand how things work. For example, if you understand how the PeopleSoft login process works you can debug login issues by checking each step of the process for possible issues. The same is true for Oracle logins from clients. In general, the more you understand all the pieces of a computer system, down to the transistors on the chips, the better chance you have of visualizing where the problem might be.

Well, I can’t think of any other pearls of wisdom from this experience but I thought I would write these down while it was on my mind. Plus, I go on call Monday morning so I need to keep these in mind as I resolve any upcoming issues. Thanks to Delphix support for their good work on this issue.

Categories: DBA Blogs

Network multicast support on Azure

Pythian Group - Thu, 2016-02-04 15:07

 

Today I would like to talk about multicast support on Azure, and how to make it work. While it’s not the most commonly required feature in a virtual environment, some applications nevertheless require multicast support on the network. The perfect example is Oracle RAC, where multicast is required starting from version 11.2.0.2. In Oracle RAC, multicast is used for highly available IP (HAIP) on the interconnect. If you’re thinking about building a training environment with Oracle RAC on Azure, you will need multicast support.

How can we check if it works? First, you can check if it’s supported by your kernel using the netstat utility.

[root@oradb5 ~]# netstat -g | grep mcast
lo 1 all-systems.mcast.net
eth0 1 all-systems.mcast.net
eth1 1 all-systems.mcast.net

You can see that all my interfaces are ready for multicast support. That’s fine, but how can we check if it works on our network? We can use either the iperf utility or a Perl script created by Oracle. You can download the script from Oracle support, if you have an account, from the Oracle note “How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)”.
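
If you would rather use iperf, a rough equivalent of the multicast check looks like this -- a hedged sketch using iperf2 options and an arbitrary multicast group; run the listener on one node and the sender on the other:

# on oradb6: join the multicast group and listen for UDP traffic
iperf -s -u -B 230.0.1.0 -i 1
# on oradb5: send UDP traffic to the same group for a few seconds (TTL 32)
iperf -c 230.0.1.0 -u -T 32 -t 3 -i 1
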
Here’s what I got: I have two A3-size Azure VMs with Oracle Linux 6, with two network interfaces each. The VM hostnames were oradb5 and oradb6. You can check out my blog on how to make an Azure VM with two network interfaces here. The second interface, eth1, is the one where we are going to enable multicast.

I ran the mcasttest.pl script and saw that:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i eth1
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 15:05:23 | Multicast Failed for eth1 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 15:05:53 | Multicast Failed for eth1 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

The output clearly tells us that we don’t have multicast support for either the 230.0.1.0 or the 224.0.0.251 multicast address.

What does the Virtual Network FAQ for Azure tell us about it?
Here is the answer:

Do VNets support multicast or broadcast?
No. We do not support multicast or broadcast.
What protocols can I use within VNets?
You can use standard IP-based protocols within VNets. However, multicast, broadcast, IP-in-IP encapsulated packets and Generic Routing Encapsulation (GRE) packets are blocked within VNets. Standard protocols that work include:
* TCP
* UDP
* ICMP

So, we need a workaround. Luckily we have one. Some time ago, while discussing RAC on Amazon AWS, I was pointed to an article written by my former colleague Jeremiah Wilton, where he had described how he worked around the same problem on Amazon. You can read the article here. I decided to give it a try and see if it works for Azure.

We are going to use n2n, a peer-to-peer VPN provided by ntop.
They have mentioned that development of the product has been put on hold, but the tool is still widely used and provides an acceptable solution for our problem. I used Stuart Buckell’s article on how to set it up and it worked for me.
We could just use an already precompiled package, but compiling the utility from the sources provides us with an opportunity to disable encryption and compression, or change any other options.

Here is what I’ve done:
Installed kernel headers to be able to compile n2n:

[root@oradb5 n2n_v2]# yum install kernel-headers
ol6_UEK_latest | 1.2 kB 00:00
ol6_u4_base | 1.4 kB 00:00
ol6_u4_base/primary | 2.7 MB 00:00
ol6_u4_base 8396/8396
Setting up Install Process
Resolving Dependencies
....

Installed subversion utility:

[root@oradb5 /]# yum install subversion.x86_64
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package subversion.x86_64 0:1.6.11-15.el6_7 will be installed
.............

Downloaded the sources using svn:

[root@oradb5 /]# svn co https://svn.ntop.org/svn/ntop/trunk/n2n
Error validating server certificate for 'https://svn.ntop.org:443':
- The certificate hostname does not match.
Certificate information:
- Hostname: shop.ntop.org
- Valid: from Sun, 15 Nov 2015 00:00:00 GMT until Wed, 14 Nov 2018 23:59:59 GMT
- Issuer: COMODO CA Limited, Salford, Greater Manchester, GB
- Fingerprint: fb:a6:ff:a7:58:f3:9d:54:24:45:e5:a0:c4:04:18:d5:58:91:e0:34
(R)eject, accept (t)emporarily or accept (p)ermanently? p
A n2n/n2n_v1
A n2n/n2n_v1/lzodefs.h
A n2n/n2n_v1/README
...............

Disabled encryption and compression using this article.
Changed directory to n2n/n2n_v2 and compiled it.

[root@oradb5 n2n_v2]# make
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n_keyfile.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c wire.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c minilzo.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c twofish.c
..............................

Copied the files to the /usr/sbin directory on both of my servers (oradb5 and oradb6):

[root@oradb5 n2n_v2]# cp supernode /usr/sbin/
[root@oradb5 n2n_v2]# cp edge /usr/sbin/

Started the supernode daemon on the first node. We only need it running on one machine, and it can even be a totally different machine. I am using port 1200 for it:

[root@oradb5 ~]# supernode -l 1200
[root@oradb5 ~]#

Started the edge on both servers. On oradb5 I am creating a sub-interface with IP 192.168.1.1 and providing some parameters:
-E – Accept multicast MAC addresses (default=drop).
-r – Enable packet forwarding through n2n community.
-c – n2n community name the edge belongs to.
-l – our supernode address:port.

[root@oradb5 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

[root@oradb6 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r

So we get an edge0 interface on both nodes and can use it for connections requiring multicast:

[root@oradb5 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 52:CD:8E:20:3D:E5
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::50cd:8eff:fe20:3de5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:592 (592.0 b)

[root@oradb5 ~]#

On the second box:

[root@oradb6 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 7E:B1:F1:41:7B:B7
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7cb1:f1ff:fe41:7bb7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:362 (362.0 b)

[root@oradb6 ~]#

Now we can run our multicast test again for the edge0 interface and see how it works.

[oracle@oradb5 ~]$ cd mcasttest/
[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

As you can see, the test has completed successfully. So, the edge0 interface can be used now for any connections requiring multicast support.
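
If you want the tunnel to come back after a reboot, one simple option is to start the very same commands from /etc/rc.d/rc.local on each node -- a hedged sketch that just reuses the commands shown above:

# /etc/rc.d/rc.local on oradb5 (runs the supernode and its edge)
/usr/sbin/supernode -l 1200
/usr/sbin/edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

# /etc/rc.d/rc.local on oradb6 (edge only)
/usr/sbin/edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r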

In my next article I will show you how to create an Oracle RAC on Azure using the created multicast interface and a shared storage.

Categories: DBA Blogs

Cloning 10.2.0.3 Oracle Home on fully patched 11.31 HP-UX hangs

Bobby Durrett's DBA Blog - Thu, 2016-02-04 11:38

I based this blog post on information that I learned from this Oracle Support document:

Runinstaller And Emctl Do Not Work After Upgrading HP-UX 11.31 To 11.31 Update3 (Sep 2008) (Doc ID 780102.1)

My situation was slightly different from what Oracle’s note describes so I thought it would be helpful to document what I found.

In my case I am cloning an Oracle 10.2.0.3 home onto a fully patched HP-UX 11.31 server. I have used this same clone process on Oracle 11.1, 11.2, and 12.1 Oracle Homes with no issues. The symptom is that the 10.2 clone process just hangs with no helpful messages.

I searched Oracle’s support site and the web for issues with 10.2 cloning and could not find anything that matched my symptoms. I then decided to give up on cloning and try to install the base 10.2.0.1 binaries and then patch to match the home that I was trying to clone. The 10.2.0.1 install also hung. But I know that the 10.2.0.1 install works, since we have used it on many other similar systems; however, those were on HP-UX 11.23 and not the fully patched HP-UX 11.31. So, I searched for installer issues on 11.31 and 10.2 and found the Oracle document listed above.

Evidently there is some bug with the JDK that Oracle included in 10.2 so that it does not work with HP-UX 11.31 with the current patches. A later version of Java resolves the issue.

Now that I understood the issue I decided to go back to the clone and try to apply the recommendations from the Oracle note, even though it doesn’t mention cloning.

The Oracle note suggests adding the –jreLoc /opt/java1.4 option to the runInstaller command line. The only catch is that my HP system did not have /opt/java1.4.  The oldest java we have installed is in /opt/java1.5. So, I tried the clone with the -jreLoc /opt/java1.5 option and it got almost to the end of the clone before it hung doing some emctl step.  Then I realized that I needed to follow the steps in the Oracle note to rename the Oracle Home’s jdk directory and set up a link to the Java1.5 directory.  So, I did these steps to point to the correct jdk directory:

cd $ORACLE_HOME

mv jdk jdk.orig

ln -s /opt/java1.5 jdk
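
Before re-running the clone it is worth a quick sanity check that the link points at a working JDK -- a small verification step I am adding here, not part of the Oracle note:

ls -ld $ORACLE_HOME/jdk              # should show the symlink to /opt/java1.5
$ORACLE_HOME/jdk/bin/java -version   # should report a 1.5 JVM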

Then I ran the clone with this command line:

$ORACLE_HOME/oui/bin/runInstaller -jreLoc /opt/java1.5 -clone -silent ORACLE_HOME=$ORACLE_HOME ORACLE_BASE=$ORACLE_BASE ORACLE_HOME_NAME=$ORACLE_HOME_NAME

Success!

It wasn’t that hard to apply the note to the clone situation but I thought it was worth blogging it. If someone googles runInstaller clone hang 10.2 11.31 and needs the solution they will find it.

Of course, I may be the only person in the world cloning a 10.2 Oracle Home on an HP-UX Itanium 11.31 system, but it’s here if someone needs it.

Bobby

Categories: DBA Blogs

An UNDO in a PDB in Oracle 12c?

Pythian Group - Thu, 2016-02-04 10:08

 

According to the Oracle 12cR1 documentation and concepts, it is 100% clear that there can be only one UNDO tablespace in a multitenant architecture and it is at CDB level; thus, a PDB cannot have any UNDO tablespace.

Are we really sure about that? Let’s test it!

First, we need a PDB with few tablespaces:

 

FRED_PDB> select NAME, OPEN_MODE, CON_ID from v$pdbs ;

NAME OPEN_MODE CON_ID
-------------------------------------------------- ---------- ----------
FRED_PDB READ WRITE 4

FRED_PDB> select tablespace_name from dba_tablespaces ;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TBS_DATA

5 rows selected.

FRED_PDB> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
FRED_PDB>

 

There we have an UNDO tablespace named UNDOTBS1 at CDB level and no UNDO at PDB level. Let’s try to create one :

FRED_CDB> create undo tablespace MY_PDB_UNDO ;

Tablespace created.

FRED_CDB>

 

It worked! Is the Oracle documentation wrong? Let’s verify this weird successful UNDO tablespace creation:

FRED_PDB> select tablespace_name from dba_tablespaces where tablespace_name like '%UNDO%' ;

no rows selected

FRED_PDB> select tablespace_name from dba_tablespaces

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TBS_DATA

5 rows selected.

FRED_PDB>

 

No UNDO tablespace has in fact been created, even though no error message has been raised by Oracle. Digging in the documentation, this is not a bug but a feature. Indeed, it is well specified that:

When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.
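
A simple way to see for yourself, from the root container, where the undo really lives is CDB_TABLESPACES -- a quick illustrative check, not taken from the documentation :

-- run as a common user in CDB$ROOT
select con_id, tablespace_name
from cdb_tablespaces
where contents = 'UNDO';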

 

Please note that this is the behavior of the 12cR1 release; from my side, I think that this is a “not yet” feature and we should see some real UNDO tablespaces in PDBs in the next release(s)!

Discover more about our expertise in Oracle

Categories: DBA Blogs

Expert Insights: Oracle NET Troubleshooting for DBAs

Pythian Group - Thu, 2016-02-04 09:21

 

Troubleshooting connection issues with Oracle SQL*Net can be difficult at times due to the many options that can be taken during configuration. One of the options is where the file tnsnames.ora may be found. There are multiple locations available, and at times there is justification for having more than one copy of the file.

Perhaps there is a hybrid database naming configuration. Say there are a number of company-wide databases that are defined in Oracle OID, OpenLDAP or Active Directory, while local test databases are defined in one or more local tnsnames.ora files.

When one of the databases appears to no longer be available, even though you are quite sure it should be available, it’s good to know the default search order used by Oracle to resolve the name.

The Oracle names resolution default search order for Linux and Windows is explained here:

 

 

But wait, there’s more!

You may know that on Linux and Unix systems tnsnames.ora can be placed in the /etc directory.

Do you know just what happens when /etc/tnsnames.ora is used?  Learn that and more by watching the rest of the presentation.
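
If you want to check empirically which copy of tnsnames.ora a tool actually reads on your own Linux system, tracing the file opens is a quick way -- a hedged sketch, where "mydb" is just a placeholder alias:

strace -f -e trace=open,openat tnsping mydb 2>&1 | grep -i tnsnames.ora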

 

Discover more about our expertise in Oracle.

Categories: DBA Blogs

RHEL vs OEL: Back in The Cold War?

Pythian Group - Thu, 2016-02-04 07:32

 

I recently encountered a déjà vu on a client system, something I’ve seen repeatedly over the last couple of years. I’ve decided to write about it to prevent others from tumbling down the same rabbit hole.

The setup: a Red Hat Enterprise Linux 6 system with a Red Hat support contract on it. A DBA had installed Oracle’s oracle-rdbms-server-12cR1-preinstall RPM package. The DBA was doing that based on Oracle support note “Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server (Doc ID 728346.1)”, which in the main section simply states:

“For Oracle database 12cR1 running on OL6/RHEL6,use command below to install all packages required for running Oracle software and resolve all dependencies.

yum install oracle-rdbms-server-12cR1-preinstall”

I’ve got a bit of a problem with how that note was written. Let’s take a look at what exactly happens to your RHEL 6 system if you do that. First of all, you have to add Oracle’s yum repo to your yum configuration in order to be able to install that package. I’m a firm believer that you should never mix repositories of different Linux distributions on a production server, but I digress.

Then, when you actually install the RPM:

Resolving Dependencies
--> Running transaction check
---> Package oracle-rdbms-server-12cR1-preinstall.x86_64 0:1.0-14.el6 will be installed
--> Processing Dependency: kernel-uek for package: oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64
--> Processing Dependency: libaio-devel for package: oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64
--> Running transaction check
---> Package kernel-uek.x86_64 0:2.6.39-400.264.13.el6uek will be installed
--> Processing Dependency: kernel-uek-firmware = 2.6.39-400.264.13.el6uek for package: kernel-uek-2.6.39-400.264.13.el6uek.x86_64
---> Package libaio-devel.x86_64 0:0.3.107-10.el6 will be installed
--> Running transaction check
---> Package kernel-uek-firmware.noarch 0:2.6.39-400.264.13.el6uek will be installed
--> Finished Dependency Resolution

Some DBAs just skip over that section entirely and don’t pay attention to it, but right there Oracle has just installed their own kernel on a RHEL6 system. It’s also been activated and marked as default in grub.conf (which is the norm when installing a kernel RPM):

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-400.37.15.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-400.37.15.el6uek.x86_64 ro root=.....
initrd /initramfs-2.6.32-400.37.15.el6uek.x86_64.img

Let that sink in for a minute.

Leaving the system as it is, we’d go ahead with the installation of the Oracle software, start running our database and go into production. If at any point in the future we reboot our server, or if it crashes, we’d suddenly be running the UEK kernel and no longer the Red Hat kernel. There’s also a fairly ugly can of worms awaiting the DBA in question when the SA sees that someone has changed the default kernel.
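
A quick way to check whether this has already happened on one of your systems -- a small sketch for RHEL 6 with legacy GRUB, added here for illustration:

uname -r                                          # kernel currently running
rpm -qa | grep -E '^kernel(-uek)?-[0-9]'          # Red Hat vs UEK kernels installed
grep -E '^(default|title)' /boot/grub/grub.conf   # which entry boots next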

But the real question is, what would running a different kernel do to us?

Well, Red Hat has an article that’s locked behind a subscriber-only wall. In a nutshell the message it contains is that third party packages are not supported by Red Hat, and third party kernels render all issues unsupported. Fair enough, that makes perfect sense, doesn’t it?

Thus, in essence, we’ve just voided support for our server. If we hit any issue, we’d have to first clean out any Oracle packages that have replaced Red Hat’s – including the kernel – and reboot the machine back into a clean state, or we’d have to go to the maker of our custom kernel for support. That’s clearly not something you’d want to do during a critical issue on a production server.

If we read the aforementioned Oracle support note a bit more closely, way at the bottom in “Remarks”, as if it’s of no importance, we see this:

“RHEL systems registered with RHN or without an registration with an update channel and which should remain RedHat, can generate a primary list of missing dependencies (manually download the oracle-validated rpm package on public-yum):
# rpm -Uhv oracle-validated–.rpm”

“RHEL systems which should remain RedHat”.

Wait, what?

Doesn’t this basically mean that the note isn’t really telling us how to prepare for Oracle database software installation, but instead it’s telling us how to convert from RHEL to OEL? How to move our support contract over to Oracle?

Also note how the “Applies to” section in that particular note specifically does not include RHEL? It simply says “Linux”. This somehow reminds me of a certain horse that a certain city got as a present at some point in the distant past. Neatly packaged and easy to use, but potentially severe long term impact if you’re installing the package.

I’d like to appeal to both Oracle and Red Hat at this point, please folks, make this more clear. Both sides could do better here. There’s really no reason why solution 55413 should be locked behind a pay wall. It’s often the DBAs who are dealing with these packages to prep for a software install, and they often don’t have access to this content. On a similar note, support note 728346.1 also could be written in a much clearer manner to prevent this sort of confusion. Why is the kernel a dependency of that preinstall RPM? There’s absolutely no need for that.

We’re not in a cold war, are we?

TLDR; Don’t mix repositories of different distributions. Don’t install oracle-rdbms-server-12cR1-preinstall on RHEL unless you’re willing to deal with the consequences.
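
If you only want to see what the preinstall package would drag onto a system, without installing anything, you can list its dependencies first -- a hedged sketch:

# show what oracle-rdbms-server-12cR1-preinstall requires and which packages would satisfy it
yum deplist oracle-rdbms-server-12cR1-preinstall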

Discover more about our experience in Oracle.

Categories: DBA Blogs

Log Buffer #458: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-02-01 10:38

This Log Buffer Edition covers various useful tips and tricks from blogs for Oracle, SQL Server and MySQL.

Oracle:

  • pstack(or thread stack) for Windows to diagnose Firefox high CPU usage
  • With the ever-changing browser landscape, we needed to make some tough decisions as to which browsers and versions are going to be deemed “supported” for Oracle Application Express.  There isn’t enough time and money to support all browsers and all versions, each with different bugs and varying levels of support of standards.
  • Are you effectively using Java SE 8 streams for data processing? Introduced in Java 8, streams allow you to process data in a declarative way and leverage multi-core architectures without writing multithreaded code.
  • If you are upgrading but would like to increase or decrease the number of data sources you must do so when installing the latest version.
  • When talking about BPM security, you need to know the about certain set of information and where those information will come from etc.

SQL Server:

  • Having fun with PARSENAME (SQL Spackle)
  • Time and Space: How to Monitor Drive Space in SQL Server
  • Application Security with Azure Key Vault
  • Declarative SQL: Using CHECK() & DEFAULT
  • Microsoft SQL Server 2016 Public Preview Boosts Database Security

MySQL:

  • Have you heard the news? As of MySQL 5.7.8, MySQL includes a new JavaScript Object Notation (JSON) data type that enables more efficient access to JSON-encoded data.
  • Oracle MySQL 5.7 Database Nears General Availability
  • Most of you know, that it is possible to synchronize MySQL and MariaDB servers using replication. But with the latest releases, it is also possible to use more than just two servers as a multi-master setup.
  • Transport Layer Security (TLS, also often referred to as SSL) is an important component of a secure MySQL deployment, but the complexities of properly generating the necessary key material and configuring the server dissuaded many users from completing this task.
  • Managing MySQL Replication for High Availability

 

Learn more about Pythian’s expertise in Oracle , SQL Server & MySQL.

Categories: DBA Blogs

Differentiate with Oracle Cloud: Transformative OPN Cloud Program Launch - 01 Feb 2016

Customers are looking for the right partners to deliver solutions to help streamline and grow their businesses. Oracle PartnerNetwork has evolved to offer new opportunities for partners to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Trace Files -- 11 : Tracing the Optimization of an SQL Statement

Hemant K Chitale - Sun, 2016-01-31 07:53
So far, the previous examples have been on tracing the Execution of SQL statements and/or the Execution Plan used.

But what if you want to trace the Optimization -- identify how the Optimizer determined an "optimal" execution plan -- of an SQL statement?

Note: Pre-11g methods involved Event 10053.  But as with Event 10046, I prefer to use methods where I don't have to use an Event Number but a Name.  So, here I am not demonstrating the Event 10053 method itself.

Let's assume that there is a particular SQL identified as SQL_ID='b086mzzp82x7w' for which we need to know not just the Execution Plan but also how Oracle arrived at the Execution Plan.

Here's one way :

SQL> alter session set events 'trace[rdbms.SQL_Optimizer.*][sql:b086mzzp82x7w]';

Session altered.

SQL> select 'abc' from dual;

'AB
---
abc

SQL> select count(*) from small_insert;

COUNT(*)
----------
4

SQL> select count(*) from all_objects_many_list
2 where created > sysdate-365;

COUNT(*)
----------
25548

SQL> select count(*) from all_objects_many_list;

COUNT(*)
----------
7254201

SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3102.trc

SQL>

I executed multiple SQLs in the session but am interested in the Optimization of only one SQL.  Note how the specific SQL_ID is specified in the ALTER SESSION SET EVENTS command.
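
When you are done, the same event syntax can be used to switch the trace off again (I believe the 'off' form below is the correct one, although I haven't shown it in this session) :

alter session set events 'trace[rdbms.SQL_Optimizer.*] off';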

The resultant trace file is very long, with a listing of all the instance/session parameters (hidden and public), all the Bug Fixes, and the costing done for the SQL.  The trace file captures only the SQL of interest; all the other SQLs in the same session are *not* in the trace file.

Here is an extract from the trace file :

Registered qb: SEL$1 0x2173aea0 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

SPM: statement not found in SMB

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=b086mzzp82x7w) -----
select count(*) from all_objects_many_list
where created > sysdate-365
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
CBY - connect by
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
*************************************
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
Bug Fix Control Environment


*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
Compilation Environment Dump
........... long list of parameters and their values .........
..............................................................
.. followed by ...
........... long list of Bug Fixes that are enabled ..........
..............................................................


***************************************
PARAMETERS IN OPT_PARAM HINT
****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for b086mzzp82x7w.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Begin: find best directive for query block SEL$1 (#0)
OJE: End: finding best directive for query block SEL$1 (#0)
CNT: Considering count(col) to count(*) on query block SEL$1 (#0)
*************************
Count(col) to Count(*) (CNT)
*************************
CNT: COUNT() to COUNT(*) not done.
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for b086mzzp82x7w.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365

apadrv-start sqlid=12691376846034531580
:
call(in-use=2008, alloc=16344), compile(in-use=56240, alloc=58632), execution(in-use=2504, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "HEMANT"."ALL_OBJECTS_MANY_LIST" "ALL_OBJECTS_MANY_LIST" WHERE "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
kkoqbc: optimizing query block SEL$1 (#0)

:
call(in-use=2056, alloc=16344), compile(in-use=57184, alloc=58632), execution(in-use=2504, alloc=4032)

kkoqbc-subheap (create addr=0x7f5f216ff9d0)
****************
QUERY BLOCK TEXT
****************
select count(*) from all_objects_many_list
where created > sysdate-365
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using NOWORKLOAD Stats
CPUSPEEDNW: 937 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: NO VALUE blocks (default is 8)

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
#Rows: 7197952 #Blks: 98279 AvgRowLen: 93.00 ChainCnt: 0.00
Index Stats::
Index: ALL_OBJ_M_L_CRTD_NDX Col#: 7
LVLS: 2 #LB: 19093 #DK: 1232 LB/K: 15.00 DB/K: 351.00 CLUF: 432893.00
Access path analysis for ALL_OBJECTS_MANY_LIST
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1346076 Computed: 1346075.60 Non Adjusted: 1346075.60
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3573.00 resc_cpu: 294660105
ix_sel: 0.187008 ix_sel_with_filters: 0.187008
Cost: 3599.20 Resp: 3599.20 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3599.20 Degree: 1 Resp: 3599.20 Card: 1346075.60 Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3599.2033 card: 1346075.6041 bytes: 10768608
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3599.2033 Degree: 1 Card: 1346076.0000 Bytes: 10768608
Resc: 3599.2033 Resc_io: 3573.0000 Resc_cpu: 294660105
Resp: 3599.2033 Resp_io: 3573.0000 Resc_cpu: 294660105
kkoqbc-subheap (delete addr=0x7f5f216ff9d0, in-use=26384, alloc=32840)
kkoqbc-end:
:
call(in-use=8664, alloc=49288), compile(in-use=59704, alloc=62776), execution(in-use=2504, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
:
call(in-use=8664, alloc=49288), compile(in-use=60616, alloc=62776), execution(in-use=2504, alloc=4032)


Starting SQL statement dump

user_id=87 user_name=HEMANT module=SQL*Plus action=
sql_id=b086mzzp82x7w plan_hash_value=1689651126 problem_type=3
----- Current SQL Statement for this session (sql_id=b086mzzp82x7w) -----
select count(*) from all_objects_many_list
where created > sysdate-365
sql_text_length=71
sql=select count(*) from all_objects_many_list
where created > sysdate-365
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3599 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1315K | 10M | 3599 | 00:00:44 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

Optimizer state dump:
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
........... long list of parameters and their values .........
..............................................................
.. followed by ...
........... long list of Bug Fixes that are enabled ..........
..............................................................

Query Block Registry:
SEL$1 0x2173aea0 (PARSER) [FINAL]

:
call(in-use=11728, alloc=49288), compile(in-use=90576, alloc=152120), execution(in-use=6440, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================

The trace file captured only the SQL of interest.  It also shows all the relevant instance/session parameters and Bug Fixes (these are very long lists, so I have not reproduced them in their entirety).
Note : The listing of parameters and Bug Fixes is very important : if you see different execution plans for the same SQL in two different databases, you must compare the parameters and bug fixes in both and confirm that any differences between them are not relevant.
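
If you need to make that comparison without wading through two full trace files, the optimizer environment and fix-control views are a convenient starting point.  A minimal sketch (these are the standard 11g views; do check the exact column list in your own version) :

REM optimizer parameters in effect at the instance level
select name, value, isdefault
from v$sys_optimizer_env
order by name;

REM optimizer parameters in effect for one session (replace 123 with the SID of interest)
select name, value, isdefault
from v$ses_optimizer_env
where sid = 123
order by name;

REM bug fixes (fix controls) enabled at the instance level
select bugno, value, description
from v$system_fix_control
order by bugno;

Running these in both databases and diffing the output is usually much quicker than comparing the parameter sections of the trace files by eye.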

From the trace file, we can determine that this is the Execution Plan chosen :
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3599 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1315K | 10M | 3599 | 00:00:44 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/
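
Incidentally, if the cursor is still in the Shared Pool, the same plan and Outline Data can be retrieved without generating a trace at all.  A minimal sketch using DBMS_XPLAN (the '+OUTLINE' format modifier adds the outline hints to the standard plan output) :

select * from table(
  dbms_xplan.display_cursor(
    sql_id          => 'b086mzzp82x7w',
    cursor_child_no => 0,
    format          => 'TYPICAL +OUTLINE'));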

The computation of Cost appears here :
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1346076 Computed: 1346075.60 Non Adjusted: 1346075.60
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3573.00 resc_cpu: 294660105
ix_sel: 0.187008 ix_sel_with_filters: 0.187008
Cost: 3599.20 Resp: 3599.20 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3599.20 Degree: 1 Resp: 3599.20 Card: 1346075.60 Bytes: 0

Note how the different Access Paths (Table Scan, Index FFS, IndexOnly, IndexRange) are all listed.  The Best is shown as an IndexRange on ALL_OBJ_M_L_CRTD_NDX with a Cost of 3599.20.  More details appear here :
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3599.2033 card: 1346075.6041 bytes: 10768608
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3599.2033 Degree: 1 Card: 1346076.0000 Bytes: 10768608
Resc: 3599.2033 Resc_io: 3573.0000 Resc_cpu: 294660105
Resp: 3599.2033 Resp_io: 3573.0000 Resc_cpu: 294660105
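
As a rough cross-check (this is the textbook CBO arithmetic rather than anything printed in the trace, and it assumes the default 8KB block size), the selectivity of 0.187008 is a simple linear interpolation of the predicate over the Min/Max range of CREATED, and the IndexOnly cost can be rebuilt from BLEVEL, the leaf block count and the NOWORKLOAD SREADTIM of 10 + 8192/4096 = 12ms :

select 2 + ceil(0.187008 * 19093)                          as approx_resc_io,   -- BLEVEL + ceil(ix_sel * #LB) = 3573
       round(294660105 / (937 * 1000 * 12), 1)             as approx_cpu_units, -- Resc_cpu / (CPUSPEEDNW * 1000 * SREADTIM) = 26.2
       round(2 + ceil(0.187008 * 19093)
               + 294660105 / (937 * 1000 * 12), 2)         as approx_cost       -- roughly 3599.2
from dual;

The result (3573 + 26.2 = 3599.2) agrees with the Final cost above, to within rounding of the displayed selectivity.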

This is a very detailed listing for an SQL query on a single Table (no joins) with a single candidate Index.  Try running this against an SQL that joins two or more tables, each with more than one candidate Index, and see how much more complicated the Cost calculation becomes.


Note : To disable tracing in the session, I would run :

ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';


This sort of tracing can also be done with ALTER SYSTEM if you are not sure which session will be running the SQL_ID of interest and cannot interactively invoke the SQL from a private session.  
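
If I had enabled it with ALTER SYSTEM, the corresponding commands (using the same event-trace syntax as in the previous post; do verify it against your own version before relying on it) would look something like :

ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:b086mzzp82x7w]';

ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';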
.
.
.



Categories: DBA Blogs

Is the DBA Career Dead?

Pythian Group - Thu, 2016-01-28 16:00

 

With the increased popularity of cloud services, one of the questions that I often receive is: “Is the DBA career dying? What will you do for a living in the future?” In this article I will give my personal opinion about the future of our beloved profession, and try to calm down those that have already started to look for another career.

The first thing that I want to point out is that when we started to work in IT, we knew it was a career different from most others out there. It is dynamic and exciting by nature, reinventing itself all the time, with new technologies showing up every single year and changing the entire landscape. We have chosen a field that pushes us to keep studying, learning and evolving, and this is the kind of mindset I want you to have while reading this article.

The Database Administrator role is not going anywhere. We are not an endangered species and won’t become one in the foreseeable future. Cloud is not our enemy. The data market is just evolving, and the cloud is bringing a lot of new things that will give us more power and more options.

In today’s market we have two very common problems:

  1. Companies can’t find enough people to fill in all positions.

We all know this one. I’m sure we all know several companies that have an open position for months, have interviewed dozens of people, and just can’t find anyone that suits the position.

  2. Companies want to keep their costs as low as possible.

Companies want to make money, and we had a big worldwide crisis just a few years ago that we are still recovering from. This means companies are trying to find ways to improve their productivity, while keeping their costs as low as possible.

 

In a scenario like this, the cloud offerings come as an aid to both improve our productivity as a DBA, and to help the company save money. Let’s think for a while about how many tasks we perform daily that don’t bring real value for the business. No doubt that when we’re planning the new high availability solution, or doing performance tuning on that slow query we can see the value that it will bring to the company. In the first case, this will guarantee that all applications are up and running at full speed when the company needs it. The latter will make sure that the server is handling the workload, running more sessions at the same time, and making both internal and external customers happy.

But how about the time you spent trying to find more disk space for all your databases? How about trying to find disk space for all your backups because the database grew larger than expected and we didn’t plan ahead? Then there’s all the time you spend installing SQL and Windows patches. I know that in some big companies there is a dedicated SAN admin and infrastructure administrators who will worry about those tasks, but that’s not everyone’s reality. The vast majority of small and medium companies have a small team that is responsible for multiple areas. Why? Scroll up and read problems 1 and 2 in my list above one more time.

I’ll wait for you.

Now, let’s imagine another reality. Let’s imagine a world where I receive a disk space alert for my backups. The company has acquired a new company, the database growth was much bigger than expected, and we ran out of disk space. I go to a web portal and a few mouse clicks later I have 1TB of disk available to me. All I have to do is open SQL Server Management Studio and change my backup jobs to use the new storage area. Problem solved in less than 15 minutes.

Let’s envision a world where I can get all those small databases I have that are not too important for the business (yeah, we all have a lot of those, don’t lie to yourself) and move them to the cloud so they don’t use our precious server resources. I don’t need to worry about patching and managing those databases. Wouldn’t that be great? And how about getting rid of the QA and testing servers and replacing them with virtual machines that can simply be turned off when they are not in use, saving money? And what about those huge tables with hundreds of millions of rows that cause us problems every single day? Wouldn’t it be great if I could replace the complicated sliding-window partition solution we developed to manage historical data, and instead have SQL Server automatically move old and unused data to the cloud, while keeping the data available to end users in a transparent way?

The cloud does indeed shift the dynamics of our career, but it is not something that will kill the database administrator role and destroy families. Instead, it will make us more efficient, providing tools and options that let us focus on the tasks that bring value to the company. It is a way to use our existing hardware more efficiently and make our lives easier. Embrace these changes just as we embraced all the new technologies that came before, and use each one as a tool to be successful in your role.

 

Discover more about our expertise in the Cloud.

Categories: DBA Blogs

My Virtual Internship with Pythian

Pythian Group - Thu, 2016-01-28 14:38

My internship with Pythian started in October and has been an incredible opportunity to build upon my project management skills with a virtual internship. I never imagined working for a tech company. As business students, we are often characterized as lacking the hard skills needed for industries as competitive as those in STEM fields. After my internship with Pythian, I know that my perspective can bring value to a team especially within the tech industry. My work with Pythian has inspired me to apply to computer science programs in Taiwan after I graduate in June.

During my time at Pythian I worked on the Pythia Program which is Pythian’s commitment to improving gender diversity by encouraging, promoting and retaining women and young girls in STEM fields. I was able to work with managers across many different departments and learn how to be part of a virtual team, while building a plan for the Pythia Program.

 

Making an impact for women in STEM fields

The Pythia program is setting an incredible precedent for other tech companies in North America and I am very excited that I was a part of that process. Researching and compiling data on women in STEM fields, particularly the tech industries, was an eye-opening experience. Identifying the pain points for underrepresented groups in tech, particularly women, is key in developing solutions that encourage women to seek positions in this field.

I am looking forward to seeing Pythian’s impact in the local community with Technovation and the impact on the young women who will be walking the halls and learning from such great mentors. Pythian is not only making great strides for the young women in their local community, but for all women in tech by setting an example with their own diversity initiatives.

 

Working with the women of Pythian

While assigned to the Pythia Program, I was lucky to be working with women who were heads of their departments and brought a diverse range of skills to the table. Whether building communication plans with the marketing department, or measuring progress with the organizational development team, I was always challenged to look at the issue from different perspectives.

As a project manager it can be very easy to stay on the periphery and remain an outsider, and admittedly this was a concern of mine as an intern and a newcomer to Pythian. The level of trust that the OD department put in me, and their guiding hand in helping me navigate the onboarding period, were instrumental to our success.

Working with women from many different technical backgrounds, I was able to learn much more than if I had stayed within one specific department. I cannot overstate how important it was, as a young woman, to work with other women on the Pythia Program. It was inspiring to work with accomplished women who had so much expertise and were willing to share it.

 

Working virtually is a whole different ballgame

It has been said that virtual work can be trying and difficult for those new to a team; however, my time with Pythian was the complete opposite. I am very thankful to have been able to navigate a virtual internship with such incredible support from the OD team. The lines of communication have always been open, and this has been instrumental to our success on the Pythia Program.

Pythian’s managers made themselves accessible and available during my time on the Pythia program, and their guidance was excellent, as was learning from their experiences.

All in all, I could not have asked for a greater internship than my time at Pythian. I was able to work on a project that was important to me as a woman, while working with other women at Pythian. Together we made an impact within the organization and in the local tech community in Ottawa. In the months to come we will see the reach of the Pythia Program on others. For myself, the experience has been impressive as I apply to computer science programs abroad and see my own future in tech. I look forward to following the continued success of Pythian and the Pythia Program.

 

Find out more about the Pythia Program.

Categories: DBA Blogs

SQL Server Cumulative Update changes

Pythian Group - Wed, 2016-01-27 12:42

 

Microsoft recently released the first Cumulative Update for SQL Server 2012 SP3. This CU addresses 8 issues in the product. But more importantly, it also marks a big shift in the Cumulative Update message sent by Microsoft.

For a long time, there has been a discussion among DBAs about when, and whether, we should apply a cumulative update. Microsoft’s official message has always been that we should apply a cumulative update only if we were facing an issue in our environment caused by a known and fixed bug. This was also evident from the fact that, to download a cumulative update, it was necessary to register on their website and provide an email address, to which they would send a link to the package.

So, what has changed? Starting now, the latest cumulative update package will be maintained in the Download Center instead of on the hotfix server. This eliminates the need to register to get the latest package, but it is not the only change: the entire message in the knowledge base article has been rewritten, and instead of a warning that we should not install the package unless it was necessary, we now have:
“ Microsoft recommends ongoing, proactive installation of SQL Server CUs as they become available:
SQL Server CUs are certified to the same levels as Service Packs, and should be installed with the same level of confidence
Historical data show a significant number of support cases involve an issue that has already been addressed in a released CU
CUs may contain added value over and above hotfixes such as supportability, manageability, and reliability updates
As with SQL Server Service packs, Microsoft recommends that you test CUs before deploying to production environments”

This is a big change from what we had before. The caution we exercised in the past was warranted because hotfixes were not tested to the same level as service packs: there were no regression tests, and not all possible integration tests were executed, so there was a real concern that something could go wrong in specific scenarios that had not been tested. That has changed, and every cumulative update now goes through the same certification levels that are applied to Service Packs.

This trend is not limited to SQL Server; it is the result of an agile development effort happening across the entire Microsoft product stack. Windows, in both its desktop and server editions, has already been receiving constant updates instead of Service Packs for some time, and it seems SQL Server will soon follow the same road.

This big change in how Microsoft delivers updates brings us to an interesting discussion: how do you manage frequent, constant product updates in your environment? The last item in the Microsoft message clearly says that you need to test CUs before applying them, just as you test Service Packs. Are customers willing to go through testing and patching cycles every couple of months when a new CU is released? How can we convince customers of the benefit of staying on the latest version of the product?

I believe people will eventually get used to this new model of constant updates and catch up, creating plans to update their environments more often; maybe not applying every single CU, but applying every second release, or updating every half year, and so on.

What do you think? How do you see this new model fitting into your existing environment? I would love to hear other people’s experiences on this subject.

Oh, and before I forget: you can download the latest CU for SQL 2012 SP3 that I mentioned at the beginning of the article. The link will not change with every release, so you will always be able to download the latest version using it.
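
One small practical note: to confirm which build an instance is actually running before and after applying a CU, a quick check from any query window is (compare the build number with the one listed in the CU’s KB article):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- build number
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- Service Pack level, e.g. SP3
       @@VERSION                        AS VersionString;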

Categories: DBA Blogs

How do you define IT innovation?

Pythian Group - Wed, 2016-01-27 10:57

Find out how the experts answered at Pythian’s Velocity of Innovation event in San Francisco

Once again, I had the pleasure of moderating another Velocity of Innovation event this past November in San Francisco. Both panelists and guests brought with them a varied range of insights, experiences and perspectives. And as always, this resulted in a thoughtful and lively discussion.

The format of these events is to put three IT leaders with expertise in a particular area in a room, and start a conversation. With our Velocity events, we always start our discussions with a few questions, and the panelists provide some answers. But the idea is to get a conversation going between the audience and panelists.

That day, we discussed a range of subjects from IT innovation, to security, to our favourite wrap-up topic: taking a look into the IT crystal ball and commenting on what current technology trends will really take hold in the future. This last one is always a lot of fun. In this blog post I will provide just some of the highlights from the first discussion topic at the event: innovation and agility in IT.

On our panel were three IT experts:

Sean Rich is the director of IT at Mozilla, leading their application services group. He takes care of web operations, along with pretty much everything data related.

Cory Isaacson is the CTO at Risk Management Solutions. He’s currently bringing big data and scalable systems together to create a new cloud-based platform.

Aaron Lee is Pythian’s VP of transformation services. He runs a team that specializes in helping clients harness technology to deliver real outcomes. Usually they involve things like big data, DevOps, cloud, advanced analytics. He’s involved in some of the most leading edge initiatives for Pythian customers.

I started the discussion by asking the panel, and the audience members, to discuss the notions of innovation and agility, and to try to describe what they have explicitly done to improve innovation and to make their own organizations and those of their customers more agile.

Cory: My business evaluates risk. Our customers are the biggest insurance companies in the world. We run catastrophe models for them so that we can actually see what an earthquake might cost them or a hurricane, for example. I run technology for the company and have to build all the software. We’re innovating tremendously and so now it’s funny because our executives ask us to evaluate the risk of our own projects. We’re trying to do some very, very innovative things. I don’t know if any of you have an insurance background, but it’s not the most up-to-date industry when it comes to technology. As you know it’s been around a long, long time. But at my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. And that’s why I took the position. But when you’re doing innovation, it is risky. There’s no way around it. There are a lot to evaluate: from different algorithms to the risk models and the catastrophe models. How do you evaluate them? Can you actually run them? We’re going from a 25-year-old desktop application, running on Microsoft SQL server to a cloud-based implementation. We’re taking thousands of servers and trying to move all the customers into a cloud implementation.

Sean: In my role I’m interpreting it a little bit differently. Innovation is doing something new. In an effort toward agility, one of the things that we’re doing within our organization is enabling the agility of our business partners, by changing our own operating model. Traditional IT where we run all the services and infrastructure necessary to drive the business, actually taking more of an enabler or a partnership approach where we’re doing things like encouraging shadow IT, encouraging the use of SaaS applications and helping them really do that better through different service offerings like vendor management or change management when it comes to user adoption of certain platforms, data integration so that when we have work flows that span multiple areas of the business, we can complete those without moving data around manually and some of the other problems that come with that. That’s one way that we’re doing new things, looking at ourselves differently, what new capabilities do we need to develop, processes, tools and skills to enable agility for our marketing group or our product lines, as an example. That’s a little bit of one way I think about it.

Then I asked: What gets in the way of agility?

Aaron: I think it’s interesting considering we used to improve innovation by looking right at the root of the motivation for it. Why are we going down this path, trying to innovate something and what is the value of that thing we’re trying to innovate? It’s my belief that the thing that we should value in our colleagues is their judgment. The pre-conditions for being able to offer judgment are you need to be fully informed, you need to be aligned with an objective, there needs to be a reason and even an incentive to offer that kind of input. If the shared goals around innovation opportunities aren’t defined in a way that actually lead to success over time, then the business is just like any other organism: it gets its fingers burned, it starts to get more risk adverse and then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criteria that we put in place to help ourselves and our customers get to a good place.

Attendee 1: Established companies that have the ability to retain long-term leaders are losing the race because they’re not agile. Those leaders need to transform in their own minds first, say, “This is something that needs to be done,” commit to it, and maintain an attitude where, having given that direction, they don’t penalize employees for failure as they run small experiments. A lot of companies have complex projects where they spin off a small team and say, “You go do whatever you want and we are going to give you some limited funding, but we are not going to ask you for results.” CEOs and COOs are looking for, “If I spend 10 million bucks, what am I going to get for it?” When you focus only on bottom-line results, you hide the cost of innovation.

Sean: Yeah, it’s culturally endemic, sort of near-term focus on success instead of on the long term and the impact that has on innovation.

Cory: There are some companies, like Google who have been known to allow an engineer to take a day a week or a day every two weeks and just look at things. I think though, the challenge is you have to get your organization up to the point where this is an economically viable thing to do. That’d be something I’d love to do with our team, but they’re so stressed out on getting the next thing done. Once we get more ahead of the curve, I think we could do that kind of thing.

The discussion went on to cover operationalizing innovation, or making it a program within an organization, before we moved on to other similarly thought-provoking subjects.

Interested in being a part of a discussion like this one? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com.

Categories: DBA Blogs

Links for 2016-01-26 [del.icio.us]

Categories: DBA Blogs

Partner Webcast – Oracle Forms 12c Upgrade: Assessing the Business Benefits

Oracle Forms and Reports is a component of Oracle Fusion Middleware; thus, along with the general availability of Oracle Fusion Middleware 12c (12.2.1.0.0), Oracle Forms and Reports...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Hybrid Databases: Key to Your Success

Pythian Group - Mon, 2016-01-25 14:51

Almost every company’s success is contingent upon its ability to effectively turn data into knowledge. Data, combined with the power of analytics, brings new, previously unavailable business insights that help companies truly transform the way they do business—allowing them to make more informed decisions, improve customer engagement, and predict trends.

Companies can only achieve these insights if they have a holistic view of their data. Unfortunately, most companies have hybrid data—different types of data, living in different systems that don’t talk to each other. What companies need is a way to consolidate the data they currently have while allowing for the integration of new data.

The Hybrid Data Challenge

There are two primary reasons why data starts life in separate systems. The first is team segregation: each team in an organization builds its systems independently of the others, based on its own objectives and data requirements. This leads, in part, to the second reason: teams often choose a database system that specializes in handling their specific application tasks. Sometimes they need to prioritize scalability and performance; other times, flexibility and reliability are more important.

Businesses need to evolve their data strategy to create a “data singularity”: a single view that brings the whole of their information assets together as a competitive, strategic advantage and enables real-time knowledge. A key objective of this strategy should be to reduce data movement, especially during processing and while executing queries, because both are very expensive operations. Data movement, and data replication in general, is also a fragile operation that often requires significant support resources to run smoothly.

Enter The World Of Hybrid Databases

A hybrid database is a flexible, high-performing data store that can store and access multiple, different data types, including unstructured (pictures, videos, free text) and semi-structured data (XML, JSON). It can locate individual records quickly, handle both analytical and transactional workloads simultaneously, and perform analytical queries at scale. Analytical queries can be quite resource-intensive so a hybrid database needs to be able to scale out and scale linearly. It must also be highly available and include remote replication capabilities to ensure the data is always accessible.

In the past, many organizations consolidated their data in a data warehouse. Data warehouses gave us the ability to access all of our data, and while these systems were highly optimized for analytical long-running queries, they were strictly batch-oriented with weekly or nightly loads. Today, we demand results in real time and query times in milliseconds.

When Cassandra and Hadoop first entered the market, in 2008 and 2011, respectively, they addressed the scalability limitations of traditional relational database systems and data warehouses, but they had restricted functionality. Hadoop offered infinite linear scalability for storing data, but no support for SQL or any kind of defined data structure. Cassandra, a popular NoSQL option, supported semi-structured, distributed document formats, but couldn’t do analytics. They both required significant integration efforts to get the results organizations were looking for.

In 2008, Oracle introduced the first “mixed use” database appliance—Exadata. Oracle Exadata brought a very high performant reference hardware configuration, an engineered system, as well as unique features in query performance and data compression, on top of an already excellent transactional processing system, with full SQL support, document-style datatypes, spatial and graph extensions, and many other features.
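
To make “document-style datatypes with full SQL support” a little more concrete, here is a minimal sketch of the kind of table a hybrid engine allows (Oracle 12c syntax; the table and column names are invented for illustration):

create table customer_events (
  event_id  number primary key,
  event_doc clob check (event_doc is json)   -- semi-structured JSON stored alongside relational columns
);

select json_value(event_doc, '$.customer.country') as country,
       count(*)                                    as events
from   customer_events
group  by json_value(event_doc, '$.customer.country');

The same table can serve single-row transactional lookups and set-based analytical queries, which is exactly the hybrid behaviour described above.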

In recent years, vendors have started pursuing the hybrid market more aggressively, and products that cross these boundaries have begun to emerge. We can now run Hadoop-style MapReduce jobs on Cassandra data, and SQL on Hadoop via Impala. Microsoft SQL Server introduced columnstore storage for analytical data sets, and In-Memory OLTP, a high-performance transactional engine with log-based disk storage. Oracle also introduced improvements to its product line with Oracle Database In-Memory, a dedicated high-performance in-memory column store that brings extreme analytical performance.

Choosing A Hybrid Database Solution

Rearchitecting your data infrastructure and choosing the right platform can be complicated—and expensive. To make informed decisions, start with your end goals in mind, know what types of data you have, and ensure you have a scalable solution to meet your growing and changing organizational and business needs—all while maintaining security, accessibility, flexibility, and interoperability with your existing infrastructure. A key design principle is to reduce your data movement and keep your architecture simple. For example, a solution that relies on data being in Hadoop but performs analytics in another database engine is not optimal because large amounts of data must be copied between the two systems during query execution—a very inefficient and compute-intensive process.

Have questions? Contact Pythian. We can help analyze your business data requirements, recommend solutions, and create a roadmap for your data strategy, leveraging either hybrid or special purpose databases.

Categories: DBA Blogs

Links for 2016-01-24 [del.icio.us]

Categories: DBA Blogs