
DBA Blogs

Keeping Cassandra Alive

Pythian Group - Mon, 2015-04-20 12:28

Troubleshooting Cassandra under pressure

This is the second blog post in the series, and it is a bit more technical than the first one. I will explain some things you can do to keep a cluster/server running when you are having problems in that cluster.

Cassandra has changed a lot over the last 4 years (from 0.8 to 2.1), so I will refrain from discussing troubleshooting problems that affect only specific versions. Also, this is the kind of troubleshooting you need when you can’t “add a node”.

Why can’t I just add a node? Well, if you aren’t on vnodes, and you didn’t pre-calculate the token ranges, adding a node is a big effort. Other constraints may also apply, like budget or delivery time for hardware (if you are on bare metal). Plus rack capacity, power constraints, etc…

Now you may say:

“Ok, we can’t add a node! What should we do?! We have a storm coming!”

So, I have navigated through that kind of storm; it’s not an easy task, but it’s doable! First, you have to know what you have: that is critical! You also need to know where you can afford to take more damage.

Let’s assume you have the following situation, and look at what I recommend for it:

  • Heavy Write Cluster, Low Read

Now let’s define “storm”: a storm is not when Cassandra fails; it’s an unanticipated load increase or a disaster. What happens is that you have more load than your planned capacity (either because of node failures or because of a sudden traffic increase). This will increase your resource usage to a point where your machines will start to die.

Let’s understand what can cause a Cassandra process to die, and probably the machine with it (if you OOM and you didn’t configure swap… I warned you!), in the scenario described above.

  1. More data going to the commitlog = more I/O pressure (ignore this if you have the commitlog on a different HDD)
  2. Data is written to memtables = Memory is used
  3. Memtables reach thresholds faster, get flushed to disk = I/O pressure
  4. Compaction starts faster and frequently = I/O pressure, CPU pressure
  5. Too much I/O; compaction can’t compact fast enough and the memtables aren’t flushing fast enough = memory not being released.
  6. Too much memory usage, JVM triggers GC more frequently = CPU pressure
  7. JVM can’t release memory = OOM
  8. OOM = PUM! The node dies (if you are “lucky”, the kernel will kill Cassandra).

And I didn’t even go through the hints that would be stored as nodes became unresponsive and sent out once they come back online.

So now we know where our pain points are. Let’s understand them and see what we can do about them:

  • Commitlog – Let’s just assume you have this on separate HDD, and don’t do anything about it (after all it’s your safeguard).
  • Memtables – We can control how often they are flushed; this is a possible tweak point, although it requires a Cassandra restart for the changes to take effect.
  • Compaction – We can control this via nodetool; in later versions we can even disable it.
  • JVM GC – We can change settings, but they are difficult to tweak and a restart is needed.
  • Swap – We can play a bit here if we do have a swap partition.
  • Dirty_ratio – Controls how much dirty data the kernel’s page cache will hold before forcing writes to the HDD. Raising it can put your data at risk, but it can also help.
  • Replication Factor – This can be changed on the fly and will help by putting less pressure on the nodes.

So, what to do? Where to start? It depends on the scenario, case by case. I would probably let my read performance suffer to keep the writes getting in. The easiest way to allow that is making the reads CL = ONE. That sometimes looks like the fast and easy option, but if your writes are not using QUORUM and/or you have read_repair… you will spread even more writes (with RF > 1). I would pick compaction as my first target; you can always try to get it back up to pace later (re-enable it, increase compaction throughput). Another option would be to increase dirty_ratio and risk losing data (trusting the commitlogs + RF > 1 helps avoid actual loss), but this will give your HDDs more room until the cluster recovers.
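To make that concrete, here is the kind of knob-turning I mean. This is a sketch only; exact commands and sensible values vary by Cassandra version, and the numbers are illustrative:

# during the storm: trade compaction work for headroom (later versions only)
nodetool disableautocompaction
# once things calm down: re-enable compaction and let it catch up
nodetool enableautocompaction
nodetool setcompactionthroughput 0   # 0 unthrottles; or pick a higher MB/s value
# riskier: let the page cache absorb more writes before the kernel forces them to disk
sysctl -w vm.dirty_ratio=60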

But every case is different. I will talk about my own case, problems, and resolutions this Wednesday at the Datastax Day in London! Feel free to join me!

 

Categories: DBA Blogs

Pillars of PowerShell: Debugging

Pythian Group - Mon, 2015-04-20 12:09
Introduction

This is the third blog post continuing the series on the Pillars of PowerShell. The first two posts are:

  1. Interacting
  2. Commanding

We are going to take a bit of a jump and dig into a particular topic that I think is better to go over up front, instead of later. In this post I want to go over a few ways you can debug scripts, or just issues, in PowerShell. This is a topic that can get very advanced and make for a very long blog post. Instead of trying to put all of that into one post, I will share a few links at the end of this post that point you to some of the deeper-dive material on debugging.

Pillar 3: Debugging

When it comes to writing scripts or developing T-SQL procedures, you will generally see folks use print statements to either check where the processing is at in the script, or to output some “additional” information. PowerShell is no different: it offers cmdlets that write output to various destinations, and you can even use them to make decisions. One of the main ones I like to use when I write scripts is Write-Verbose. You may see some folks use Write-Host in their scripts, and all I can say to that is, “be kind to puppies”. The basic gist is that Write-Host outputs plain text, and will always output that text unless you comment it out or remove it from your script. With Write-Verbose you can have that information output only when a parameter switch is used, aptly called “-Verbose”. This switch is included in most built-in cmdlets for modules provided by Microsoft. If you want to include it in your script or function you simply need to include this at the top:

[CmdletBinding()]
Param()

So in the example below you can see that neither function outputs the Write-Verbose message when called without the switch:

[Screenshot: p3_function_verbose_example]
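The original post showed the functions in a screenshot; here is a minimal reconstruction of what they likely looked like (the function names come from the post, the bodies are my assumption):

function Test-NoVerbose {
    # No [CmdletBinding()], so the common -Verbose switch has no effect here
    Write-Verbose "Inside Test-NoVerbose"
}

function Test-WithVerbose {
    [CmdletBinding()]
    Param()
    # [CmdletBinding()] adds the common parameters, including -Verbose
    Write-Verbose "Inside Test-WithVerbose"
}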

The difference you will see is that Test-NoVerbose does not do anything when you include the verbose switch, whereas Test-WithVerbose will:

[Screenshot: p3_function_verbose_example_2]
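Again, in place of the original screenshot, calling them would go something like this (output reconstructed from the description above):

PS> Test-NoVerbose -Verbose      # no output: the switch is silently ignored
PS> Test-WithVerbose -Verbose
VERBOSE: Inside Test-WithVerbose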

So in cases where other people may be using your scripts this feature will help keep output clean, unless you need it for debugging. I tend to use this most often when I am working on long scripts that I want to initially know what is going on as it runs. If I ever have to come back to the script for debugging I can just use the switch, versus the normal execution which doesn’t need all that output.

Errors

Errors are going to happen; it is inevitable in your scripting journey that at some point you will have an error. You cannot always prepare for every error, but you can collect as much information about an error as possible to help in debugging. Just like you would handle errors in T-SQL using a TRY/CATCH block, you can do the same in PowerShell.

PowerShell offers a variable that is available in every session you open or run called $Error. [The dollar sign in PowerShell denotes a variable.] This variable holds records of the errors that have occurred in your session, including those raised in your scripts. There are other errors or exceptions that can be thrown by .NET objects which work a bit differently in how you capture them; I will refer you to Allen White’s post on Handling Errors in PowerShell for a good example.
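A minimal sketch of such a TRY/CATCH block (the path and messages are purely illustrative):

try {
    # -ErrorAction Stop promotes the non-terminating error to a terminating one,
    # so the catch block actually fires
    Get-Item -Path 'C:\DoesNotExist\file.txt' -ErrorAction Stop
}
catch {
    # $_ is the current error record; $Error[0] holds the most recent error in the session
    Write-Verbose "Caught: $($_.Exception.Message)" -Verbose
}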

Summary

Debugging is one of those topics that could fill a 3-day course, so one blog post is obviously not going to cover all the information you might need. I came across a good blog post by the PowerShell Team on Advanced Debugging in PowerShell that should be a must-read for anyone wanting to get involved with PowerShell scripting.

Categories: DBA Blogs

Oracle Digital Transformation EMEA Partner Community Launch Summit - 28-29th April 2015, Budapest, Hungary

BE A DIGITAL DISRUPTOR! Be part of the Digital Disruption wave that is prevalent in all our partner and customer discussions today. Join us in Budapest to understand what Oracle’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Parallel Execution -- 3b Limiting PX Servers with Resource Manager

Hemant K Chitale - Sun, 2015-04-19 10:57
As demonstrated earlier, in the absence of CALIBRATE_IO, the "automatic" degree computed by a PARALLEL Hint is CPU_COUNT x PARALLEL_THREADS_PER_CPU

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
1 16 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Now, I shall explore using the Resource Manager to place a limit.

HEMANT>connect system/oracle
Connected.
SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP (
3 CONSUMER_GROUP=>'PX_QUERY_USER',
4 COMMENT=>'New Group for PX');
5 END;
6 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PLAN(
3 PLAN=>'LIMIT_PX_TO_4',
4 COMMENT=>'Limit PQ/PX to 4 Server Processes');
5 END;
6 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
3 PLAN=>'LIMIT_PX_TO_4',
4 GROUP_OR_SUBPLAN=>'PX_QUERY_USER',
5 PARALLEL_DEGREE_LIMIT_P1=>4,
6 COMMENT=>'Directive to limit PQ/PX to 4 Server Processes');
7 END;
8 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /
BEGIN
*
ERROR at line 1:
ORA-29382: validation of pending area failed
ORA-29377: consumer group OTHER_GROUPS is not part of top-plan LIMIT_PX_TO_4
ORA-06512: at "SYS.DBMS_RMIN", line 444
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 809
ORA-06512: at line 2


SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
3 PLAN=>'LIMIT_PX_TO_4',
4 GROUP_OR_SUBPLAN=>'OTHER_GROUPS',
5 MGMT_P1=>10,
6 COMMENT=>'Directive for OTHER_GROUPS (mandatory)');
7 END;
8 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>alter system set resource_manager_plan='LIMIT_PX_TO_4';

System altered.

SYSTEM>alter system flush shared_pool;

System altered.

SYSTEM>

I have now created a single consumer group and a plan.
Note 1: The COMMENT is mandatory for the Consumer Group, the Plan, and the Directives.
Note 2: It is mandatory to specify Directives for OTHER_GROUPS in the Plan (even if I don't explicitly define any other groups).
For the group 'PX_QUERY_USER', I've set a PX limit of 4 (and no CPU limit). For 'OTHER_GROUPS', I've set a CPU allocation (MGMT_P1) of 10%.
I must now associate the Consumer Group with the User.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
3 ATTRIBUTE=>DBMS_RESOURCE_MANAGER.ORACLE_USER,
4 VALUE=>'HEMANT',
5 CONSUMER_GROUP=>'PX_QUERY_USER');
6 END;
7 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>alter system flush shared_pool;

System altered.

SYSTEM>
SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
3 GRANTEE_NAME=>'HEMANT',
4 CONSUMER_GROUP=>'PX_QUERY_USER',
5 GRANT_OPTION=>FALSE);
6 END;
7 /

PL/SQL procedure successfully completed.

SYSTEM>
SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>

Let me test the configuration now.

SYSTEM>connect hemant/hemant
Connected.
HEMANT>select username, resource_consumer_group
2 from v$session
3 where username='HEMANT'
4 /

USERNAME RESOURCE_CONSUMER_GROUP
------------------------------ --------------------------------
HEMANT PX_QUERY_USER

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
1 4 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Now, the same query executed with only 4 PX servers.

.
.
.


Categories: DBA Blogs

Book Review: Oracle Database 12c New Features by Robert Freeman

Oracle in Action - Sun, 2015-04-19 03:04


This book touches on a myriad of new features of Oracle Database 12c relevant to DBAs, developers, and architects. It starts with the new features as well as step-by-step, detailed installation instructions along with relevant screenshots, followed by an introduction to EM Express. The next chapter covers new features related to upgrading to Oracle Database 12c and various methods to perform the upgrade. All the subsequent chapters explore a whole lot of new features, among which, as a DBA, I was most interested in the multitenant architecture, Flex Clusters, Flex ASM, ACFS, RMAN-related new features, Oracle Data Guard new features, auditing, statistics, and optimizer-related new features. For every feature, first the motivation for introducing it is explained, followed by a demonstration of its basic functionality with simple and easy-to-reproduce scripts. There is a lot left to be learnt and explored, for which you need to refer to the documentation and practice. In short, this book is a launching point to start your journey of understanding Oracle Database 12c.

Thanks to the author Robert Freeman, and to contributors Scott Black, Tom Kyte, and Eric Yen, for putting together such a great book. A must-have for Oracle enthusiasts.

 

 




The post Book Review: Oracle Database 12c New Features by Robert Freeman appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Log Buffer #419: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-04-17 10:51

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from around the world.

Oracle:

  • Why the Internet of Things should matter to you
  • Modifying Sales Behavior Using Oracle SPM – Written by Tyrice Johnson
  • SQLcl: Run a Query Over and Over, Refresh the Screen
  • Data Integration Tips: ODI 12.1.3 – Convert to Flow
  • JRE 1.8.0_45 Certified with Oracle E-Business Suite

SQL Server:

  • What’s this, a conditional WHERE clause that doesn’t use dynamic SQL?
  • The job of a DBA requires a fusion of skill and knowledge. To acquire this requires a craftsman mindset. Craftsmen find that the better they get at the work, the more enjoyable the work gets, and the more successful they become.
  • Using SQL to perform cluster analysis to gain insight into data with unknown groups
  • There are times when you don’t want to return a complete set of records. When you have this kind of requirement to select only the TOP X number of items, Transact SQL (TSQL) has the TOP clause to meet your needs.
  • Spatial Data in SQL Server has special indexing because it has to perform specialised functions.

MySQL:

  • Profiling MySQL queries from Performance Schema
  • How to Easily Identify Tables With Temporal Types in Old Format!
  • The Perfect Server – CentOS 7.1 with Apache2, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3
  • Database Security – How to fully SSL-encrypt MySQL Galera Cluster and ClusterControl
  • MDX: retrieving the entire hierarchy path with Ancestors()

Categories: DBA Blogs

Good Singapore Maths Students Would Likely Make Good Oracle DBAs (Problems)

Richard Foote - Tue, 2015-04-14 19:46
An interesting mathematics based question from a Singapore high school exam has been doing the internet rounds in the past few days. Considering it’s aimed at high school students, it’s a tricky one and obviously designed to filter out the better students, in a country with a very good reputation for churning out mathematically gifted […]
Categories: DBA Blogs

Ever Wondered How Pythian is Kind of Like a Fire Truck?

Pythian Group - Tue, 2015-04-14 06:10

 

I have.

Coming from the world of selling fire trucks, I’m used to selling necessary solutions to customers in need. The stakes are high. If the truck doesn’t perform, the best case scenario is that it’s a false alarm. The worst case scenario is that someone, or many people, die.

Let me tell you a bit about fire trucks.

A lot of people think that a fire truck is a fire truck. That there is some factory where fire trucks are made, carbon copies of one another, varying only in what they carry – water, a pump, a ladder. That’s not the case. Every truck is custom engineered, designed, and manufactured from scratch. Things can go wrong. In a world where response time is everything, you don’t want something to go wrong. Not with the fire truck. Not when everything else is going wrong. Not when someone is trapped in their vehicle. Not when a house is burning down.

For the past five years I have been selling disaster management systems. There has been a clear, immediate, pressing need from my customers. I loved the urgency, I fed off that energy, helping people in charge of saving lives come up with solutions that help them do just that. When first walking into Pythian, I didn’t understand the importance of data, I didn’t comprehend the stakes. But they are present and the analogy can be made.

Pythian’s services are like a fire truck.

Data is like your house, your car, your life. When your business is dependent on your data and your data fails, your business fails. Data failures are serious. Downtime causes huge revenue losses as well as loss of trust and reputation. Identity theft, loss of security, these disasters are pressing threats in our digitized society.

Pythian’s FIT-ACER program is like your Fire Marshall.

We don’t just prepare for disasters, we help prevent them. Modeled after the Mayo Clinic’s patient checklist, Pythian’s FIT-ACER human reliability check acknowledges that no matter how intelligent our DBAs are (http://www.pythian.com/experts/) they can still make mistakes:

FIT-ACER: Pythian Human Reliability Checklist

F – Focus (SLOW DOWN! Are you ready?)
A – Assess the command (SPEND TIME HERE!)
I – Identify server/DB name, time, authorization
C – Check the server / database name again
T – Type the command (do not hit enter yet)
E – Execute the command
R – Review and document the results

We don’t just hire the best to do the best work, we hire the best, make sure they’re at their best, check their best, and apply their best. Every time we interact with your data we do so at a high level to improve your system, to prevent disaster.  And we answer our phones when disaster hits.

The average response time for a fire crew in Ontario is 6 minutes. The average response time for Pythian is under 4.

Take it from someone who knows disaster,

Pythian’s the best fire truck around.

Categories: DBA Blogs

Find Out Values Of Bind Variables Across Multiple Executions Of A SQL

Oracle in Action - Tue, 2015-04-14 00:48


The execution of the same SQL statement might result in different execution plans for various reasons. In my last post, I demonstrated how the value of the parameter OPTIMIZER_MODE can cause a change in execution plan. In this post, I will demonstrate that, in the case of skewed data distribution, a change in the value of a bind variable can also result in differing execution plans for the same statement. I will also explore how we can find out the values of bind variables used during various executions of the same statement when:

  • the cursors are still available in shared pool
  • the cursors have been flushed to AWR

As part of the setup, I have created a table TEST_BIND with skewed data distribution in column ID, which is indexed. There are 1000 records with ID = 1 and only 1 record with ID = 2.

SQL>select id, count(*) from hr.test_bind group by id;

ID COUNT(*)
---------- ----------
1 1000
2 1

Subsequently, I have executed the following statement with values 1 and 2 assigned to bind variable VID multiple times, so that Adaptive Cursor Sharing kicks in and the statement executes with a Full Table Scan when VID = 1 and with an Index Range Scan when VID = 2.

SQL>select * from hr.test_bind where id = :vid;

–  Now I will execute the statement with different values of the bind variable

SQL>variable vid number;
exec :vID := 1;
select * from hr.test_bind where id = :vid;

variable vid number;
exec :vID := 2;
select * from hr.test_bind where id = :vid;

– Since the statement is still in the shared pool, we can find out the sql_id for the statement from V$SQL
– It can be seen that two child cursors have been created, indicating that the two executions used different execution plans

SQL>select sql_id, child_number, sql_text, is_bind_sensitive bind_sensitive,
is_bind_aware bind_aware, is_shareable shareable
from v$sql
where sql_text like '%select * from hr.test_bind%'
and is_shareable = 'Y'
and sql_text not like '%sql_text%';

SQL_ID CHILD_NUMBER SQL_TEXT BIND_SENSITIVE BIND_AWARE SHAREABLE
------------- ------------ ----------------------------------- ----------
7a73kdgy0q1c2 1 select * from hr.test_bind whe Y Y Y
re id = :vid

7a73kdgy0q1c2 2 select * from hr.test_bind whe Y Y Y
re id = :vid

– It can be verified that different plans are used by the same statement across the two executions, due to the different values of bind variable VID. The respective values of the bind variable are visible in the Peeked Binds section of the execution plan

SQL>select * from table (dbms_xplan.display_cursor('7a73kdgy0q1c2', 1, format => 'TYPICAL +PEEKED_BINDS'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 7a73kdgy0q1c2, child number 1
-------------------------------------
select * from hr.test_bind where id = :vid

Plan hash value: 3519963602
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
|* 1 | TABLE ACCESS FULL| TEST_BIND | 1000 | 7000 | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
1 - :VID (NUMBER): 1

SQL>select * from table (dbms_xplan.display_cursor('7a73kdgy0q1c2', 2, format => 'TYPICAL +PEEKED_BINDS'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
SQL_ID 7a73kdgy0q1c2, child number 2
-------------------------------------
select * from hr.test_bind where id = :vid

Plan hash value: 2825156252
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_BIND | 1 | 7 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_IDX | 1 | | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
1 - :VID (NUMBER): 2

– We can also look at v$sql_bind_capture to find out the values of the bind variable across various executions of the SQL

SQL>SELECT NAME, child_number, DATATYPE_STRING,VALUE_STRING
FROM v$sql_bind_capture
WHERE sql_id='7a73kdgy0q1c2' order by child_number;

NAME CHILD_NUMBER DATATYPE_STRING VALUE_STRING
--------------- ------------ -------------------- ---------------
:VID 0 NUMBER 2
:VID 1 NUMBER 1
:VID 2 NUMBER 2

Now I will demonstrate how we can find out different execution plans and corresponding values of bind variables after the statement has been flushed to AWR.

– To flush the statement to AWR, let’s take a snapshot

SQL>exec dbms_workload_repository.create_snapshot ('ALL');

– We can verify that the statement is indeed in AWR

SQL> select sql_id, sql_text
from dba_hist_sqltext
where sql_text like '%select * from hr.test_bind%'
and  sql_text not like '%sql_text%';
SQL_ID SQL_TEXT
------------- --------------------------------------------------
7a73kdgy0q1c2 select * from hr.test_bind where id = :vid

– Find out the various execution plans and corresponding values of bind variables used by the statement

SQL>select * from table(dbms_xplan.display_awr ('7a73kdgy0q1c2', format => 'TYPICAL +PEEKED_BINDS'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 7a73kdgy0q1c2
--------------------
select * from hr.test_bind where id = :vid

Plan hash value: 2825156252
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_BIND | 1 | 7 | 2 (0)| 00:00:01 |
| 2 | INDEX RANGE SCAN | TEST_IDX | 1 | | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
1 - :VID (NUMBER): 2

SQL_ID 7a73kdgy0q1c2
--------------------
select * from hr.test_bind where id = :vid

Plan hash value: 3519963602
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS FULL| TEST_BIND | 1000 | 7000 | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
1 - :VID (NUMBER): 1

– We can also use dba_hist_sqlbind to check the values of bind variables during 2 executions

SQL>SELECT sql_id, NAME,DATATYPE_STRING,VALUE_STRING
FROM DBA_HIST_SQLBIND WHERE SQL_ID='7a73kdgy0q1c2' ;

SQL_ID NAME DATATYPE_STRING VALUE_STRING
------------- --------------- -------------------- ---------------
7a73kdgy0q1c2 :VID NUMBER 1
7a73kdgy0q1c2 :VID NUMBER 2

Summary:
In case of skewed data distribution, different values of bind variables can result in different execution plans. The various execution plans and corresponding values of bind variables can be determined both when the statement is still in the shared pool and when it has been flushed to AWR.

I hope this post was useful.  Your comments and suggestions are always welcome.
Keep visiting my blog …

References:
http://shaharear.blogspot.in/2009/02/find-bind-variable-value.html

https://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&gid=2342993&discussionID=5993686363038375940&trk=eml-group_discussion_new_comment-respond-btn&midToken=AQE9SYOdN_UFjg&fromEmail=fromEmail&ut=0ZAtL1rWAWtCI1

—————————————————————————————————

Related links:

Home
Tuning Index

Identify difference in CBO parameters across two executions of a SQL




The post Find Out Values Of Bind Variables Across Multiple Executions Of A SQL appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Community dinner @ Pedro’s

Pythian Group - Mon, 2015-04-13 08:36

Folks, as usual Pythian is organizing the community dinner. After many years of food, (responsible) drinking, and photos, this event has become an important moment for all of us to get to know each other better, discuss, and have fun.

This year also marks 20 years of MySQL, so… YEAAAH, let us celebrate with more food, fun, and responsible drinking.

If you have not done it yet… register yourself here: https://www.eventbrite.com/e/the-pythian-mysql-community-pay-your-own-way-dinner-tickets-15692805604

Info about the event:

When: Tuesday April 14, 2015 – 7:00 PM at Pedro’s (You are welcome to show up later, too!)
Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

 

I know, I know… we are the kind of people who decide where to go at the last minute, and every year we do the same. But if you could register, it would help us organize things better… and c’mon, the dinner is on Tuesday… so we are almost there!!!

 

Anyhow, hope to see all of you there, all of you!

Some reference: Menu Eventbrite Pedro

Categories: DBA Blogs

Technology for the Non-Technical

Pythian Group - Mon, 2015-04-13 08:33

I am potentially one of the least technical people in my generation. I’m 30 and I am afraid of my cellphone, my laptop, Netflix, the microwave…. Okay, afraid is maybe a strong word, but baffled by them at the very least.

In high school, while my classmates wrote most of their papers and assignments on the computer, I insisted on writing everything out by hand and only typed it out afterwards if absolutely required. It wasn’t that I had issues with typing – my mom who worked as an administrator for many years made sure that I learned to type from a very young age and I type quickly with a reasonable amount of accuracy. I just felt that writing by hand kept me more “connected” to the words I penned. Simply, my name is Sarah and I am a Luddite.

After high school I studied journalism for a couple of years and then entered the workforce into a number of different jobs, such as in sales and marketing and it became necessary for me to “engage” with technology a little more heavily. Typing articles and assignments slowly became second nature but grocery lists, thank you notes, birthday cards all continued to be written by hand.

For the last few years I’ve been working for technology and IT organizations, and for the last 14 months I’ve been working with Pythian, a leading IT services provider specializing in data infrastructure management. That was a big leap for me. Not only was I required to use technology constantly in my day-to-day (smartphone, CRM system, soft phone, multiple email interfaces... ACK!), but I also needed to do a lot more than dip my toes into some fairly intense technical knowledge to gain an understanding of our client base and what solutions would be most appropriate for the people I speak to every day. These people are Chief Information Officers, Chief Technology Officers, and Vice Presidents of Information Technology for companies that are incredibly data-dependent. The quality and security of their data management directly affects their revenue, and it’s critical that it is handled with a great amount of expertise and attention to detail. Kind of intimidating.

I have spent the last year wrapping myself in terms like NoSQL, non-relational database, Hadoop, MongoDB, SQL Server and Oracle. Do I have a perfect understanding of the benefits and draw-backs of each of these yet? No. What I do have is a great network of technical geniuses who work with me who have spent their careers becoming experts in their respective technologies. I know who the best resources are and how to connect with them to get the best answers and solutions. I’m very lucky to work at company that is incredibly transparent – questions are always welcomed and answered. I sit sandwiched between the offices of the Chief Revenue Officer and the CEO and Founder of our organization and while both are incredibly busy people, they are also happy to answer questions and share their insights and energy with anyone here.

All of our technical resources are just an instant message away and can often answer my questions in a few concise lines. So, while I am still monstrously uncomfortable with tasks like defragging (sounds like organized Fraggle removal to me) my computer or resetting my smartphone when it acts up, I am coming along slowly, in baby steps – an IT late-bloomer you could say – and it’s all much less painful than I ever feared it would be.

Categories: DBA Blogs

My thoughts on the Resilience of Cassandra

Pythian Group - Mon, 2015-04-13 06:32

This blog post is part 1 of a series of 2, and it will be different from my previous blogs, as this is more about some decisions you can make with Cassandra regarding the resilience of your system. I will talk about this topic in depth at the upcoming Datastax Day in London (https://cassandradaylondon2015.sched.org/); this is more of an introduction!

TL;DR: Cassandra is tough!

Cassandra presents itself as delivering “continuous availability, linear scalability, and operational simplicity across many commodity servers with no single point of failure, along with a powerful data model designed for maximum flexibility and fast response times.“ (http://docs.datastax.com/en/cassandra/2.0/cassandra/gettingStartedCassandraIntro.html). In a production system, having your persistence layer failure-tolerant is a big thing; even more so when you can (easily) make it resilient to the failure of entire locations through geographic replication.

As in any production system, you need to plan for failure. Should we blindly trust in Cassandra’s resilience and forget about the plan because “Cassandra can handle it”? Reading the documentation, some may think that having several data centers and a high enough replication factor covers us. In part this is true: Cassandra will handle servers going down, even a full DC (or several!). But you should always prepare for chaos! Failure will increase pressure on your remaining servers, latency will increase, and so on. And when things come back up, will everything just work? Getting all the data back in sync, are you ready for that? Did you forget about gc_grace_seconds? There are lots of variables and small details that can be forgotten if you don’t plan ahead, and in the middle of a problem, having forgotten them will not help!

My experience tells me that you must take Cassandra failures seriously, and plan for them! Having a B plan is never a bad thing, and a C even. Also, make sure those plans work! So for this short introduction I will leave a couple of recommendations:

  • Test your system against Cassandra delivering a bad service (timeouts, high latency, etc).
  • Set a “bare minimum” for your system to work (how low can we go on consistency, for example; see the sketch after this list).
  • Test not only your system going down, but also prepare for the coming up!
  • Keep calm! Cassandra will help you!
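On that second point, you can experiment with the floor ahead of time. A tiny sketch (in cqlsh the CONSISTENCY command applies to that session only; real applications would set the consistency level per query through their driver):

cqlsh> CONSISTENCY ONE;
Consistency level set to ONE.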

Overall, Cassandra is a tough and robust system. I’ve had major problems with the network, storage, Cassandra itself, etc., and in the end Cassandra not only survived, it gave me no downtime. Every problem I had increased my knowledge and awareness of what I could expect. This led to planning for major problems (which did happen), and that planning, combined with the natural resilience of Cassandra, got me through those events without downtime.

Feel free to comment/discuss in the comment section below! Juicy details will be left for London!

Categories: DBA Blogs

Oracle Java Cloud IDE Integrations

Oracle Java Cloud is the platform that runs on Oracle's Weblogic Server combined with the Oracle Database, which allows deploying Java applications on the Cloud just like deploying on a local Weblogic...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Consuming web-services via secured connection with self-signed certificate from MAF application

The most common way to expose existing services from enterprise applications to the mobile environment is using SOAP and REST (JSON, XML) services for consuming data. However, before our solution usually goes...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Oracle Exadata Database Machine X5: Extreme Performance for ISVs

Oracle’s sixth generation Exadata, Oracle Exadata Database Machine X5, is the highest-performing and lowest-cost platform for running Oracle Database. Oracle Exadata's architecture features...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Next Generation Oracle Database Appliance X5

Oracle Database Appliance (ODA) enables you to take advantage of the latest Oracle Database technologies in an easy-to-deploy and manage system that supports virtualization, saving time and money...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQLcl, a revolution for SQL*Plus users

DBA Scripts and Articles - Thu, 2015-04-02 08:34

What is SQLcl? SQLcl is a new command-line interface, like SQL*Plus, that comes along with SQL Developer 4.1 Early Adopter. It is a lightweight tool (only 11MB) developed by the SQL Developer team, fully compatible with Windows and Unix/Linux. You don’t need to install it, so it is totally portable. The tool does not need … Continue reading SQLcl, a revolution for SQL*Plus users →

The post SQLcl, a revolution for SQL*Plus users appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Oracle Business Intelligence Cloud Service (BICS) Partner Training 2-day Workshop

OPN Gold+ member partners can attend this 2-day, hands-on, in-class technical workshop on Oracle Business Intelligence Cloud Service (BICS) at no fee. This workshop consists of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

IOUG Collaborate 2015 – #C15LV

DBASolved - Wed, 2015-04-01 19:31

Like many other Oracle professionals and speakers, I will be attending IOUG Collaborate 2015 this year in Las Vegas. I’m not a big fan of Las Vegas, but hey, I cannot turn down an opportunity to speak, especially when IOUG asked me to do more than one session.

This year my schedule is going to keep me busy, yet it is full of good topics that cover both EM12c and GoldenGate. If you are going to be at Collaborate, come check out my sessions and many others.

My sessions this year:

4/12/2015
09:00 am – 03:00 pm – RAC SIG Function (RAC Attack)

4/13/2015:  
10:30 am – Writing to Lead Panel discussion
12:00 pm – Exadata Exachk and EM12c: Keeping up with Exadata                 
05:30 pm – IOUG Data Integration SIG Meeting

4/14/2015:  
11:00 am – Enable Oracle GoldenGate Monitoring for the Masses with EM12c                     

4/15/2015:  
08:00 am – Examine Oracle GoldenGate Trail Files: How and When to use Logdump Utility
10:45 am – Extreme Replication: Performance Tuning Oracle GoldenGate for the Real World

If you are going to be at Collaborate, I look forward to seeing you there, and hopefully in one of my sessions.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Database, Golden Gate, OEM
Categories: DBA Blogs