
DBA Blogs

Check out the new Oracle help web site

Bobby Durrett's DBA Blog - Thu, 2014-11-13 17:52

I found a broken link to an Oracle help document in one of my posts, and when I went to the Oracle 12c database documentation to find the new URL to put in my post, I found that Oracle had totally revamped their online manuals.

Here is a link to the new high level Oracle help site: url

I’ve only looked at it for a few minutes and can’t say whether I like it or not.  You can still download the manuals in PDF format so that remains familiar.  It looks like the new site integrates documentation across most or all of Oracle’s products in a similar format and that’s pretty cool.

Anyway, I just saw this and thought I would pass it along.

– Bobby


Categories: DBA Blogs

NoCOUG watchers protest despicable tactics being used by NoCOUG management

Iggy Fernandez - Thu, 2014-11-13 13:49
FOR IMMEDIATE RELEASE NoCOUG watchers protest despicable tactics being used by NoCOUG management SILICON VALLEY (NOVEMBER 13, 2014) – Veteran NoCOUG watchers all over Northern California have been protesting the despicable tactics being used by NoCOUG management to lure Oracle Database professionals to the NoCOUG conference at the beautiful eBay Town Hall next week. Instead […]
Categories: DBA Blogs

Foreign Archived Log in #Oracle – what does it mean?

The Oracle Instructor - Thu, 2014-11-13 02:54

When you look into V$RECOVERY_AREA_USAGE, you see a strange row at the bottom:

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         10.18                         0              73          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

Curious what that could be? You will see values other than zero on a Logical Standby Database:

SQL> connect sys/oracle@logst as sysdba
Connected.
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         14.93                         0               9          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                  2.03                         0              26          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

In contrast to a Physical Standby Database, a Logical Standby writes not only into standby logs but also into its own online logs while being in the standby role. That leads to two different kinds of archive logs:

Foreign Archived Logs

When DML (like insert and update) is done on the primary (1), that leads to redo entries in the online logs (2), which are simultaneously shipped to the standby and written into standby logs (2) there as well. The online logs on the primary and the standby logs on the standby will be archived (3) eventually. So far that is the same for both physical and logical standby. But now a difference: Logical standby databases do SQL Apply (4) by logmining the standby logs or the archive logs that came from the primary. That generates similar DML on the standby, which in turn leads LGWR there to write redo into online logs (5) that will eventually get archived (6) as well.

A logical standby could do recovery only with its own archive logs (if there was a backup taken before) but not with the foreign archive logs. Therefore, those foreign archive logs can and do get deleted automatically. V$ARCHIVED_LOG and V$FOREIGN_ARCHIVED_LOG can be queried to monitor the two different kinds of logs.
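
A quick way to keep an eye on both kinds on the logical standby is something like this (just a minimal sketch, pick whatever columns you are interested in):

-- archive logs generated by the logical standby itself
select thread#, max(sequence#) last_seq, count(*) logs
from v$archived_log
group by thread#;

-- foreign archive logs that were shipped from the primary
select thread#, max(sequence#) last_seq, count(*) logs
from v$foreign_archived_log
group by thread#;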

That was one topic of the course Oracle Database 12c: Data Guard Administration that I’m delivering as an LVC this week, by the way. Hope you find it useful :-)


Tagged: Data Guard, High Availability
Categories: DBA Blogs

Sangam 2014

Oracle in Action - Wed, 2014-11-12 23:23


The AIOUG meet “SANGAM – Meeting of Minds” is the Largest Independent Oracle Event in India, organized annually in the month of November. This year, the 6th annual conference, Sangam14 (7th, 8th and 9th November 2014), was held at Hotel Crowne Plaza Bengaluru Electronics City, India.

I had the honour of presenting papers on
- Histograms: Pre-12c and now
- Adaptive Query Optimization
Both papers were well received by the audience.

On the first day, a full-day seminar on “Optimizer Master Class” by Tom Kyte was simply great. Hats off to Tom, who conducted the session throughout the day with relentless energy, answering queries during breaks without taking any break himself.

The pick of the second day was Maria Colgan’s 2 hour session on “What You Need To Know About Oracle Database In-Memory Option”. The session was brilliant, to the point and packed with knowledge about the new feature.

Aman Sharma’s session on 12c High Availability New features was very well conducted and quite informative.

On the 3rd day there was a one-hour session by Dr. Rajdeep Manwani on “Time to Reinvent Yourself – Through Learning, Leading, and Failing”. The session was truly amazing and left the audience introspecting.

On the whole, it was a learning experience with the added advantage of networking with Oracle technologists from core Oracle technology as well as Oracle Applications. Thanks to all the members of the organizing committee whose selfless dedication and efforts made the event so successful. Thanks to all the speakers for sharing their knowledge.

Looking forward to SANGAM 15….


Categories: DBA Blogs

Is Oracle Database 12c (12.1.0.2.0) Faster Than Previous Releases?

I was wondering if the new Oracle Database 12c version 12.1.0.2.0 in-memory column store feature will SLOW performance when it is NOT being used. I think this is a fair question because most Oracle Database systems will NOT be using this feature.

While the new in-memory column store feature is awesome and significant, with each new Oracle feature there is additional kernel code. And if Oracle is not extremely careful, these new lines of Oracle kernel code can slow down the core of Oracle processing, that is, buffer processing in Oracle's buffer cache.

Look at it this way, if a new Oracle release requires 100 more lines of kernel code to be executed to process a single buffer, that will be reflected in how many buffers Oracle can process per second.

To put it bluntly, this article is the result of my research comparing core buffer processing rates between Oracle Database versions 11.2.0.2.0, 12.1.0.1.0 and 12.1.0.2.0.

With postings like this, it is very important for everyone to understand that the results I publish are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also highlight various situations that DBAs need to know about.

There are two interesting results from this research project. This article is about the first discovery and my next article will focus on the second. The second is by far the most interesting!

FYI. Back in August of 2013 I performed a similar experiment where I compared Oracle Database versions 11.2.0.2.0 and 12.1.0.1.0. I posted the article HERE.

Why "Faster" Means More Buffer Gets Processed Per Second
For this experiment when I say "faster" I am referring to raw buffered block processing. When a buffer is touched in the buffer cache it is sometimes called a buffer get or a logical IO. But regardless of the name, every buffer get increases the instance statistic, session logical reads.

I like raw logical IO processing experiments because they are central to all Oracle Database processing. Plus, with each new Oracle release, as additional functionality is added it is likely more lines of Oracle kernel code will exist. To maintain performance with added functionality is an incredible feat. It's more likely the core buffer processing will be slower because of the new features. Is this the case with Oracle's in-memory column store?
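
This is not the harness I used for the experiment (that is my CPU Speed Test tool, mentioned below), but if you want a rough feel for the buffer get rate on your own system, a crude sketch like this works from a single session. The test table is just a placeholder.

-- session logical reads so far in this session
select s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'session logical reads';

-- run a fully cached test query (placeholder table)
select count(*) from my_big_table;

-- session logical reads again; the delta divided by the elapsed
-- milliseconds approximates buffer gets per millisecond
select s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'session logical reads';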

How I Setup The Experiment
I have included all the detailed output, scripts, R commands and output, data plots and more in the Analysis Pack that can be downloaded HERE.

There are a lot of ways I could have run this experiment. But two key items must exist for a fair comparison. First, all the processing must be in cache. There can be no physical read activity. Second, the same SQL must be run during the experiment and have the same execution plan. This implies the Oracle 12c column store will NOT be used. A different execution plan is considered "cheating" as a bad plan will clearly lose. Again, this is a very targeted and specific experiment.

The experiment compares the buffer get rates for a given SQL statement. For each Oracle version, I gathered 33 samples and excluded the first two, just to ensure caching was not an issue. The SQL statement runs for around 10 seconds, processes around 10.2M rows and touches around 5M buffers. I checked to ensure the execution plans are the same for each Oracle version. (Again, all the details are in the Analysis Pack for your reading pleasure.)

I ran the experiment on a Dell server. Here are the details:
$ uname -a
Linux sixcore 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux

To make this easier for myself, I used my CPU Speed Test tool (version 1i) to perform the test. I blogged about this last month HERE. The latest version of this tool can be downloaded HERE.

The Results, Statistically
Shown below are the experimental results. Remember, the statistic I'm measuring is buffer gets per millisecond.


Details about the above table: The "Normal" column indicates whether the statistical distribution of the 31 samples is normal. If the p-value (far right column) is greater than 0.05 then I'll say they are normal. In all three cases, the p-value is less than 0.05. In fact, if you look at the histograms contained in the Analysis Pack, every histogram is visually clearly not normal. As you would expect, the "Average" and the "Median" are the statistical mean and median. The "Max" is the largest value in the sample set. The "Std Dev" is the standard deviation, which doesn't mean much since our sample sets are not normally distributed.

As I blogged about before, Oracle Database 12c buffer processing is faster than Oracle Database 11g. However, the interesting part is that the Oracle version with the in-memory column store, 12.1.0.2.0, is slower than the previous version of 12c, 12.1.0.1.0. In fact, in my experiment the in-memory column store version is around 5.5% slower! This means version 12.1.0.1.0 "out of the box" can process logical buffers around 5.5% faster! Interesting.

In case you're wondering, I used the default out-of-the-box in-memory column store settings for version 12.1.0.2.0. I checked the in-memory size parameter, inmemory_size, and it was indeed set to zero. Also, when I start up the Oracle instance there is no mention of the in-memory column store.
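
If you want to make the same check on your own system, something as simple as this will do (a value of 0 means no In-Memory Area is allocated):

select name, value
from   v$parameter
where  name = 'inmemory_size';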

Statistically Comparing Each Version
As an important side bar, I did statistically compare the Oracle Database versions. Why? Because while a 5.5% decrease in buffer throughput may seem important, it may not be statistically significant, meaning the difference could simply be the result of random variation in our sample sets.

So going around saying version 12.1.0.2.0 is "slower" by 5.5% would be misleading. But in my experiment, it would NOT be misleading because the differences in buffer processing are statistically significant. The relevant experimental details are shown below.

Version A     Version B     Statistical   p-value
                            Difference
----------    ----------    -----------   -------
11.2.0.1.0    12.1.0.1.0    YES           0.0000
11.2.0.1.0    12.1.0.2.0    YES           0.0000
12.1.0.1.0    12.1.0.2.0    YES           0.0000

In all three cases the p-value was less than 0.05, signifying the two sample sets are statistically different. Again, all the details are in the Analysis Pack.

The chart above shows the histograms of both Oracle Database 12c version sample sets together. Visually they look very separated and different with no data crossover. So from both a numeric and visual perspective there is a real difference between 12.1.0.1.0 and 12.1.0.2.0.


What Does This Mean To Me
To me this is surprising. First, there is a clear buffer processing gain upgrading from Oracle 11g to 12c. That is awesome news! But I was not expecting a statistically significant 5.5% buffer processing decrease upgrading to the more recent 12.1.0.2.0 version. Second, this has caused me to do a little digging to perhaps understand the performance decrease. The results of my experimental journey are really interesting...I think more interesting than this posting! But I'll save the details for my next article.

Remember, if you have any questions or concerns about my experiment you can run the experiment yourself. Plus all the details of my experiment are included in the Analysis Pack.

All the best in your Oracle performance tuning work!

Craig.





Categories: DBA Blogs

Presentations to go to at #DOAG2014

The Oracle Instructor - Mon, 2014-11-10 11:26

As every year, there’s a long list of great speakers with interesting talks to attend at the DOAG (German Oracle User Group) annual conference. Sadly I cannot attend them all, so I’ve got to make a choice:

First day

Datenbank-Upgrade nach Oracle 12.1.0.2 – Aufwand, Vorgehen, Kunden by Mike Dietrich, Oracle

Die unheimliche Begegnung der dritten Art: XML DB für den DBA by Carsten Czarski, Oracle

Advanced RAC Programming Features by Martin Bach, Enkitec

Automatische Daten Optimierung, Heatmap und Compression 12c live by Ulrike Schwinn, Oracle

Second day

Understanding Oracle RAC Internals – The Cache Fusion Edition by Markus Michalewicz, Oracle

Die Recovery Area: Warum ihre Verwendung empfohlen ist – I have to go to that one because I present it myself :-)

Geodistributed Oracle GoldenGate and Oracle Active Data Guard: Global Data Services by Larry Carpenter, Oracle

Oracle Database In-Memory – a game changer for data warehousing? by Hermann Baer & Maria Colgan, Oracle

Oracle Distributed Transactions by Joel Goodman, Oracle

Third day

High Noon – Bessere Überlebenschancen beim Datenbank Security Shoot Out by Heinz-Wilhelm Fabry, Oracle

Tuning Tools für echte Männer und Sparfüchse – vom Leben ohne EM12c by Björn Rost, portrix Systems

Best Practices in Managing Oracle RAC Performance in Real Time by Mark Scardina, Oracle

Maximum Availability with Oracle Multitenant: Seeing Is Believing by Larry Carpenter, Oracle


Tagged: #DOAG2014
Categories: DBA Blogs

ECO 2014 and Slides

DBASolved - Mon, 2014-11-10 07:30

Last week I attended the East Coast Oracle User Group conference, also known as ECO, in Raleigh, NC.  This being my first time at ECO, it was a good event for a two-day conference.  The low-key setting provided a nice, comfortable environment for interaction between the speakers and those in attendance.  If you ever have the chance to catch this conference, it would be a good one to attend.

What you can expect from ECO is to see great speakers, both local to Raleigh and from around the country.  There are also opportunities to see speakers that we all hear about and would like to see at some point.  As one of the speakers at this year’s conference, I have to say it was nice to have great attendance for my session on Oracle GoldenGate 12c Conflict Detection and Resolution.  My session was scheduled for 45 minutes; due to discussions throughout the session it lasted about 65 minutes.  Although the session ran over, it was exciting to see so many people wanting to know more about Oracle GoldenGate and what benefits it provides to an organization.

If you would like to see the slides from my ECO session, they can be found here.

Lastly, I would like to say that ECO is one of the smaller user group conferences which seem to draw some great speakers.  Check it out next year!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

SQL Developer and Big Data Appliance (sort of)

DBASolved - Sun, 2014-11-09 20:55

Recently, Enkitec received an Oracle Big Data Appliance (BDA) for our server farm in Dallas (Thanks Accenture!).  With this new addition to the server farm, I’m excited to see what the BDA can do and how to use it.  Since I use Oracle SQL Developer for a lot of things, I figured I had better see if I can connect to it…. wait, I don’t have access yet, darn!  Simple solution: I’ll just use the Oracle Virtual Box VM (Big Data Lite) to make sure that my SQL Developer can connect when I eventually get access.

The first thing I needed to do was download the Big Data Lite VM.  It can be downloaded from the Oracle Technology Network (here).  The second thing I needed to do was download the connectors for HIVE from Cloudera; use the version for the platform you need (here).

After downloading the Cloudera connectors for HIVE, these needed to be unzipped in a location that can be accessed by SQL Developer. Since I’m on a MacBook Pro, I unzipped them in this location:


$ cd ~/Downloads
$ unzip ./Cloudera_HiveJDBC_2.5.4.1006.zip -d /Users/Bobby/Oracle/connectors
$ cd /Users/Bobby/Oracle/connectors
$ ls -ltr
total 21176
-rw-r--r--@ 1 Bobby  staff  5521341 Sep 10 15:16 Cloudera_HiveJDBC4_2.5.4.1006.zip
-rw-r--r--@ 1 Bobby  staff  5317239 Sep 10 15:16 Cloudera_HiveJDBC3_2.5.4.1006.zip
$ unzip ./Cloudera_HiveJDBC4_2.5.4.1006.zip -d ./Hive
$ cd ./Hive
$ ls -ltr
-r--r--r--@ 1 Bobby  staff  1083758 Sep  8 17:28 Cloudera - Simba JDBC Driver for Hive Install Guide.pdf
-rw-r--r--@ 1 Bobby  staff     9679 Sep  8 23:28 slf4j-log4j12-1.5.8.jar
-rw-r--r--@ 1 Bobby  staff    23445 Sep  8 23:28 slf4j-api-1.5.8.jar
-rw-r--r--@ 1 Bobby  staff   367444 Sep  8 23:28 log4j-1.2.14.jar
-rw-r--r--@ 1 Bobby  staff   347531 Sep  8 23:28 libthrift-0.9.0.jar
-rw-r--r--@ 1 Bobby  staff   275186 Sep  8 23:28 libfb303-0.9.0.jar
-rw-r--r--@ 1 Bobby  staff   294796 Sep  8 23:28 ql.jar
-rw-r--r--@ 1 Bobby  staff   596600 Sep  8 23:28 hive_service.jar
-rw-r--r--@ 1 Bobby  staff  7670596 Sep  8 23:28 hive_metastore.jar
-rw-r--r--@ 1 Bobby  staff  2972229 Sep  8 23:28 TCLIServiceClient.jar
-rw-r--r--@ 1 Bobby  staff  1656683 Sep  8 23:29 HiveJDBC4.jar

 
Once the connectors are extracted, SQL Developer needs to know which HIVE connector to use.  In this case the JDBC4 connector is required.  I unzipped the JDBC4 set of files into a directory; in my case I’m using a directory called Hive.

In order to tell SQL Developer which connector to use, it needs to be specified in the interface by doing the following:

  1. Start SQL Developer
  2. Oracle SQL Developer -> Preferences
  3. Database -> Third Party JDBC -> Add Entry
  4. Restart SQL Developer

After restarting SQL Developer, we now see an option on the connection screen for Hive.

Now SQL Developer is ready to connect to a Big Data Appliance, oh I mean to my VM for Big Data Lite :), so let’s set up a connection and see if we can connect.  Since I’m connecting to a Virtual Box VM, I need to set up some ports to be used between my MacBook and the VM.  In this case, I have set up a SQL port on 15211, which maps to the standard database port of 1521.  For the Hive connection I’ve set up 10001, which maps to port 10000.

With the ports put in place, I can now set up SQL Developer to connect to the Hive on the Big Data Lite VM.  You will notice that a username, password, server name and port are needed.  The database parameter is optional when connecting to a Big Data Hive.

Once the connection is configured, I can login to the Hive and review what tables are listed in the Big Data Lite VM.
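
Once connected, a couple of simple HiveQL statements are enough for a quick sanity check from the SQL Developer worksheet.  The table name below is just a placeholder; use whatever tables the VM shows you.

-- list the tables visible through the Hive connection
show tables;

-- look at a few rows from one of them
select * from my_hive_table limit 10;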

The end result is that now I can visualize the data that is in a Big Data Appliance/Lite VM and begin to interact with objects defined within.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: BigData
Categories: DBA Blogs

rmoug 2015 presentation ready to go ... makes me ( not ) grumpy

Grumpy old DBA - Sun, 2014-11-09 19:04
All ready to roll with RMOUG 2015 Training days presentation "OS Truth, little white lies, and the Oracle Wait Interface".

Best of course to come to RMOUG 2015 Training days ... but here is a link to the pdf version of the presentation: John Hurley RMOUG 2015

If you are out there ( very excited personally my first time at this conference ) please kick me and say hello.  Ok maybe skip the kicking part ...


Categories: DBA Blogs

handy ash queries to look at blocked sessions ( how many when for what event ) ...

Grumpy old DBA - Sun, 2014-11-09 15:13
Licensing may be required ... please check if applicable.

A query like this can be used to check how much blocking there was and which sessions were doing it ... ( so then you can drill into those blocker sessions ).  It could probably also be done with some kind of rollup query ... a sketch of one possibility follows the query below.

Adjust SAMPLE_TIMEs in where clause below.

select ash_data.*, substr(sqlinfo.sql_text,1,70)
from
(SELECT to_char(ash.sample_time,'MM/DD/YYYY HH24:MI:SS') what_time,  count(*) sessions_blocked, ash.event, ash.blocking_session, ash.blocking_session_serial#, ash.sql_id, ash.sql_opname
FROM DBA_HIST_ACTIVE_SESS_HISTORY ash
WHERE ash.SAMPLE_TIME >= TO_DATE('01-NOV-2014 13:00', 'DD-MON-YYYY HH24:MI')
  and ash.sample_time <= to_date('08-NOV-2014 17:00', 'DD-MON-YYYY HH24:MI')
-- and ash.event like 'enq: TX - row%'
and blocking_session is not null
group by to_char(ash.sample_time,'MM/DD/YYYY HH24:MI:SS'), ash.event, ash.sql_id, ash.sql_opname, ash.blocking_session, ash.blocking_session_serial#
order by 1) ash_data,
v$sqlarea sqlinfo
where ash_data.sql_id = sqlinfo.sql_id
and sessions_blocked >= 1
order by what_time
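
And here is the rollup idea mentioned above ... a sketch of one possibility that summarizes the same data per blocker instead of per sample time ( adjust the SAMPLE_TIMEs as before ).

select ash.blocking_session, ash.blocking_session_serial#, ash.event,
       count(*) samples_blocked,
       min(ash.sample_time) first_seen, max(ash.sample_time) last_seen
from DBA_HIST_ACTIVE_SESS_HISTORY ash
where ash.sample_time >= to_date('01-NOV-2014 13:00', 'DD-MON-YYYY HH24:MI')
and ash.sample_time <= to_date('08-NOV-2014 17:00', 'DD-MON-YYYY HH24:MI')
and ash.blocking_session is not null
group by ash.blocking_session, ash.blocking_session_serial#, ash.event
order by samples_blocked desc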

...

For example, once you have narrowed it down to something interesting looking ( who is blocked / what sql_id / what event ) ... you can use something like this.  I am now looking at any active session history information on what the blockers were doing or waiting on.


select * from DBA_HIST_ACTIVE_SESS_HISTORY where ( session_id, session_serial# ) in (
SELECT blocking_session, blocking_session_serial# FROM DBA_HIST_ACTIVE_SESS_HISTORY
WHERE SAMPLE_TIME > TO_DATE('06-NOV-2014 14:00', 'DD-MON-YYYY HH24:MI')
AND SAMPLE_TIME < TO_DATE('06-NOV-2014 15:00', 'DD-MON-YYYY HH24:MI')
and event like 'enq: TX - row%'
and sql_id = '0kbzgn17vbfc5' )
and SAMPLE_TIME > TO_DATE('06-NOV-2014 14:30', 'DD-MON-YYYY HH24:MI')
AND SAMPLE_TIME < TO_DATE('06-NOV-2014 15:00', 'DD-MON-YYYY HH24:MI')
order by sample_time
Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 3

Hemant K Chitale - Sat, 2014-11-08 09:48
This is the third post in this series.
Post 1 is here
Post 2 is here

Note : Some figures / details may be slightly changed / masked to hide the real source.

Identifying Unusual Events / Behaviours / Applications

Here I find a few "unusual" events and mark them out.

Extract A : 11.2 AWR

              Snap Id   Snap Time   Sessions   Cursors/Session
Begin Snap:      ....    03:00:57        107                .7
End Snap:        ....    07:00:07        114                .9
Elapsed:        239.17 (mins)
DB Time:         22.61 (mins)
Top 5 Timed Foreground Events

Event                           Waits    Time(s)  Avg wait (ms)  % DB time  Wait Class
-----------------------------  -------  ---------  -------------  ---------  ----------
DB CPU                                       1,332                    98.16
SQL*Net more data to client     49,701          20              0       1.50  Network
SQL*Net more data from client  213,915           5              0       0.34  Network
db file scattered read           1,159           1              1       0.08  User I/O
db file sequential read          7,547           1              0       0.07  User I/O

The two "SQL*Net more data" sets of waits are the unusual events.  The time on SQL*Net more data to/from client is negligible, isn't it ?  So, should I be concerned ?  Over a 4 hour period, only 20 seconds were on "SQL*Net more data to client".  Time on "SQL*Net more data from client" is much lower at a total time of 5 seconds only.  So "Time based" tuning would ignore these two waits.

Foreground Wait Events

Event                            Waits  %Time -outs  Total Wait Time (s)  Avg wait (ms)  Waits /txn  % DB time
-----------------------------  --------  -----------  -------------------  -------------  ----------  ---------
SQL*Net more data to client      49,701            0                   20              0       37.78       1.50
SQL*Net more data from client   213,915            0                    5              0      162.62       0.34
db file scattered read            1,158            0                    1              1        0.88       0.08
db file sequential read           7,550            0                    1              0        5.74       0.07
SQL*Net message to client       652,102            0                    0              0      495.89       0.04
SQL*Net message from client     652,102            0              183,327            281      495.89
Note that Oracle treats "SQL*Net message from client" as an idle wait, so the 183,327 seconds of wait time do NOT appear in the Top 5 Timed Foreground Events list.

    I would draw attention to the high number of "more data from client" waits and the correlation with the "message from client" waits.  Either extremely large SQL statements or PLSQL blocks are being submitted very frequently or row inserts with very large array sizes (number of rows per insert) are being received.  In this case, further investigation reveals an ETL loader that does bulk inserts of a number of rows per array.  If we need tuning, tuning the SDU may help.
    Similarly the "more data to client" indicates large data sets are being returned.  The numbers of columns and rows per every "send" are high.
Instance Activity Stats (from the same AWR)

Statistic                                            Total       per Second       per Trans
----------------------------------------  ---------------  ---------------  --------------
Requests to/from client                            654,738            45.63          497.90
SQL*Net roundtrips to/from client                  654,740            45.63          497.91
bytes received via SQL*Net from client       1,793,072,463       124,950.54    1,363,553.20
bytes sent via SQL*Net to client               552,048,247        38,469.57      419,808.55
logical read bytes from cache              762,514,227,200    53,135,924.61  579,858,727.91
physical read total bytes                    8,772,479,488       611,311.62    6,671,087.06
physical write total bytes                  25,334,243,328     1,765,420.76   19,265,584.28
redo size                                    6,373,204,848       444,117.79    4,846,543.61
1.793 billion bytes received in 654K SQL*Net trips is 2,741 bytes per trip, received at 45 messages per second.  Given that it is still only 2,741 bytes per trip, possibly the array size could also be tuned with the SDU and TDU.
So, this is an AWR that doesn't call for tuning but reveals information about how the database is being used.  A large number of rows (with a large number of columns) is being inserted and retrieved in each call.  The Performance Analyst needs to be aware of the nature of the "application" --- here it is not OLTP users but an ETL job that is the "application".  Although the database had more than a hundred sessions, a very small number of sessions (possibly 4 ?) were active doing ETL and checking the status of ETL tables during this window.

    Would you care to analyze the other statistics I've listed -- bytes read and bytes written ?



    Extract B : 10.2 AWR
              Snap Id   Snap Time   Sessions   Cursors/Session
Begin Snap:              00:00:07        228               4.5
End Snap:                23:00:06        232               4.3
Elapsed:      1,379.97 (mins)
DB Time:     11,543.24 (mins)
I know.  It is really bad and most likely meaningless to get an AWR for a 24-hour range.  (I'll not go into the details about why the AWR is for 24 hours -- save that I did NOT ask for a 24-hour AWR report).
    Top 5 Timed Events
Event                               Waits   Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
-----------------------------  -----------  --------  ------------  -----------------  -----------
CPU time                                      258,101                            37.3
db file sequential read         62,150,655    208,148             3              30.1  User I/O
db file scattered read          28,242,529    141,638             5              20.4  User I/O
RMAN backup & recovery I/O       1,597,421     37,137            23               5.4  System I/O
enq: TX - row lock contention       22,276     34,942         1,569               5.0  Application

The RMAN backup load is expected.  (Why ?  Given a 24-hour report, I expect RMAN to have run at least once during the day.)  For performance tuning, I would look at the "db file ... read" events and identify and analyze the SQL statements and the schema.
What is the "unusual" event here ?  It is the "enq: TX - row lock contention".  Over a period of 1,380 minutes, there were 22,276 Row-Lock Contention waits.  Actually, this application does not have the same load throughout the 23 hours.  Most likely, it had load for 15 hours only.  So, we had 22,276 Row-Lock Contention waits over 15 hours.  That translates to 1,485 waits per hour or one Row-Lock Contention wait every 2.5 seconds.  Now, that is a very high frequency.  Either users are locking each other out for a very short while (1.569 seconds per wait on average) or there is/are one or more jobs that run at a very high frequency and update a common "reference" table.  I won't reveal my findings here but analysis of the SQL statements indicates what the "problem" is.
Now, should "Time based performance tuning" be worried about the 5% of time lost on these waits ?  Probably not.  But they do indicate something peculiar in the design of this system.  There are less than 250 user sessions in this OLTP database but there is/are one or more jobs that is/are locking itself every 2.5 seconds -- so there is some point of serialisation occurring.  Is that job also accounting for CPU time or 'db file read' time ?  That needs further analysis.
    Both these cases show how a Performance Analyst needs to know how the database is being used.  What sort of jobs are submitted, besides OLTP users ?
    ...


    Categories: DBA Blogs

    Securing Oracle GoldenGate Database Login in a Credential Store

    DBASolved - Fri, 2014-11-07 17:44

Security is always a big deal.  In setting up Oracle GoldenGate, the capture (extract) and apply (replicat) parameter files need to be configured to log in to the database against which they will perform operations.  In order to do this, the Oracle GoldenGate user name and password need to be provided in the parameter files.  Example 1 shows how the database login is traditionally done in an extract or replicat parameter file.

    Example 1:

    --Oracle Login
    USERID ggate, PASSWORD ggate
    

To make this login information more secure, we can create a userid alias that the extract or replicat process can use to log in to the database.  In order to create a login alias, a credential store needs to be created.  Below are the steps to create the credential store and associated aliases.

After logging into the GoldenGate Software Command Interface (GGSCI), a credential store needs to be created.  By default the credential store will be kept in the “dircrd” directory under the $OGG_HOME.

    Create the credential store:

    
    
    GGSCI (db12cgg.acme.com) 1> add credentialstore
    
    
    Credential store created in ./dircrd/.
    
    

With the credential store created, an alias can now be created for the GoldenGate user (ggate).

    
    
    GGSCI (db12cgg.acme.com) 2> alter credentialstore add user ggate, password ggate alias aggate
    
    
    Credential store in ./dircrd/ altered.
    
    

    The extract or replicat parameter files need to be updated to use the new alias.  Once the update is done the associated process needs to be restarted.

    
    --Oracle Login
    
    USERIDALIAS aggate
    
    

    After restarting the process, the Oracle GoldenGate login is secure.

    Note: If the password for the Oracle GoldenGate User changes, the alias in the credential store will need to be updated.

    Enjoy!

    about.me: http://about.me/dbasolved


    Filed under: Golden Gate
    Categories: DBA Blogs

    Partner Webcast – Oracle R Enterprise: Bringing the power of R to the enterprise

For those of you just getting interested in “Big Data” and/or “Advanced Analytics”, the addition of R through Oracle R Enterprise could leave you wondering, "What is R?" R is an...

    Categories: DBA Blogs

    SQL Translation Framework in Oracle Database 12c

    One of the new Oracle Database 12c features is the SQL Translation Framework. A key part of migrating non-Oracle databases to Oracle Database involves the conversion of non-Oracle SQL statements...

    Categories: DBA Blogs

    setting pctfree to 0 for regular indexes ... ( not familiar with this theory ) ... help!?

    Grumpy old DBA - Fri, 2014-11-07 12:31
    One of my coworkers has this theory ( habit ) of creating new regular indexes on regular table columns.  Not compressed indexes or anything special.

He currently believes ( picked up from somewhere / not sure of the origin ) that "if a table only gets inserts and deletes" then any indexes created on that table should be created with pctfree set to 0.
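
Just to illustrate what we are talking about ( table and index names made up ) ... instead of taking the default index pctfree of 10 the create statement would look something like this.

create index orders_status_idx on orders ( status )
pctfree 0;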

    I do not agree.  In my opinion one should stick with the tried and true default values here.  I also am skeptical of this belief.

    It seems to me like a very small amount of potential tangible gain in space usage while causing potential problems down the road.

Even if this theory is somehow true ( don't think so ) ... what happens in the future when something changes in the application and what one was initially told ( only inserts and deletes ) no longer holds?

    Anyone seen / heard /debunked this theory?  Any idea of where it comes from?

    Thanks John   
    Categories: DBA Blogs

    Watch: The Most Underrated Features of SQL Server 2014 — Part 3

    Pythian Group - Fri, 2014-11-07 10:02

    Since its release back in April, SQL Server experts across the globe are becoming familiar with the top features in Microsoft SQL Server 2014—the In-Memory OLTP engine, the AlwaysOn enhancements, and more. But we couldn’t help but notice that there are a few features that aren’t getting the same attention. Warner Chaves, a Microsoft Certified Master and SQL Server Principal Consultant at Pythian has filmed a video series sharing the most underrated features of SQL Server 2014.

In his final video in the series, Warner discusses SQL Server 2014’s parallel implementation of the SELECT INTO statement. In previous versions of SQL Server, it was always a single-threaded operation, using only one CPU—the new feature, however, uses multiple threads. “This makes the operation a lot faster and a lot more efficient,” Warner says. Learn common use cases and how to enable the new feature by watching his video The Most Underrated Features of SQL Server 2014 — Part 3 down below.
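
As a rough sketch of the kind of statement that benefits (table names here are placeholders, and the parallel plan also requires the database compatibility level to be 110 or higher):

-- a large SELECT INTO that SQL Server 2014 can now run on multiple threads
SELECT *
INTO   dbo.SalesHistory_Copy   -- new table, created on the fly
FROM   dbo.SalesHistory;       -- large source table (placeholder name)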

    Watch the rest of the series here:

     

    Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Microsoft SQL Server expertise.

     

    Categories: DBA Blogs

    Log Buffer #396, A Carnival of the Vanities for DBAs

    Pythian Group - Fri, 2014-11-07 08:52

This Log Buffer Edition lists some valuable blog posts from across the arenas of Oracle, SQL Server, and MySQL.

    Oracle:

    OOW Recap: Industrial Innovation with IoT, enabled by Oracle Fusion Middleware and JD Edwards.

    PeopleSoft HCM 9.2 Update Image 9 is Now Available! – Simple and Intuitive User Interface on Multiple Devices.

    API Catalog (OAC) allows organizations to easily build a catalog of their APIs to provide visibility to those APIs for application development.

    Oracle Announces Oracle API Catalog to Simplify API Management.

    Tools For Generating Consistent Loads.

    SQL Server:

    Is Database Mirroring Causing The Transaction Log To Fill Up?

    Free SQL in the City Seminar Washington DC – Dec 5, 2014.

    Stop Relying on Cloud File Stores as a Backup Strategy.

    Stairway to SQL Server Indexes: Level 1, Introduction to Indexes.

    SSRS – Report for Stored Procedure with Parameters.

    MySQL:

    Can you use the binary logs to undo a DELETE? Maybe, if you’re using ROW format logging. The difference between a delete and an insert event could be just one byte – the one that maps it as a DELETE_ROWS_EVENT or a WRITE_ROWS_EVENT.

    Ubuntu 14.10 LAMP server tutorial with Apache 2, PHP 5 and MySQL (MariaDB).

    Resizing the InnoDB Buffer Pool Online.

    Inside Distributed Architectures: Seeing In-Flight Requests And Blockers in Realtime.

    Stopping a Runaway Rollback due to Failing Mass Import or ALTER TABLE.

    Categories: DBA Blogs

    Partner Webcast – Oracle SOA 12c: BPM 12c integration with OEP 12c

    Increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support new integration challenges, such as Cloud, Mobile and Internet of Things....

    Categories: DBA Blogs

    Pythian at LISA14

    Pythian Group - Wed, 2014-11-05 10:27

    Pythian is a sponsor at the LISA conference this year, where we’ll be participating in a panel discussion, invited talk, and a birds-of-a-feather session.

    Bill Fraser, Principal Consultant in Pythian’s SRE practice notes that this year is different for Pythian. “While members of our team have attended LISA in the past, this marks the second year Pythian will be a sponsor for the event, and the first time we have been accepted to speak.” Bill will be speaking at one of the birds-of-a-feather sessions alongside Pierig Le Saux, another Principal Consultant in Pythian’s SRE practice.

    “One of the longest running technical conferences of its kind (this will be the 28th incarnation), the LISA conference is an opportunity to meet, learn from, and network with some of the most respected technical leaders and researchers in the industry,” Bill says. “The conference program features talks, panels, and tutorials on the topics of DevOps, monitoring and metrics, and security, and provides the attendee with an opportunity to learn about the newest tools and emerging technologies in the field.”

    “For Pythian, the conference provides us with an opportunity to give back to the community, by showing our support of the LISA conference and USENIX organization, and allowing us to share the experiences of members of our team. We look forward to seeing you there. Please stick around after our talks and introduce yourself, and / or stop by our booth and say hi!”

     

    Birds-of-a-Feather Session featuring Bill Fraser and Pierig Le Saux
    Wednesday November 12, 2014 — 7:00-8:00 PM
    Grand Ballroom A

    Bill Fraser and Pierig, Principal Consultants for the SRE practice at Pythian, will be discussing what it really means to be practicing Infrastructure as Code. They will provide examples of concepts, tools and workflows, and encourage attendees to engage in a dialog of how their day-to-day work is changing.

     

    Remote Work panel featuring Bill Lincoln
    Thursday November 13, 2014 — 11:00 AM-12:30 PM

    Bill Lincoln, Service Delivery Manager and business advocate at Pythian, will be participating in the Remote Work panel. This panel will focus on how companies handle the remote workers in Ops roles. “When you’re looking for the best talent in the world, remote work is a requirement—not an option,” Bill says. “Finding ways to effectively manage large, remote, teams across the globe is a challenge for any organization and Pythian has built its entire business around this model.” He will be presenting alongside folks from DigitalOcean, TeamSnap, and Etsy.

     

    Invited Talk presented by Chris Stankaitis
    Friday November 14, 2014 — 9:00-9:45 AM

    Chris Stankaitis, Team Lead for the Enterprise Infrastructure Services practice at Pythian, will be presenting an invited talk called Embracing Checklists as a Tool for Human Reliability.

    “A pilot cannot fly a plane, and a surgeon cannot cut into a person without first going through a checklist.” Chris says. “These are some of the most well educated and highly skilled people in the world, and they have embraced the value of checklists as a tool that can dramatically reduce human error.”

     

    EXPO Hall
    Wednesday November 12, 12:00 -7:00 PM
    Thursday November 13, 10:00 AM-2:00 PM

    The LISA EXPO hall opens at noon on Wednesday, so be sure to stop by Pythian’s booth #204 (we’ll be in good company, right next to Google!) You could win a Sonos Play: 1, and all you have to do is take a selfie. Learn the full contest details at our booth, and follow us on Twitter @Pythian to stay updated!

     

    Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Data Infrastructure expertise.

     

    Categories: DBA Blogs

    Watch: The Most Underrated Features of SQL Server 2014 — Part 2

    Pythian Group - Wed, 2014-11-05 10:09

    Since its release back in April, SQL Server experts across the globe are becoming familiar with the top features in Microsoft SQL Server 2014—the In-Memory OLTP engine, the AlwaysOn enhancements, and more. But we couldn’t help but notice that there are a few features that aren’t getting the same attention. Warner Chaves, a Microsoft Certified Master and SQL Server Principal Consultant at Pythian has filmed a video series sharing the most underrated features of SQL Server 2014.

    In the second video in his series, Warner discusses the significance of the new partition operations, making online index rebuilds and incremental statistics much more efficient. “For many clients, it was really hard to find maintenance windows that were big enough to actually rebuild the entire set [of partition tables] when they had fragmentation issues,” said Warner. “2014 now implements online index rebuild on a partition level.” Learn how incremental statistics became more efficient, and when you can start using the new partition operations by watching his video The Most Underrated Features of SQL Server 2014 — Part 2  down below.
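
For a rough idea of what those operations look like in T-SQL (object names and the partition number are placeholders):

-- rebuild a single partition of an index online (partition-level ONLINE rebuild is new in 2014)
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
    REBUILD PARTITION = 5
    WITH (ONLINE = ON);

-- create a statistic that keeps per-partition information ...
CREATE STATISTICS ST_Sales_OrderDate
    ON dbo.Sales (OrderDate)
    WITH INCREMENTAL = ON;

-- ... so that later only the changed partition needs to be rescanned
UPDATE STATISTICS dbo.Sales (ST_Sales_OrderDate)
    WITH RESAMPLE ON PARTITIONS (5);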

    Watch the rest of the series here:

     

    Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Microsoft SQL Server expertise.

     

    Categories: DBA Blogs