
DBA Blogs

Oracle Fusion Middleware EMEA Partner Community Forum 2015

Take this opportunity and register now for the Oracle Fusion Middleware Partner Community Forum XX in Budapest on March 3rd & 4th, 2015, with hands-on training delivered on March...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracleinaction.com in 2014 : A review

Oracle in Action - Wed, 2014-12-31 10:17

RSS content

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 320,000 times in 2014 with an average of 879 page views per day. If it were an exhibit at the Louvre Museum, it would take about 14 days for that many people to see it.

The busiest day of the year was December 1st with 1,656 views. The most popular post that day was ORACLE CHECKPOINTS.

These are the posts that got the most views on ORACLE IN ACTION in 2014.

The blog was visited by readers from 194 countries in all!
Most visitors came from India. The United States & U.K. were not far behind.

Thanks to all the visitors.

Keep visiting and giving your valuable feedback.

Wishing you all a very Happy New Year 2015!




Categories: DBA Blogs

uhesse.com in 2014 – a Review

The Oracle Instructor - Wed, 2014-12-31 09:56
Better than ever

2014 brought a new high water mark with 282,000 hits!

Meanwhile, uhesse.com gets over 1,000 hits per day on average.

Who are the referrers?

The vast majority of visitors came to uhesse.com via google searches: 163,000

In a distant second place is Twitter: 2,500 visitors came from there

Facebook and LinkedIn each led 800 visitors to uhesse.com

400 visitors came from jonathanlewis.wordpress.com – thank you, Jonathan!

Where did uhesse.com refer to?

As a good employee, I sent 1,500 visitors to education.oracle.com and 1,300 to oracle.com :-)

About 600 each went to jonathanlewis.wordpress.com and blog.tanelpoder.com, while richardfoote.wordpress.com got 350 visitors

Nationality of the visitors

They came from almost everywhere, the full list contains 200 countries in total – isn’t the internet a great thing?

Visitors of uhesse.com by nationality

Thank you all for visiting – I hope you come back in 2015 again :-)


Categories: DBA Blogs

What it’s Like to Intern at Pythian

Pythian Group - Tue, 2014-12-30 14:00

About eight months ago I started working as a Global Talent Acquisition Intern here at Pythian. It was my first co-op work placement, and my first experience in a professional work setting. I had never done anything like it before and was definitely nervous yet excited for the opportunity. Like any eager but nervous person, I researched as much as I could to prepare myself for what I'd be doing. I distinctly remember searching the Internet for "What does an intern do?" and was a little unnerved by some of the results that I came across. Being an intern couldn't be that bad now, could it? Surely what I was reading was a collection of misconceptions—people wouldn't lie and exaggerate things on the Internet now, would they? After having worked at Pythian for this long, I can finally put those misconceptions to rest and reveal firsthand what it's really like to be an intern—well, here at Pythian at least.

“You don’t get to choose the people you work with.”

I worked as an intern in Talent Acquisition, the functional department within Human Resources whose main task is to acquire and retain the most important resource for Pythian: its people. Working in TA, I got to see exactly what Pythian meant when it said it hires only the top 5%. For a company that hires talent from all around the world, it takes a special group of people who understand the complexities of knowing not only where to look, but also who to look for in order to hire the right person. As a result, not only are the people being hired exceptionally talented across a vast spectrum of technologies, but the team doing the hiring, Talent Acquisition itself, is amazingly talented at what they do (they even won an award for it this year!).

Whether you’re an intern or a regular employee, there is one thing that both have in common, and it’s that you don’t get to choose the people you work with. As soon as I walked through the doors of the TA office, I knew that I was in good hands. My colleagues respected me and treated me as one of them from the get-go and sometimes even came to me for advice and assistance. It was humbling and a true reflection of the type of people who work at Pythian. I never once was treated like an intern, nor did I ever feel like one. There were no coffee runs that I had to do and there was never a situation where I had to push papers all day. This was nothing like what my Google searches had led me to believe.

“What do I know? I’m only an intern!”

Speaking of pushing papers all day and running coffee errands, many people often get the impression that working as an intern means having to do the work that no one else wants to do. In other words, an intern is simply there to do the mundane. That may be true for internships elsewhere, but here at Pythian the work I was doing was far from mundane—it was both challenging and quite interesting. There were times when I couldn't believe what I was asked to do. Not because I didn't want to do it, but because I couldn't conceive why something so important was being given to me to figure out. What do I know? I'm only an intern! Within the first week I found myself working on projects whose impact would be felt by the entire team. I wasn't being given work for the sake of work to keep me busy; rather, the work I was doing had a sense of tangibility to it. I was able to see the impact before, during and after, and each time I knew that what I was doing was truly going to make a difference in the end.

True to their nature, internships allow for lots of opportunities when it comes to learning. Everywhere you turn in Pythian there is something to learn and someone willing to teach. As a Commerce student, there is only so much you can learn about organizations and how they operate just by sitting in a lecture. At Pythian, I got to experience it. Each and every day I was learning something new, something that I didn’t know before. I had the opportunity to really see the inner workings of a successful organization and understand how all the pieces of the company worked together towards a common purpose. There were also opportunities to participate in workshops with other co-op students on topics that ranged from business acumen to time management. These workshops concluded with a final session where each of us got to present some of the things we had been working on in front of our managers and supervisors. This experience was one of the many highlights of my time at Pythian, as it was a great way to interact and learn from other students like myself.

“Get comfortable at being uncomfortable.”

If there is one thing that I could tell future co-op students or interns here at Pythian it is to not be afraid to leave your comfort zone. Pythian’s CEO at the time, Andrew Waitman, said it best when he said, “get comfortable at being uncomfortable.” When given an opportunity to do something you haven’t done before, don’t think about whether or not you can do it—just go for it. You’ll be surprised what you are capable of when you get past that feeling of uncertainty and doubt. Making mistakes is inevitable, but it is also a necessary step in order to learn and grow. Always ask questions and have an open mind in everything that you do. When all is said and done, you will leave Pythian smarter, more confident, and more prepared for anything that comes your way.

Categories: DBA Blogs

Watch: Riak vs. Oracle

Pythian Group - Tue, 2014-12-30 11:58

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

Riak and Oracle are completely different platforms. Alex explains that “Oracle database is a very feature-rich platform,” while Riak “…is a very simple model… with very minimalistic features.” Riak is excellent for a public cloud infrastructure because it provides elasticity and scalability on demand. Learn about more use cases from Alex’s video Riak vs. Oracle.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Calculating Business Days in HiveQL

Pythian Group - Tue, 2014-12-30 09:07

One of the common tasks in data processing is to calculate the number of days between two given dates. You can easily achieve this by using the Hive DATEDIFF function. You can also get the weekday number by using this more obscure function:

SELECT FROM_UNIXTIME(your_date,'u') FROM some_table;

This will return 1 for Monday, 7 for Sunday and is based on Java SimpleDateFormat —
http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html
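
For illustration, here is how those two built-ins can be combined, reusing the placeholder names from above (some_table, your_date) plus hypothetical start_dt and end_dt date-string columns; this is just a sketch, not code from the post:

-- Calendar days between two date strings (end date minus start date), plus a
-- weekend/business-day flag per row (1 = Monday ... 7 = Sunday):
SELECT DATEDIFF(end_dt, start_dt)                            AS calendar_days,
       CASE WHEN FROM_UNIXTIME(your_date, 'u') IN ('6', '7')
            THEN 'weekend' ELSE 'business day' END           AS day_type
FROM some_table;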

It becomes more challenging if you need to calculate the number of business days (excluding Saturdays and Sundays) between two dates. There is no built-in function in Hive to perform such calculation, but fortunately it is rather simple to write your own Hive UDFs. I couldn’t quickly find any existing open source functions to solve this problem, so I wrote my own using Scala — https://github.com/pythian/hive-udfs

There are actually three functions in the hive-udfs package: CountBusinessDays, CountSaturdays and CountSundays. These functions accept a start date and an end date as UNIX_TIMESTAMP values and return the count of the corresponding type of full day in that interval, excluding the start and end points.

Here is how you can use this UDF in Hive:

ADD JAR hdfs:///user/hive/udfs/pythian-hive-udfs-assembly-0.1.jar;
CREATE TEMPORARY FUNCTION count_business_days AS 'com.pythian.udf.CountBusinessDays';
CREATE TEMPORARY FUNCTION count_saturdays AS 'com.pythian.udf.CountSaturdays';
CREATE TEMPORARY FUNCTION count_sundays AS 'com.pythian.udf.CountSundays';

SELECT count_business_days(UNIX_TIMESTAMP(start_date), UNIX_TIMESTAMP(end_date)) FROM some_table;

The code is open source and free to use. Comments and suggestions are always welcome.

Categories: DBA Blogs

How to Migrate a Database Using GoldenGate

Pythian Group - Tue, 2014-12-30 09:06

There are many ways to migrate a database from server A to server B, such as Data Pump, RMAN, etc. Using the combination of Data Pump and GoldenGate to migrate your database cross-platform can minimize your downtime to as little as three minutes.

This method can be used for a database of any size, from the MB to the TB level. Here is a simple example to demonstrate the idea.

Prerequisites: I assume that GoldenGate has already been configured on both the source and target databases. To simulate an OLTP database, my source database “SOURCE” has a job that keeps inserting records into the table HOWIE.TEST, as shown below.

CREATE PROCEDURE howie.insert_test
IS
BEGIN
   insert into test values(test_seq.nextval,sysdate);
   commit;
END;
/ 

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11
         
		 .....................
           
        21 12/15/2014 19:14:25
        22 12/15/2014 19:15:25
        23 12/15/2014 19:16:25

23 rows selected.

2nd step, start the capture process on the source database and stop the replicat process on the target database

SOURCE:

GGSCI (11gGG1) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        44:56:46      00:00:01

TARGET:

GGSCI (11gGG2) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
REPLICAT    STOPPED     REP1        00:00:00      00:00:53
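
For context, a minimal extract/replicat parameter pair for this kind of setup might look roughly like the following. This is only a sketch: the trail location, credentials and mappings are assumptions, not the actual configuration used in this post.

-- ext1.prm (source)
EXTRACT ext1
USERID ggadmin, PASSWORD <password>
EXTTRAIL ./dirdat/aa
TABLE HOWIE.*;

-- rep1.prm (target)
REPLICAT rep1
USERID ggadmin, PASSWORD <password>
ASSUMETARGETDEFS
MAP HOWIE.*, TARGET HOWIE.*;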

3rd step, export the source database using Data Pump with flashback_scn

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     284867
	 
[ggadmin@11gGG1 11.2.0]$ expdp directory=DATA_PUMP_DIR dumpfile=source.dmp logfile=source.log schemas=HOWIE flashback_scn=284867




4th step, transfer the dumpfile to the target server

[ggadmin@11gGG1 11.2.0]$ scp /u01/app/oracle/admin/SOURCE/dpdump/source.dmp 11gGG2:/u01/app/oracle/admin/TARGET/dpdump/

5th step, import the dumpfile into the target database.

[ggadmin@11gGG2 11.2.0]$ impdp directory=DATA_PUMP_DIR dumpfile=source.dmp logfile=source.log schemas=HOWIE

6th step, verify the data in the target database

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11

		 ...............

    	21 12/15/2014 19:14:25
        22 12/15/2014 19:15:25
        23 12/15/2014 19:16:25

23 rows selected.

7th step, start the replicat process on the target database using ATCSN

GGSCI (11gGG2) 8> start rep rep1 atcsn 284867

Sending START request to MANAGER ...

8th step, confirm the data has been synced

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11
         4 12/13/2014 21:44:33
         5 12/13/2014 21:45:33
         6 12/13/2014 21:46:33
         7 12/13/2014 21:47:33

		 ...............

        60 12/15/2014 19:53:33
        61 12/15/2014 19:54:33
        62 12/15/2014 19:55:33
        63 12/15/2014 19:56:33

63 rows selected.

Action Plan Summary

1. Source: Configure GoldenGate capture processes. Target: Configure GoldenGate replicat processes.
2. Source: Start the capture processes. Target: Do not start replicat yet.
3. Source: Start the export from the source database (mark the SCN when the export started).
4. Source: Export completed; start the SCP of the dumpfile to the target server.
5. Source: SCP completed. Target: Start the import on the target database using the dumpfile.
6. Target: Import finished.
7. Target: Start replicat using ATCSN.
8. Target: Replicat applies all changes.
9. Source: When lag is zero for capture, stop capture. Target: Wait until replicat has applied all changes (lag should be zero for replicat), then stop replicat.
10. Source and Target: Redirect the database connection point to the target database.
Categories: DBA Blogs

Merging Apps Patches in Oracle EBS R12.2

Pythian Group - Tue, 2014-12-30 09:04

It’s public knowledge that the traditional patching tool in Oracle EBS, “adpatch”, has been replaced with the “adop” utility. It’s also known that the adop utility automatically merges patches when more than one patch is specified in the command line arguments. So what’s the need for a blog post on merging patches when it’s taken care of automatically?

This blog is for people who like to dig a little deeper into EBS to shave off some downtime during upgrades. We save some downtime if we merge the patches ahead of time instead of letting adop do it during the upgrade window. This is especially true when you are applying big patches like 12.2.4.

Merging patches is done using the same utility as in earlier versions, “admrgpch”, except that a few extra steps are needed after merging the patches.

In EBS 12.2, after merging the patches using admrgpch, we also need to copy the actual unzipped patches that we merged into the destination directory. This is required because the adop utility seems to look for these patches during the prepare phase. If you don’t copy the unzipped patch directories, you can still apply the patches, but when you run adop phase=prepare during the next patching cycle it will fail, as it will look for the actual patch directories inside the merged patch directory.

Here is how a sample merging procedure looks in EBS R12.2:

# merge patches 111111 & 222222
$ pwd
  /u01/EBS/fs_ne/EBSapps/patch
$ ls
  111111 222222 
$ mkdir dest
$ admrgpch -s /u01/EBS/fs_ne/EBSapps/patch -d /u01/EBS/fs_ne/EBSapps/patch/dest
$ cd dest
$ pwd
  /u01/EBS/fs_ne/EBSapps/patch/dest
$ ls
  fnd u_merged.drv 

# After admrgpch is finished, we need to copy patch directories into the dest dir

$ cd ..
$ mv 111111 /u01/EBS/fs_ne/EBSapps/patch/dest
$ mv 222222 /u01/EBS/fs_ne/EBSapps/patch/dest
$ cd /u01/EBS/fs_ne/EBSapps/patch/dest
$ ls
  111111 222222 fnd u_merged.drv

# Now you can apply the patches using adop phase=apply
Categories: DBA Blogs

Log Buffer #403, A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2014-12-30 09:02

As 2014 draws to its end, this Log Buffer edition looks back proudly at some of this week's blog posts covering what's happening in and around the database field.

Oracle:

Fusion Applications provides web services that allow external systems to integrate with Fusion Applications.

OEM 12c Release 4 has several new EM CLI verbs, including manage_agent_partnership.

To reflect the Oracle Retail enterprise applications newest code base, the 14.1 release of the Oracle Retail application enterprise includes new End User documents, considerable updates to existing End User documentation sets, and a wide range of new White Papers and Technical Papers.

If you programmatically change data in your Oracle MAF application then you need to ensure the UI reflects those data changes.

Configuring MDS Customisation Layer and Layer Value Combination in ADF.

SQL Server:

15 Quick Short Interview Questions Useful When Hiring SQL Developers.

Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson exposes the DAX SUM() and SUMX() functions, comparing and contrasting the two.

SQL Server Data Aggregation for Data with Different Sampling Rates.

SSRS continues to use SET FMTONLY ON even though it has many problems. How can we cope?

When a hospital’s mission-critical database fails at Christmas, disaster for the hospital – and its hapless DBA – seems certain.

MySQL:

Does your dataset consist of InnoDB tables with large BLOB data such that the data is stored in external BLOB pages?

InnoDB crash recovery speed in MySQL 5.6.

Somebody wanted to know how to find any non-unique indexes in information_schema of the MySQL.

What is a data type?

File carving methods for the MySQL DBA.

Categories: DBA Blogs

Can A Background Process Impact A Foreground Process And Its Database Time?

Have you ever heard someone say, "Background processes do not impact foreground processes because they run in the background and in parallel with foreground processes"? I've heard this hundreds of times!

While doing some performance research I came across a great example of how an Oracle Database background process can directly and significantly impact a foreground process.

The above quote represents a masterfully constructed lie; it contains both a lie and a truth. The mix of a truth and a lie makes understanding the reality of the situation difficult. In this post, I'll explain the truth, delve into the lie and relate it all to foreground process database time.

By the way, I am in no way saying there is something wrong with or incorrect about DB Time. I want to ensure this is clear from the very beginning of this post.

Just so there is no confusion, an Oracle foreground process is sometimes also called a server process or a shadow process. These terms are used interchangeably in this post.

The Truth
Clearly background and foreground processes operate in parallel. I don't think any DBA would deny this. As I frequently say, "serialization is death and parallelism is life!" A simple "ps" command will visually show both Oracle background and foreground processes at work. But this in no way implies they do not impact each other's activity and performance.

In fact, we hope they do impact each other! Can you imagine what performance would be with the background processes NOT running in parallel?! What a performance nightmare that would be. But this is where the "no impact" lie lives.

The Lie
Most senior DBAs can point to a specific situation where Oracle cache buffer chain latch contention affected multiple foreground sessions. In this situation, foreground sessions were frantically trying to acquire a popular cache buffer chain latch. But that is a foreground session versus foreground session situation. While this example is important, this post is about when a background process impacts a foreground process.

Have you ever committed a transaction and had it hang while the foreground process waits on the "log file switch (checkpoint incomplete)" or, even worse, the "log file switch (archiving needed)" event? All the foreground process knows is that its statement can't finish because a required log switch has not occurred because a checkpoint is incomplete. What the server process does not know is that the checkpoint (CKPT), the database writer (DBWR) and the log writer (LGWR) background processes are involved. There is a good chance the database writer is frantically writing dirty buffers to the database (dbf) files so the LGWR can safely overwrite the associated redo in the next online redo log.

For example, if a server process issues a commit during the checkpoint, it will wait until the checkpoint is complete and the log writer has switched and can write into the next redo log. So, while the log writer background process is probably waiting on "log file parallel write" and the database writer is burning CPU and waiting on "db file parallel write", the foreground processes are effectively hung.

This is a classic example of how a background process can impact the performance of a foreground process.
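
If you want a quick idea of how much time your own system has lost to these waits, a simple query against the system-wide wait interface gives a first indication (a minimal sketch; the figures are cumulative since instance startup):

SELECT event, total_waits, time_waited_micro/1000000 AS time_waited_s
FROM   v$system_event
WHERE  event LIKE 'log file switch%'
ORDER  BY time_waited_micro DESC;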

A Demonstration Of The Lie
Here's a quick demonstration of the above situation. On an existing database in my lab, I created two 4MB redo logs and dropped all the other redo logs. I started a DML-intensive workload. According to the alert.log file, the redo logs were switching every couple of seconds! Take a look at this:
$ tail -f /home/oracle/base/diag/rdbms/prod30/prod30/trace/alert*log
Thread 1 cannot allocate new log, sequence 2365
Checkpoint not complete
Current log# 4 seq# 2364 mem# 0: /home/oradata/prod30/redoA1.log
Mon Dec 29 11:02:09 2014
Thread 1 advanced to log sequence 2365 (LGWR switch)
Current log# 5 seq# 2365 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 cannot allocate new log, sequence 2366
Checkpoint not complete
Current log# 5 seq# 2365 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 advanced to log sequence 2366 (LGWR switch)
Current log# 4 seq# 2366 mem# 0: /home/oradata/prod30/redoA1.log
Thread 1 cannot allocate new log, sequence 2367
Checkpoint not complete
Current log# 4 seq# 2366 mem# 0: /home/oradata/prod30/redoA1.log
Thread 1 advanced to log sequence 2367 (LGWR switch)
Current log# 5 seq# 2367 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 cannot allocate new log, sequence 2368
Checkpoint not complete
Current log# 5 seq# 2367 mem# 0: /home/oradata/prod30/redoA2.log
Mon Dec 29 11:02:20 2014

Obviously not what you want to see on a production Oracle system! (But my guess is many of you have.)

Using my OSM realtime session sampler tool (rss.sql - related blog posting HERE) I sampled the log writer every half second. (There is only one log writer background process because this is an Oracle 11g database, not an Oracle Database 12c system.) If the log writer session showed up in v$session as an active session, it would be picked up by rss.sql. Both "ON CPU" and "WAIT" states are collected. Here is a sample of the output.


It's very obvious the log writer is doing some writing. But we can't tell from the above output if the process is impacting other sessions. It would also have been very interesting to sample the database writer, but I didn't do that. To determine if the background processes are impacting other sessions, I needed to find a foreground session that was doing some commits. I noticed that session 133, a foreground process, was busy doing some DML and committing as it processed its work. Just as with the log writer background process, I sampled this foreground process once every 0.5 second. Here's a sample of the output.


Wow. The foreground process is waiting a lot for the current checkpoint to be completed! So... this means the foreground process is being effectively halted until the background processes involved with the checkpoint have finished their work.
This is a great example of how Oracle background processes can impact the performance of an Oracle foreground process.

But let's be clear. Without the background processes, performance would be even worse. Why? Because all that work done in parallel and in the background would have to be done by each foreground process AND all that work would have to be closely controlled and coordinated. And that would be a performance nightmare!
DB Time Impact On The Foreground Process
Just for the fun of it, I wrote a script to investigate DB Time, CPU consumption, non-idle wait time, and the wait time for the "log file switch (checkpoint incomplete)" wait event for the foreground process mentioned above (session 133). The script simply gathers some session details, sleeps for 120 seconds, gathers the session details again, calculates the differences and displays the results. You can download the script HERE. Below is the output for the foreground process, session 133.
SQL> @ckpttest.sql 133

Table dropped.

Table created.

PL/SQL procedure successfully completed.

CPU_S_DELTA NIW_S_DELTA DB_TIME_S_DELTA CHECK_IMPL_WAIT_S
----------- ----------- --------------- -----------------
      2.362      117.71      119.973692            112.42

1 row selected.

Here is a quick description of the output columns.

  • CPU_S_DELTA is the CPU seconds consumed by session 133, which is the time model statistic DB CPU.
  • NIW_S_DELTA is the non-idle wait time for session 133, in seconds.
  • DB_TIME_S_DELTA is the DB Time statistic for session 133, which is the time model statistic DB Time.
  • CHECK_IMPL_WAIT_S is the wait time only for event "log file switch (checkpoint incomplete)" for session 133, in seconds.
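
The script itself can be downloaded from the link above. As a rough sketch of where such numbers come from, the per-session figures can be sampled from the time model and session event views like this (sample once, sleep 120 seconds, sample again, and subtract the two results; session 133 is simply the SID used in this example):

SELECT (SELECT value/1000000 FROM v$sess_time_model
         WHERE sid = 133 AND stat_name = 'DB CPU')                   AS cpu_s,
       (SELECT SUM(time_waited_micro)/1000000 FROM v$session_event
         WHERE sid = 133 AND wait_class != 'Idle')                   AS niw_s,
       (SELECT value/1000000 FROM v$sess_time_model
         WHERE sid = 133 AND stat_name = 'DB time')                  AS db_time_s,
       (SELECT time_waited_micro/1000000 FROM v$session_event
         WHERE sid = 133
           AND event = 'log file switch (checkpoint incomplete)')    AS check_impl_wait_s
FROM   dual;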

Does the time fit together as we expect? The "log file switch..." wait time is part of the non-idle wait time. The DB Time total is very close to the CPU time plus the non-idle wait time. Everything seems to add up nicely.

To summarize: Oracle background processes directly impacted the database time for a foreground process.

In Conclusion...
First, Oracle foreground and background processes certainly impact each other... by design, for increased performance. Sometimes on real production Oracle Database systems things get messy, and work that we hoped would be done in parallel must become momentarily serialized. The log file switch situation above is an example of this.

Second, the next time someone tells you that an Oracle background process does not impact the performance of a foreground process, ask them if they have experienced a "log file switch (checkpoint incomplete)" situation. Pause until they say, "Yes." Then just look at them and don't say a word. After a few seconds you may see an "oh... I get it" look on their face. But if not, simply point them to this post.

Thanks for reading and enjoy your work!

Craig.




Categories: DBA Blogs

OT: On a musical note for 2014's year ending

Grumpy old DBA - Fri, 2014-12-26 12:32
There have been some really strong albums that impressed me this year.  I tend to like loud rock stuff but do mix it up somewhat.  What follows is just some ramblings:

The Drive-By Truckers have a tremendous album out, "English Oceans"; if you like rock, this is a no-brainer.

Jolie Holland's "Wine Dark Sea" album is stunningly magnificent. It varies quite a bit: it rocks out, blues it out, and then just charms you at times.

Taylor Swift knocked it out of the ballpark with 1989.

Also digging the new one by Lana Del Rey.

Discovered one that I should have known about a long time ago: Gov't Mule's album "Live... With a Little Help from Our Friends" is straight-out loud, good rock and roll, well put together.

My latest addition, which I am just listening to now, is the Delphines' "Colfax." This one strays fairly close to some kind of cross between rock and roll and folk/country, but seems like a well-put-together, sophisticated album that is a little slower paced than many.


Categories: DBA Blogs

Happy holidays from the grumpy old dba!

Grumpy old DBA - Thu, 2014-12-25 08:57
Best wishes for everyone heading into 2015!

I am looking forward to RMOUG Training Days 2015 (speaking there), while the planning for our conference here in Cleveland, GLOC 2015, kicks into high gear.

Our conference's call for abstracts is open now, and conference registration is also open. For us, registrations typically don't start rolling in big time until the March timeframe.

Please consider submitting a presentation abstract!

GLOC 2015 speaker application

GLOC 2015 conference registration ( May 18-20 2015 )
Categories: DBA Blogs

Packt - The $5 eBook Bonanza is here!

Surachart Opun - Tue, 2014-12-23 00:38
The $5 eBook Bonanza is here! Spreading the news for people who are interested in reading IT books: you will be able to get any Packt eBook or video for just $5 until January 6th, 2015.

Written by: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Is Oracle Database Time Correct? Something Is Not Quite Right.

Oracle Database tuning and performance analysis is usually based on time. As I blogged HERE, the Oracle "database time" statistic is more interesting than simply "time spent in the database." It is the sum of CPU consumption and non-idle wait time. And Elapsed Time is the sum of all the database time related to perhaps a session or a SQL statement execution. However...
If you do the fundamental math, you'll notice the numbers don't always add up. In fact, they rarely match. In this posting, I want to demonstrate this mismatch and I want you to see this on your systems!

I'll include experimental data from a number of different systems, the statistical analysis (including pictures) and provide a tool you can download for free from OraPub.com to check out the situation on your systems.
Checking DB Time Math
DB Time is defined as "time spent in the database." This is the sum of Oracle process CPU consumption plus non-idle wait time. Usually we don't derive DB Time. The Time Model Statistics view v$sys_time_model contains the DB Time statistic. But this view also contains the DB CPU statistic. Since there is no sum of non-idle wait time, most people just assume everything is fine.
However, if you run the simple query below on your system, it could look something like this:
SQL> l
1 select db_time_s, db_cpu_s, tot_ni_wt_s
2 from (select value/1000000 db_time_s from v$sys_time_model where stat_name = 'DB time' ),
3 (select value/1000000 db_cpu_s from v$sys_time_model where stat_name = 'DB CPU' ),
4* (select sum(TIME_WAITED_MICRO_FG)/1000000 tot_ni_wt_s from v$system_event where wait_class != 'Idle' )
SQL> /

DB_TIME_S DB_CPU_S TOT_NI_WT_S
---------- ---------- -----------
330165.527 231403.925 119942.952

1 row selected.
If you add up the DB CPU and the total non-idle wait time, the value is 351,346.877. Woops! 351K does not equal 330K. What happened on my Oracle Database 12c (12.1.0.2.0)? As I have demonstrated in this POSTING (which contains videos of this) and in my online seminar training HERE, many times DB Time does nearly equal DB CPU plus the non-idle wait time. But clearly in the above situation something does not seem quite right.

Checking DB Time On Your Systems
To demonstrate the possibility of a DB Time mismatch, I created a simple plsql tool. You can download this free tool or do an OraPub.com search for "db time tool". The tool, which is easily configurable, takes a number of samples over a period of time and displays the output.


Here is an example of the output.

OraPub DB Time Test v1a 26-Sep-2014. Enjoy but use at your own risk.
.
Starting to collect 11 180 second samples now...
All displayed times are in seconds.
.
anonymer Block abgeschlossen
..........................................................................
... RAW OUTPUT (keep the output for your records and analysis)
..........................................................................
.
sample#, db_time_delta_v , db_cpu_delta_v, tot_ni_wait_delta_v, derived_db_time_delta_v, diff_v, diff_pct_v
.
1, 128,4, 128,254, ,103, 128,357266, ,043, 0
2, 22,014, 3,883, 17,731, 21,614215, ,399, 1,8
3, 1,625, 1,251, ,003, 1,253703, ,371, 22,8
4, 13,967, 12,719, 1,476, 14,194999, -,228, -1,6
5, 41,086, 41,259, ,228, 41,486482, -,4, -1
6, 36,872, 36,466, ,127, 36,593884, ,278, ,8
7, 38,545, 38,71, ,137, 38,847459, -,303, -,8
8, 37,264, 37,341, ,122, 37,463525, -,199, -,5
9, 22,818, 22,866, ,102, 22,967141, -,149, -,7
10, 30,985, 30,614, ,109, 30,723831, ,261, ,8
11, 5,795, 5,445, ,513, 5,958586, -,164, -2,8
.
The test is complete.
.
All displayed times are in seconds.

The output is formatted to make it easy to analyze statistically. The far right column is the percent difference between the reported DB Time and the calculated DB Time. In the above example, they are pretty close. Get the tool and try it out on your systems.

Some Actual Examples
I want to quickly show you four examples from a variety of systems. You can download all the data in the "analysis pack" HERE. The data, for each of the four systems, contains the raw DB Time Test output (like in the section above), the statistical numeric analysis output from the statistical package "R", the actual "R" script and the visual analysis using "smooth histograms" also created using "R."

Below is the statistical numeric summary:


About the columns: Only the "craig" system is mine; the others are real production or DEV/QA systems. The statistical columns all reference the far right column of the DB Time Test Tool's output, which is the percent difference between the reported DB Time and the calculated DB Time. Each sample set consists of eleven 180-second samples. A P-Value greater than 0.05 means the reported and calculated DB Time differences are normally distributed. This is not important in this analysis, but gives me clues if there is a problem with the data collection.

As you can easily see, for two of the systems the DB Time difference is greater than 10%, and for one of them it was over 20%. The data collected shows that something is not quite right... but that's about it.

What Does This Mean In Our Work?
Clearly something is not quite right. There are a number of possible reasons, and this will be the focus of my next few articles.

However, I want to say that even though the numbers don't match perfectly and sometimes they are way off, this does not negate the value of a time-based analysis. Remember, we are not trying to land a man on the moon. We diagnose performance to derive solutions that (usually) aim to reduce the database time. I suspect that in all four cases I show, we would not be misled.

But this does highlight the requirement to also analyze performance from a non-Oracle-database-centric perspective. I always look at the performance situation from an operating system perspective, an Oracle-centric perspective and an application (think: SQL, processes, user experience, etc.) perspective. This "3 Circle" analysis will reduce the likelihood of making a tuning diagnosis mistake. So in case DB Time is completely messed up, by diagnosing performance from the other two "circles" you will know something is not right.

If you want to learn more about my "3-Circle" analysis, here are two resources:
  1. Paper. Total Performance Management. Do an OraPub search for "3 circle" and you'll find it.
  2. Online Seminar: Tuning Oracle Using An AWR Report. I go DEEP into an Oracle Time Based Analysis but keeping it day-to-day production system practical.
In my next few articles I will drill down into why there can be a "DB Time mismatch," what to do about it and how to use this knowledge to our advantage.

Enjoy your work! There is nothing quite like analyzing performance and tuning Oracle database systems!!

Craig.





Categories: DBA Blogs

OLTP type 64 compression and ‘enq: TX – allocate ITL entry’ on Exadata

Pythian Group - Mon, 2014-12-22 11:33

Recently we've seen a strange problem with deadlocks at a client database on Exadata, Oracle version 11.2.0.4. Wait event analysis showed that sessions were waiting on the "enq: TX – allocate ITL entry" event. It was strange because there are at most two sessions making DMLs, and at least two ITL slots are available in the affected tables' blocks. I made some block dumps and found that the affected blocks contain OLTP-compressed data, Compression Type = 64 (DBMS_COMPRESSION Constants – Compression Types). The table actually has the "compress for query high" attribute, but direct path inserts have never been used, so I wasn't expecting any compressed data here. Compression Type 64 is a very specific type: Oracle migrates data out of HCC compression units into Type 64 compression blocks when HCC compressed data is updated. We made some tests and were able to reproduce Type 64 compression without direct path operations. Here is one of the test cases. An MSSM tablespace has been used, but the problem is reproducible with ASSM too.

create table z_tst(num number, rn number, name varchar2(200)) compress for query high partition by list(num)
(
partition p1 values(1),
partition p2 values(2));

Table created.

insert into z_tst select mod(rownum , 2) + 1, rownum, lpad('1',20,'a') from dual connect by level <= 2000;

2000 rows created.

commit;

Commit complete.

select dbms_compression.get_compression_type(user, 'Z_TST', rowid) comp, count(*)  cnt from Z_tst
group by dbms_compression.get_compression_type(user, 'Z_TST', rowid);

      COMP        CNT
---------- ----------
        64       2000

select  dbms_rowid.rowid_block_number(rowid) blockno, count(*) cnt from z_tst a
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO        CNT
---------- ----------
      3586        321
      2561        679
      3585        679
      2562        321

select name, value from v$mystat a, v$statname b where a.statistic# = b.statistic# and lower(name) like '%compress%' and value != 0;

NAME                                                    VALUE
-------------------------------------------------- ----------
heap block compress                                        14
HSC OLTP Compressed Blocks                                  4
HSC Compressed Segment Block Changes                     2014
HSC OLTP Non Compressible Blocks                            2
HSC OLTP positive compression                              14
HSC OLTP inline compression                                14
EHCC Block Compressions                                     4
EHCC Attempted Block Compressions                          14

alter system dump datafile 16 block min 2561 block max 2561;

We can see that all rows are compressed with compression type 64. From the session statistics we can see that HCC had been in place before the data was migrated into OLTP compressed blocks. I think this is not expected behavior and there should not be any compression involved at all. Let's take a look at the block dump:

Block header dump:  0x04000a01
 Object id on Block? Y
 seg/obj: 0x6bfdc  csc: 0x06.f5ff8a1  itc: 2  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0055.018.0002cd54  0x00007641.5117.2f  --U-  679  fsc 0x0000.0f5ffb9a
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x04000a01
data_block_dump,data header at 0x7fbb48919a5c
===============
tsiz: 0x1fa0
hsiz: 0x578
pbl: 0x7fbb48919a5c
     76543210
flag=-0----X-
ntab=2
nrow=680
frre=-1
fsbo=0x578
fseo=0x5b0
avsp=0x6
tosp=0x6
        r0_9ir2=0x1
        mec_kdbh9ir2=0x1
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-LNOC      Archive compression: N
                fcls_9ir2[3]={ 0 32768 32768 }
                perm_9ir2[3]={ 0 2 1 }

It's a bit odd that avsp (available space) and tosp (total space) = 6 bytes. So there is no free space in the block at all, but I was expecting to see the default 10% pctfree here since it's OLTP compression.
Let’s try to update two different rows in the same type 64 compressed block:

select rn from z_tst where DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) = 3586 and rownum <= 4;

        RN
----------
      1360
      1362
      1364
      1366
From the first session:
update z_tst set name = 'a' where rn = 1360;
From the second:
update z_tst set name = 'a' where rn = 1362;
-- waiting here

The second session waits on the "enq: TX – allocate ITL entry" event.

Summary

In some cases, HCC and subsequent OLTP type 64 compression can take place even without direct path operations (probably a bug).

An OLTP type 64 compressed block, in contrast to a regular OLTP-compressed block, can have no free space after data load.

In case of DML operations, the whole type 64 compressed block gets locked (probably a bug).

It is better not to set HCC attributes on segments until you actually perform the HCC compression operation.
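
As a quick way to see which partitions of the test table still carry an HCC attribute, and to clear it, something like the following can be used (a sketch based on the test case above; clearing the attribute only affects future loads, it does not decompress existing blocks):

-- Which partitions carry an HCC attribute?
SELECT partition_name, compression, compress_for
FROM   dba_tab_partitions
WHERE  table_owner = USER AND table_name = 'Z_TST';

-- Clear the attribute so future loads are not HCC candidates:
ALTER TABLE z_tst MODIFY PARTITION p1 NOCOMPRESS;
ALTER TABLE z_tst MODIFY PARTITION p2 NOCOMPRESS;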

 

Categories: DBA Blogs

Watch: HBase vs. Cassandra

Pythian Group - Mon, 2014-12-22 09:45

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“When we look at HBase and Cassandra, they can look very similar,” Alex says. “They’re both part of the NoSQL ecosystem.” Although they’re capable of handling very similar workloads, Alex explains that there are also quite a few differences. “Cassandra is designed from the ground up to handle very high, concurrent, write-intensive workloads.” HBase on the other hand, has its limitations in scalability, and may require a bit more thinking to achieve the same quality of service, Alex explains. Watch his video HBase vs. Cassandra for specific use cases.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
This Log Buffer edition hits the ball out of the park, smashing yet another record by surfacing a unique collection of blog posts from various database technologies. Enjoy!

Oracle:

EM12c and the Optimizer Statistics Console.
SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle 12.1.0.2 Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

Exadata Patching Introduction

The Oracle Instructor - Sat, 2014-12-20 10:24

These are what I consider the most important points about Exadata patching:

Where is the most recent information?

MOS Note 888828.1 is your first read whenever you think about Exadata Patching

What is to patch with which utility?

Exadata Patching

Expect quarterly bundle patches for the storage servers and the compute nodes. The other components (InfiniBand switches, Cisco Ethernet switch, PDUs) are patched less frequently and are therefore not in the picture.

The storage servers have their software image (which includes firmware, OS and Exadata software) exchanged completely for the new one using patchmgr. The compute nodes get OS (and firmware) updates with dbnodeupdate.sh, a tool that accesses an Exadata yum repository. Bundle patches for the Grid Infrastructure and for the Database software are applied with opatch.
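
To give a feel for the tooling, typical invocations look roughly like the ones below. Treat this as a sketch only: the exact options, staging paths and repository locations depend on the patch version, so always follow MOS Note 888828.1 and the patch READMEs.

# Storage servers, driven from one node with a cell_group file listing the cells:
./patchmgr -cells cell_group -patch_check_prereq -rolling
./patchmgr -cells cell_group -patch -rolling

# Compute node OS (and firmware) update; the repository URL is an example:
./dbnodeupdate.sh -u -l http://yumrepo.example.com/exadata/dbserver/base/x86_64/

# Grid Infrastructure / Database bundle patch (paths are examples):
opatch auto /u01/stage/bundle_patch -oh /u01/app/11.2.0/grid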

Rolling or non-rolling?

This is the sensitive part! Technically, you can always apply the patches for the storage servers and the patches for the compute node OS and Grid Infrastructure in a rolling fashion, taking down only one server at a time. The RAC databases running on the Database Machine will be available during the patching. But should you do that?

Let's focus on the storage servers first: rolling patches are recommended only if you have ASM diskgroups with high redundancy or if you have a standby site to fail over to in case of trouble. In other words: if you have a quarter rack without a standby site, don't use rolling patches! That is because the DBFS_DG diskgroup that contains the voting disks cannot have high redundancy in a quarter rack with just three storage servers.
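
A quick way to check this is to look at the diskgroup redundancy in ASM; TYPE shows EXTERN, NORMAL or HIGH:

SELECT name, type FROM v$asm_diskgroup;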

Okay, so you have a half rack or bigger. Expect one storage server patch to take about two hours. That adds up to 14 hours of patching time (for seven storage servers) with the rolling method. Make sure that management is aware of that before they decide on the strategy.

Now to the compute nodes: If the patch is RAC rolling applicable, you can do that regardless of the ASM diskgroup redundancy. If a compute node gets damaged during the rolling upgrade, no data loss will happen. On a quarter rack without a standby site, you put availability at risk because only two compute nodes are there and one could fail while the other is just down.

Why you will want to have a Data Guard Standby Site

Apart from the obvious reason for Data Guard – Disaster Recovery – there are several benefits associated to the patching strategy:

You can afford to do rolling patches with ASM diskgroups using normal redundancy and with RAC clusters that have only two nodes.

You can apply the patches on the standby site first and test it there – using the snapshot standby database functionality (and using Database Replay if you licensed Real Application Testing)

A patch set can be applied on the standby first and the downtime for end users can be reduced to the time it takes to do a switchover

A release upgrade can be done with a (Transient) Logical Standby, reducing again the downtime to the time it takes to do a switchover

I suppose this will be my last posting in 2014, so Happy Holidays and a Happy New Year to all of you :-)


Tagged: exadata
Categories: DBA Blogs

Log Buffer #402, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park, smashing yet another record by surfacing a unique collection of blog posts from various database technologies. Enjoy!

Oracle:

EM12c and the Optimizer Statistics Console.

SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle 12.1.0.2 Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

Season's Greetings from the Oracle ISV Migration Center Team

WISHING YOU MUCH SUCCESS IN THE YEAR AHEAD. THANK YOU FOR YOUR CONTINUED PARTNERSHIP. ...

We share our skills to maximize your revenue!
Categories: DBA Blogs