
Feed aggregator

What PeopleSoft content was popular in 2015?

Duncan Davies - Thu, 2016-01-28 17:48

The ‘Year in Blogging’ reports have come through so I can see what posts and newsletter items garnered the most views.

PeopleSoft Tipster Blog

So, according to the summary, this blog was visited 130,000 times during the year, an average of ~350/day with the busiest day being just over double that at 749 visitors. About 50% of the traffic is from the US, 15% from India, and 5% from the UK and Canada.

Amazingly, the most viewed post was one written prior to 2015, about PeopleSoft Entity Relationship Diagrams. The most popular post that was actually authored last year was The Future of PeopleSoft video with Marc Weintraub, followed by PeopleSoft and Taleo integration, the Faster Download of PeopleSoft Images and the profile of Graham Smith and how he works.

The PeopleSoft Weekly Newsletter

The PSW newsletter seems to go from strength to strength. During 2015 the subscriber base rose from 919 to 1,104 which is an approx 20% increase. The ‘open rate’ sits around 40% for any one issue (against an industry average of 17%) with the US accounting for 55% of readers, the UK 15% and India 10%.

The top articles in terms of clicks were:

  1. Gartner’s Report on Oracle’s Commitment to PeopleSoft (263 clicks)
  2. Randy ‘Remote PS Admin’ on Forcing Cache Clears (198)
  3. PeopleSoft Planned Features and Enhancements (180)
  4. 5 Life Lessons I Learned at PeopleSoft (167)
  5. Dan Sticka on stopping writing Record Field PeopleCode (166)
  6. Greg Kelly’s Security Checklist from Alliance (155)
  7. Virginia Ebbeck’s list of PeopleSoft Links (145)
  8. Greg Wendt of Grey Heller on the PS Token Vulnerability (142)
  9. Dennis Howlett on the Oracle vs Rimini St court battle (142)
  10. Wade Coombs on PeopleSoft File Attachments (140)
  11. I’m Graham Smith and this is How I Work (139)
  12. Graham’s PeopleSoft Ping Survey (135)
  13. How to write an efficient PeopleCode (134)
  14. Mohit Jain on Tracing in PeopleSoft (131)
  15. The 4 types of PeopleSoft Testing (130)
  16. PS Admin.io on Cobol (127)
  17. Matthew Haavisto on the Cost of PeopleSoft vs SaaS (124)
  18. The PeopleSoft Spotlight Series (119)
  19. Prashant Tyagi on PeopleSoft Single Signon (118)
  20. Adding Watermarks to PeopleSoft Fields (116)

 

 


Bad to crosscheck archivelog all

Michael Dinh - Thu, 2016-01-28 16:44

Typically, it’s not a good idea to have “crosscheck archivelog all;” in backup scripts, and it’s even worse to have “delete expired archivelog all;” there, since any evidence of a missing archivelog will be eradicated.

When you don’t crosscheck archived logs and one is missing, “backup archivelog all” fails. When you do crosscheck and an archived log is missing, the backup does not fail, because RMAN marks the missing log as EXPIRED and simply ignores it.
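To make the point concrete, here is a minimal sketch of the kind of backup script being warned against, contrasted with the safer default. It is a hypothetical bash wrapper around RMAN; the script name and the use of noprompt are assumptions for illustration, not taken from this post.

#!/bin/bash
# hypothetical nightly_backup.sh -- sketch only

# Risky pattern: the crosscheck marks any missing archivelog as EXPIRED and
# "delete expired" then removes the record, so the backup silently skips it.
rman target / <<'EOF'
crosscheck archivelog all;
delete noprompt expired archivelog all;
backup archivelog all;
EOF

# Safer default: run the backup on its own and let it fail with RMAN-06059
# when an archivelog is missing, then investigate before any cleanup.
rman target / <<'EOF'
backup archivelog all;
EOF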

oracle@arrow:hawklas:/tmp
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jan 28 14:17:35 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5191    1    868     X 2016-JAN-28 11:46:46
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc

5201    1    873     A 2016-JAN-28 14:15:44
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc

RMAN> list backup of archivelog sequence 873;
specification does not match any backup in the repository

RMAN> exit
oracle@arrow:hawklas:/tmp
$ mv -v /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc /tmp/
`/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc' -> `/tmp/o1_mf_1_873_cbo4v850_.arc'
oracle@arrow:hawklas:/tmp
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jan 28 14:18:30 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)
RMAN> backup archivelog all;
Starting backup at 2016-JAN-28 14:18:40
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 01/28/2016 14:18:46
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
RMAN> crosscheck archivelog all;
released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_874_cbo4y4or_.arc RECID=5203 STAMP=902326725
Crosschecked 1 objects

validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc RECID=5191 STAMP=902317692
validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc RECID=5201 STAMP=902326632
Crosschecked 2 objects

RMAN> backup archivelog all;

Note: sequence 873 is missing, yet no error is raised.

Starting backup at 2016-JAN-28 14:19:09
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=874 RECID=5203 STAMP=902326725
channel ORA_DISK_1: starting piece 1 at 2016-JAN-28 14:19:11
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=875 RECID=5204 STAMP=902326750
channel ORA_DISK_2: starting piece 1 at 2016-JAN-28 14:19:11
channel ORA_DISK_1: finished piece 1 at 2016-JAN-28 14:19:12
piece handle=/oradata/backup/HAWK_3130551611_20160128_jdqsgqev_1_1_1645_1 tag=TAG20160128T141910 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: finished piece 1 at 2016-JAN-28 14:19:14
piece handle=/oradata/backup/HAWK_3130551611_20160128_jeqsgqev_1_1_1646_1 tag=TAG20160128T141910 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 2016-JAN-28 14:19:14

Starting Control File and SPFILE Autobackup at 2016-JAN-28 14:19:14
piece handle=/oradata/backup/HAWK_c-3130551611-20160128-09 comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-28 14:19:21
RMAN> list backup of archivelog sequence 873;
specification does not match any backup in the repository

RMAN> exit


Recovery Manager complete.
oracle@arrow:hawklas:/tmp
$

RMAN> delete expired archivelog all;
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5191    1    868     X 2016-JAN-28 11:46:46
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc

5201    1    873     X 2016-JAN-28 14:15:44
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc


Do you really want to delete the above objects (enter YES or NO)? yes
deleted archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc RECID=5191 STAMP=902317692
deleted archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc RECID=5201 STAMP=902326632
Deleted 2 EXPIRED objects
RMAN> backup archivelog all;
Starting backup at 2016-JAN-28 14:36:20
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=874 RECID=5203 STAMP=902326725
input archived log thread=1 sequence=875 RECID=5204 STAMP=902326750
channel ORA_DISK_1: starting piece 1 at 2016-JAN-28 14:36:21
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=876 RECID=5207 STAMP=902327780
channel ORA_DISK_2: starting piece 1 at 2016-JAN-28 14:36:22
channel ORA_DISK_1: finished piece 1 at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_3130551611_20160128_jgqsgrf5_1_1_1648_1 tag=TAG20160128T143621 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_3130551611_20160128_jhqsgrf5_1_1_1649_1 tag=TAG20160128T143621 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-JAN-28 14:36:23

Starting Control File and SPFILE Autobackup at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_c-3130551611-20160128-0a comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-28 14:36:30

RMAN>
Updated:

Does this mean we should never perform crosscheck archivelog all? No.

The purpose is to let the archivelog backup fail so that the failure gets investigated.

If the investigation shows the archivelog is indeed missing, then it might be better to perform a Level 1 or Level 0 backup.
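As a minimal sketch (the commands below are illustrative assumptions, not a prescription from this post), that follow-up could look like this once the loss is confirmed:

# Sketch only: run after investigation has confirmed the archivelog is
# genuinely lost; level 0 vs level 1 is a choice, not a prescription.
rman target / <<'EOF'
crosscheck archivelog all;
delete noprompt expired archivelog all;
backup incremental level 0 database plus archivelog;
EOF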

RMAN> crosscheck archivelog all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=10 device type=DISK
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_874_cbo4y4or_.arc RECID=5203 STAMP=902326725
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_875_cbo4yy69_.arc RECID=5204 STAMP=902326750
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_876_cbo5z4lz_.arc RECID=5207 STAMP=902327780
Crosschecked 3 objects

validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_877_cbog0qbg_.arc RECID=5209 STAMP=902336024
Crosschecked 1 objects
RMAN> list expired archivelog all;
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5209    1    877     X 2016-JAN-28 14:36:20
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_877_cbog0qbg_.arc

RMAN> list backup of archivelog sequence 877;
specification does not match any backup in the repository

RMAN>

Is the DBA Career Dead?

Pythian Group - Thu, 2016-01-28 16:00

 

With the increased popularity of cloud services, one of the questions I often receive is: “Is the DBA career dying? What will you do for a living in the future?” In this article I will give my personal opinion about the future of our beloved profession, and try to calm down those who have already started looking for another career.

The first thing I want to point out is that when we started working in IT, we knew it was a career different from most others out there. It is dynamic and exciting by nature, reinventing itself all the time, with new technologies showing up every single year and changing the entire landscape. We have chosen a field that pushes us to keep studying, learning and evolving, and this is the mindset I want you to have while reading this article.

The Database Administrator role is not going anywhere. We are not an endangered species and won’t become one in the foreseeable future. Cloud is not our enemy. The data market is just evolving, and the cloud is bringing a lot of new things that will give us more power and more options.

In today’s market we have two very common problems:

  1. Companies can’t find enough people to fill in all positions.

We all know this one. I’m sure we all know several companies that have an open position for months, have interviewed dozens of people, and just can’t find anyone that suits the position.

  2. Companies want to keep their costs as low as possible.

Companies want to make money, and we had a big worldwide crisis just a few years ago that we are still recovering from. This means companies are trying to find ways to improve their productivity, while keeping their costs as low as possible.

 

In a scenario like this, cloud offerings come as an aid both to improve our productivity as DBAs and to help the company save money. Let’s think for a while about how many tasks we perform daily that don’t bring real value to the business. No doubt when we’re planning a new high availability solution, or tuning that slow query, we can see the value it will bring to the company. In the first case, we guarantee that all applications are up and running at full speed when the company needs them. In the second, we make sure the server handles the workload, running more sessions at the same time and making both internal and external customers happy.

But how about the time you spent trying to find more disk space for all your databases? How about trying to find disk space for all your backups because the database has grown too large and you didn’t plan ahead? Then there’s all the time you spend installing SQL and Windows patches. I know, in some big companies there is a dedicated SAN admin and infrastructure administrators who worry about those tasks, but that’s not everyone’s reality. The vast majority of small and medium companies have a small team that is responsible for multiple areas. Why? Scroll up and read problems 1 and 2 on my list above one more time.

I’ll wait for you.

Now, let’s imagine another reality. Let’s imagine a world where I receive a disk space alert for my backups. The company has acquired a new company, the database growth was much bigger than expected, and we ran out of disk space. I go to a web portal and a few mouse clicks later I have 1TB of disk available to me. All I have to do is open SQL Server Management Studio and change my backup jobs to use the new storage area. Problem solved in less than 15 minutes.

Let’s envision a world where I can take all those small databases that are not too important for the business (yeah, we all have a lot of those, don’t lie to yourself) and move them to the cloud so they don’t use our precious server resources, and where I don’t need to worry about patching and managing those databases. Wouldn’t that be great? And how about getting rid of the QA and testing servers and replacing them with virtual machines that we can simply turn off when they are not in use to save money? And those huge tables with hundreds of millions of rows that cause us problems every single day: wouldn’t it be great if I could replace that complicated sliding-window partition solution we developed to manage historic data, and instead have SQL Server automatically move old and unused data to the cloud, while keeping the data available to end users in a transparent way?

The cloud does indeed shift the dynamics of the DBA career, but it will not kill the database administrator role and destroy families. Instead, it will make us more efficient and provide tools and options that let us focus on tasks that bring value to the company. It’s a solution that lets us use existing hardware more efficiently and makes our lives easier. Embrace the changes just as we embraced all the new technologies that came before, and use each one as a tool to be successful in your role.

 

Discover more about our expertise in the Cloud.

Categories: DBA Blogs

My Virtual Internship with Pythian

Pythian Group - Thu, 2016-01-28 14:38

My internship with Pythian started in October and has been an incredible opportunity to build upon my project management skills with a virtual internship. I never imagined working for a tech company. As business students, we are often characterized as lacking the hard skills needed for industries as competitive as those in STEM fields. After my internship with Pythian, I know that my perspective can bring value to a team especially within the tech industry. My work with Pythian has inspired me to apply to computer science programs in Taiwan after I graduate in June.

During my time at Pythian I worked on the Pythia Program, which is Pythian’s commitment to improving gender diversity by encouraging, promoting and retaining women and young girls in STEM fields. I was able to work with managers across many different departments and learn how to be part of a virtual team, while building a plan for the Pythia Program.

 

Making an impact for women in STEM fields

The Pythia program is setting an incredible precedent for other tech companies in North America and I am very excited that I was a part of that process. Researching and compiling data on women in STEM fields, particularly the tech industries, was an eye-opening experience. Identifying the pain points for underrepresented groups in tech, particularly women, is key in developing solutions that encourage women to seek positions in this field.

I am looking forward to seeing Pythian’s impact in the local community with Technovation and the impact on the young women who will be walking the halls and learning from such great mentors. Pythian is not only making great strides for the young women in their local community, but for all women in tech by setting an example with their own diversity initiatives.

 

Working with the women of Pythian

While assigned to the Pythia Program, I was lucky to be working with women who were heads of their departments and brought a diverse range of skills to the table. Whether building communication plans with the marketing department, or measuring progress with the organizational development team, I was always challenged to look at the issue from different perspectives.

As a project manager it can be very easy to stay on the periphery and remain an outsider, and admittedly this was a concern of mine as an intern and a newcomer to Pythian. The level of trust that the OD department put in me, and their guiding hand in helping me navigate the onboarding period, were instrumental to our success.

Working with women from many different technical backgrounds, I was able to learn much more than if I had stayed within one specific department. I cannot overstate how important it is as a young woman to work with other women on the Pythia Program. It was inspiring to work with accomplished women who had so much expertise and were willing to share it.

 

Working virtually is a whole different ballgame

It has been said that virtual work can be trying and difficult for those new to a team; however, my time with Pythian was the complete opposite. I am very thankful to have been able to navigate a virtual internship with such incredible support from the OD team. The lines of communication have always been open, and this has been instrumental to our success on the Pythia Program.

Pythian’s managers made themselves accessible and available during my time on the Pythia program, and their guidance was excellent, as was learning from their experiences.

All in all, I could not have asked for a greater internship than my time at Pythian. I was able to work on a project that was important to me as a woman, while working with other women at Pythian. Together we made an impact within the organization and in the local tech community in Ottawa. In the months to come we will see the reach of the Pythia Program on others. For myself, the experience has been impressive as I apply to computer science programs abroad and see my own future in tech. I look forward to following the continued success of Pythian and the Pythia Program.

 

Find out more about the Pythia Program.

Categories: DBA Blogs

M2M, the Other IoT

Oracle AppsLab - Thu, 2016-01-28 11:50

Before IoT became ‘The’ buzzword, there was M2M (machine to machine). Some industries still refer to IoT as M2M, but overall the term Internet of Things has become the norm. I like the term M2M because it describes better what IoT is meant to do: Machines talking to other machines.

This year our team once again participated in the AT&T Developer Summit 2016 hackathon. With M2M in mind, we created a platform to allow machines and humans to report extraordinary events in their neighborhood. Whenever a new event was reported (by machine or human), devices and people (notified by an app) connected to the platform could react accordingly. We came up with two possible use cases to showcase our idea.


Virtual Gated Community

Gated communities are a great commodity for those wanting privacy and security. The problem is that these communities usually come with a high price tag, so we came up with a turnkey solution for a virtual gate using M2M. We created a device using the Qualcomm DragonBoard 410c board, with wifi and bluetooth capabilities, and used a common motion sensor and a camera to detect cars and people not belonging to the neighborhood. We then used Bluetooth beacons that could be placed on residents’ keychains. When a resident drove (or walked) by the virtual gate, it would not trigger the automated picture and report to the system, but if someone without the Bluetooth beacon drove by, the system would log and report it.

We also created an app, so residents could get notifications as well as report different events, which brings me to our second use case.

Devices reacting to events

We used AT&T Flow Designer and the M2X platform to create event workflows with notifications. A user or a device could subscribe to receive only the events they care about, such as a lost dog or cat, water leaks, etc. The really innovative idea here is that devices can also react to certain events. For example, a user could configure their porch lights to turn on automatically when someone nearby reported suspicious activity. If everyone on the street did the same, reporting such an event would effectively turn on all the porch lights in the street at once, which could be a pretty good crime deterrent.

We called our project “Neighborhood”, and we are still amazed at how much we were able to accomplish in merely 20+ hours.


Stinkin' Badges

Scott Spendolini - Thu, 2016-01-28 07:55
Ever since APEX 5, the poor Navigation Bar has taken a back seat to the Navigation Menu. And for good reason, as the Navigation Menu offers a much more intuitive and flexible way to provide site-wide navigation that looks great, is responsive and just plain works. However, the Navigation Bar can and does still serve a purpose. Most applications still use it to display the Log Out link and perhaps the name of the currently signed-on user. Some applications also use it to provide a link to a user's profile or something similar.

Another use for the Navigation Bar is to present simple metrics via badges. You've seen them before: the little red numbered icons that hover in the upper-right corner of an iPhone or Mac application, indicating that there's something that needs attention. Whether you consider them annoying or helpful, truth be told, they are a simple, minimalistic way to convey that something needs attention.

Fortunately, adding a badge to a Navigation Bar entry in the Universal Theme in APEX 5 is tremendously simple. In fact, it's almost too simple! Here's what you need to do:
First, navigate to the Shared Components of your application and select Navigation Bar List. From there, click Desktop Navigation Bar. There will likely only be one entry there: Log Out.

[Screenshot: the Desktop Navigation Bar list with its single Log Out entry]

Click Create List Entry to get started. Give the new entry a List Entry Label and make sure that the sequence number is lower than the Log Out link. This will ensure that your badged item displays to the left of the Log Out link. Optionally add a Target page. Ideally, this will be a modal page that will pop open from any page. This page can show the summary of whatever the badge is conveying. Next, scroll down to the User Defined Attributes section. Enter the value that you want the badge to display in the first (1.) field. Ideally, you should use an Application or Page Item here with this notation: &ITEM_NAME. But for simplicity's sake, it's OK to enter a value outright.
Run your application, and have a look:

[Screenshot: the navigation bar showing the new badge]

Not bad for almost no work. But we can make it a little better. You can control the color of the badge with a single line of CSS, which can easily be dropped into the CSS section of Theme Roller. Since most badges are red, let's make ours red as well. Run your application, open Theme Roller, and scroll to the bottom of the options. Expand the Custom CSS region and enter the following text:

.t-Button--navBar .t-Button-badge { background-color: red;}

Save your customizations, and note that the badge should now be red:

[Screenshot: the badge now rendered in red]

Repeat for each metric that you want to display in your Navigation Bar.

Bitmap Efficiency

Jonathan Lewis - Thu, 2016-01-28 07:02

An interesting observation came up on the Oracle-L list server a few days ago that demonstrated how clever the Oracle software is at minimising run-time work, and how easy it is to think you know what an execution plan means when you haven’t actually thought through the details – and the details might make a difference to performance.

The original question was about a very large table with several bitmap indexes, and an anomaly that appeared as a query changed its execution plan.  Here are the critical sections from the plans (extracted from memory with rowsource execution statistics enabled):

--------------------------------------------------------------------------------------------------------
|  Id |Operation                        | Name       | Starts | E-Rows | A-Rows |     A-Time | Buffers |
--------------------------------------------------------------------------------------------------------
|   6 |    TABLE ACCESS BY INDEX ROWID  |       FACT |      1 |      1 |     24 |00:00:00.01 |      31 |
|   7 |     BITMAP CONVERSION TO ROWIDS |            |      1 |        |     24 |00:00:00.01 |       7 |
|   8 |      BITMAP AND                 |            |      1 |        |      1 |00:00:00.01 |       7 |
|*  9 |       BITMAP INDEX SINGLE VALUE |     FACT_0 |      1 |        |      1 |00:00:00.01 |       3 |
|* 10 |       BITMAP INDEX SINGLE VALUE |  FACT_DIM1 |      1 |        |      4 |00:00:00.01 |       4 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
     9 - access("FACT"."C0"=243001)
    10 - access("FACT"."C1"="DIMENSION1"."ID")


-------------------------------------------------------------------------------------------------------
|  Id | Operation                      | Name       | Starts | E-Rows | A-Rows |     A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
|   7 |    BITMAP CONVERSION TO ROWIDS |            |      5 |        |      8 |00:00:00.01 |     119 |
|   8 |     BITMAP AND                 |            |      5 |        |      1 |00:00:00.01 |     119 |
|*  9 |      BITMAP INDEX SINGLE VALUE |  FACT_DIM1 |      5 |        |     20 |00:00:00.01 |      28 |
|* 10 |      BITMAP INDEX SINGLE VALUE |  FACT_DIM2 |      5 |        |    140 |00:00:00.01 |      78 |
|* 11 |      BITMAP INDEX SINGLE VALUE |     FACT_0 |      5 |        |      5 |00:00:00.01 |      13 |
|  12 |   TABLE ACCESS BY INDEX ROWID  |       FACT |      8 |      1 |      8 |00:00:00.01 |       8 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
     9 - access("FACT"."C1"="DIMENSION1"."ID")
    10 - access("FACT"."C2"="DIMENSION2"."ID")
    11 - access("FACT"."C0"=243001)

The first plan shows the steps leading to a single access (Starts = 1) to the FACT table after combining two bitmap indexes; the second shows the second child of a nested loop join where Oracle has combined three bitmaps indexes to access the FACT table – operation 7 (and its descendants) execute 5 times in this case. I’ve included the related parts of the predicate section so that you can see that the predicates at operations 9 and 10 of the first plan are the same as the predicates at operations 9 and 11 of the second plan.

So here’s the question – if one access to fact_dim1 requires 4 buffer visits, why does it take 28 buffer visits to do the same thing 5 times (and it is with the same value every time); conversely if one access to fact_0 requires 3 buffer visits, why do 5 visits to do the same thing take only 13 buffer visits. (Note: the arithmetic is made a little more obscure by the way in which index branch blocks may be pinned during nested loop joins.)

Then there’s a further question – not visible in the plan – the A-Rows in the “BITMAP INDEX SINGLE VALUE” operation is the number of bitmap sections in the rowsource, and we can see that the key values for index fact_dim2 have a significant number of bitmap chunks for a single key (5 executions returned 140 bitmap chunks). This scale, though, is true of all three indexes – in fact a follow-up email pointed out that a typical key value in EVERY ONE of the three indexes consisted of about 100 bitmap chunks, so why can’t we see those hundreds in the execution plan ?

So this is where we’re at: we have an execution plan where we haven’t visited all the bitmap chunks for a bitmap key, and the order in which the bitmap indexes are used in the plan seems to have some effect on the choice of leaf-blocks you visit when accessing the chunks. So (a) could a change in the order of indexes make a significant difference to the number of bitmap chunks you visit and the resulting performance, and (b) is there a way to control the order in which you visit the indexes. That’s where the note starts to get a bit technical – if you don’t want to read any more the answers are: (a) yes but probably not significantly and (b) yes.

Demo

To investigate what goes on inside a “BITMAP AND” I created a table with two bitmap indexes and used a very large setting for pctfree for the indexes so that they had to be stored with a large number of bitmap chunks per key. Here’s the code that I used, with some results from an instance of 12.1.0.2:


create table people
nologging
as
with generator as (
        select  --+ materialize 
                rownum id 
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum-1, 1e2)      id_town_home,
        trunc((rownum-1)/1e4)   id_town_work,
        rpad('x',10,'x')        small_vc,
        rpad('x',100,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'PEOPLE',
                method_opt       => 'for all columns size 1'
        );
end;
/

create bitmap index pe_home on people(id_town_home) nologging pctfree 95;
create bitmap index pe_work on people(id_town_work) nologging pctfree 95;

select
        index_name, distinct_keys, num_rows, leaf_blocks, avg_leaf_blocks_per_key
from
        user_indexes
where
        table_name = 'PEOPLE'
order by
        index_name
;


INDEX_NAME           DISTINCT_KEYS   NUM_ROWS LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY
-------------------- ------------- ---------- ----------- -----------------------
PE_HOME                        100      30399       15200                     152
PE_WORK                        100       1800         907                       9

As you can see I’ve generated two columns (id_town_home, id_town_work) with 100 distinct values and 10,000 rows each, but with very different data distributions – the rows for any given value for id_town_home are uniformly spread across the entire table, every hundredth row; while the rows for any given value of id_town_work are very tightly clustered as a group of 10,000 consecutive rows. As a consequence the index entry (bitmap string) for a typical key value for id_town_home is enormous and has to be broken into 304 chunks spread across 152 leaf blocks (2 index entries per leaf block), while the index entry for a typical key value for id_town_work is much shorter, but still requires 18 chunks spread across 9 leaf blocks.

So what will I see if I run the following query, and force it to use a BITMAP AND of the two indexes, in the two different orders:

select
        /*+ index_combine(pe) */
        max(small_vc)
from
        people pe
where
        id_town_home = 50
and     id_town_work = 50
;

Based on a very simple interpretation of the typical execution plan and using the index stats shown above we might expect to see roughly A-Rows = 18 with 9 buffer gets (plus a few more for segment headers and branch blocks) on the id_town_work index and A-Rows = 304 with 152 buffer gets on the id_town_home index to allow Oracle to generate and compare the two bit strings – but here are the two plans with their execution stats, generated in 12.1.0.2, and each run after flushing the buffer cache:

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |      1 |        |      1 |00:00:00.01 |     118 |    117 |
|   1 |  SORT AGGREGATE                      |         |      1 |      1 |      1 |00:00:00.01 |     118 |    117 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| PEOPLE  |      1 |    100 |    100 |00:00:00.01 |     118 |    117 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |         |      1 |        |    100 |00:00:00.01 |      18 |     17 |
|   4 |     BITMAP AND                       |         |      1 |        |      1 |00:00:00.01 |      18 |     17 |
|*  5 |      BITMAP INDEX SINGLE VALUE       | PE_WORK |      1 |        |     18 |00:00:00.01 |      14 |     13 |
|*  6 |      BITMAP INDEX SINGLE VALUE       | PE_HOME |      1 |        |      4 |00:00:00.01 |       4 |      4 |
-------------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |      1 |        |      1 |00:00:00.01 |     122 |    120 |
|   1 |  SORT AGGREGATE                      |         |      1 |      1 |      1 |00:00:00.01 |     122 |    120 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| PEOPLE  |      1 |    100 |    100 |00:00:00.01 |     122 |    120 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |         |      1 |        |    100 |00:00:00.01 |      22 |     20 |
|   4 |     BITMAP AND                       |         |      1 |        |      1 |00:00:00.01 |      22 |     20 |
|*  5 |      BITMAP INDEX SINGLE VALUE       | PE_HOME |      1 |        |      5 |00:00:00.01 |       8 |      7 |
|*  6 |      BITMAP INDEX SINGLE VALUE       | PE_WORK |      1 |        |     18 |00:00:00.01 |      14 |     13 |
-------------------------------------------------------------------------------------------------------------------

We have NOT touched anything like the entire bit-string for the id_town_home index – a bit-string that spans 152 leaf blocks! Clearly Oracle is doing something clever to minimise the work, and it’s so clever that switching the order of these two extremely different indexes in the plan has made virtually no difference to the work done. Obviously I can’t tell you exactly what the code is doing, but I think I can produce a reasonable guess about what’s going on.

The pe_work index has the smaller number of leaf blocks per key, which makes it the better starting choice for the AND in this case, so the optimizer’s default starting action was to pick the first couple of chunks of that index key value; and Oracle immediately sees that the first rowid that it could possibly need in its result set is roughly in the middle of the table – remember that the “key” columns of a bitmap index are (real_key, first_rowid_of chunk, last_rowid_of_chunk, compressed_bitstring).

Since it now knows the lowest possible rowid that it could need Oracle can now probe the pe_home index by (id_town_home=50, {target_rowid}) – which will let it go to a bitmap index chunk that’s roughly in the middle of the full range of 152. Then Oracle can expand the bitstrings from the chunks it has, reading new chunks as needed from each of the indexes until the 18 chunks / 9 leaf block from the pe_work index have been used up (and that range would have aligned with just two or three chunks from the pe_home index) at which point Oracle can see there’s no more rows in the table that could match both predicates and it doesn’t need to read the next 75 chunks of the pe_home index.

Conversely, when I forced Oracle to use the (inappropriate) pe_home index first, it read the first couple of chunks, then read the first couple of chunks of the pe_work index, at which point it discovered that it didn’t need any of the pe_home index prior to (roughly) chunk 75, so it jumped straight to the right chunk to align with pe_work and carried on from there. That’s why the forced, less efficient, plan that visited pe_home first visited just a couple more leaf blocks than the plan the optimizer selected for itself.

Bottom line on performance (tl;dr) – Oracle is sufficiently smart about checking the start and end ranges on bitmap indexes (rather than arbitrarily expanding the entire bitmap for each key) that even for very large bitmap index entries it will probably only access a couple of “redundant” leaf blocks per index even if it picks the worst possible order for using the indexes. You’re far more likely to notice Oracle picking the wrong indexes (because you know the data better) than you are to spot it using the right indexes in the wrong order – and given that bitmap indexes tend to be relatively small and well buffered (compared to the tables), and given the relatively large number of rows we pick by random I/O from fact tables, a little extra work in the bitmap indexes is unlikely to make a significant difference to the performance of most queries.

Closing fact: in the unlikely circumstances that you do spot the special case where it will make a difference (and it will probably be a difference in CPU usage) then you can dictate the order of the indexes with the undocumented bitmap_tree() hint.  I may get round to writing up the variations one day but, for this simple case, the index_combine() hint that I used to force the BITMAP AND turned into the following bitmap_tree() hint in the outline:

bitmap_tree(@sel$1 pe@sel$1 and((people.id_town_work) (people.id_town_home)))

bitmap_tree( @query_block     table_name@query_block     and( ({first index definition}) ({second index definition}) ) )

Obviously not suitable to throw into production code casually – check with Oracle support if you think it’s really necessary – but if you wanted to reverse the order of index usage in this case you could just swap the order of the index definitions. If you thought there was a third index that should be used you could include its definition (note that it’s table_name.column_name – the index definition – in the brackets).

My reference: bitmap_control_02.sql


(Not So) Internal Dialogue

Tim Hall - Thu, 2016-01-28 04:52

When I wrote about rehearsals in my public speaking tips series, I mentioned talking through small sections of the presentations as a means for rehearsals. I do this a lot! I live on my own, so this is not an internal dialogue. I say this stuff out loud.

This morning I was talking through some ideas as I left the house and cleared the ice off the car. I continued during the journey to work, including when I got out of the car to get a coffee from the Costa Express at the garage. Even as I was unlocking the office door.

If you happen to see me doing this in the street, I’m not talking to an imaginary friend. It’s an imaginary audience. If you’re gonna do crazy, you might as well do it at scale… :)

Cheers

Tim…


The best OBIEE 12c feature that you’re probably not using

Rittman Mead Consulting - Thu, 2016-01-28 03:00

With the release of OBIEE 12c we got a number of interesting new features on the front-end.  We’re all talking about the cleaner look-and-feel, Visual Analyzer, and the ability to create data-mashups, etc.

While all this is incredibly useful, it’s one of the small changes you don’t hear about that’s truly got me excited.  I can’t tell you how thrilled I am that we can finally save a column back to the web catalog as an independent object (to be absolutely fair, this actually first shipped with 11.1.1.9).

For the most part, calculations should be pushed back to the RPD. This reduces the complexity of the reports on the front-end, simplifies maintenance of these calculations, and ultimately ensures that the same logic is used across the board in all dashboards and reports… all the logic should be in the RPD. I agree with that 110%… at least in theory. In reality, this isn’t always practical. When it comes down to it, there’s always some insane deadline or that pushy team (ahem, accounting…) riding you to get their dashboard updated and migrated in time for year end, or whatever. It’s sometimes simply easier to code the calculation in the analysis. So, rather than take the time to modify the RPD, you fat-finger the calculation in the column formula. We’ve all done it. But if you spend enough time developing OBIEE reports and dashboards, sooner or later you’ll find that this is gonna come back to bite you.

Six months or a year from now, you’ll have completely forgotten about that calculation. But there will be an org change, or a hierarchy will be updated… something will change the logic of that calculation and you’ll need to make a change. Only now you no longer remember the specifics of the logic you coded, and even worse, you don’t remember whether you included that same calculation in any of the other analyses you were working on at the time. Sound familiar? Now a change that should have been rather straightforward, and could have been completed in an afternoon, takes two to three times longer as you dig through all your old reports trying to make sense of things. (If only you’d documented your development notes somewhere…)

Saving columns to the web catalog is the middle ground that gives us the best of both worlds… the convenience of quickly coding the logic on the front-end with the peace of mind of knowing that the logic is all in one place to ensure consistency and ease maintenance.

After you update your column formula, click OK.

From the column dropdown, select the Save Column As option.

Save the column to the web catalog.  Also, be sure to use the description field.  The description is a super convenient place to store a few lines of text that your future self or others can use to understand the purpose of this column.

As an added bonus, this description field is also used when searching the web catalog.  So, if you don’t happen to remember what name you gave a column but included a little blurb about the calculation, all is not lost.

Saved columns can be added from the web catalog.

Add will continue to reference the original saved column, so that changes made to the saved column will be reflected in your report. Add Copy will add the column to your report, but future changes to the saved column will not be reflected.

One thing to note, when you add a saved column to a report it can no longer be edited from within the report.  When you click on Edit Formula you will still be able to see the logic, but you will need to open and edit that saved column directly to make any changes to the formula.

Try out saved columns; you’ll find they’re invaluable and will greatly reduce the time it takes to update reports. And with all that free time, maybe you’ll finally get to play around with the Visual Analyzer!

The post The best OBIEE 12c feature that you’re probably not using appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Checksum entire directory using md5sum

Michael Dinh - Wed, 2016-01-27 17:52

When you are backing up a 3.5+ TB database, producing a 600+ GB backup set and transferring it to a new data center, you want to ensure there is no corruption resulting from the transfer.

Here is an example of how to perform a checksum using md5sum for all the contents of a directory.

Taking the process further, split the .md5 file into multiple files for parallel processing (a sketch of running the verification in parallel appears after the demo below).

oracle@arrow:hawklas:/oradata/keep
$ md5sum *MIG* > backup.md5


oracle@arrow:hawklas:/oradata/keep
$ cat backup.md5
080adb9ba716f3bf6f91efb06adc311e  HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1
36bd695b6d87b25d153edc91ee79992f  HAWK_3130551611_20160127_hfqsdvfv_2_1_DB_MIG_1583_2
730c32ef5415bc9dd931d026fb42bcd7  HAWK_3130551611_20160127_hgqsdvg0_1_1_DB_MIG_1584_1
36fd7d0098bd628921ac789155f0a712  HAWK_3130551611_20160127_hgqsdvg0_2_1_DB_MIG_1584_2
072f757dfb808c96aef92555f80165eb  HAWK_3130551611_20160127_hgqsdvg0_3_1_DB_MIG_1584_3
767e06a7eea3cb72243d180a5591e2a2  HAWK_3130551611_20160127_hgqsdvg0_4_1_DB_MIG_1584_4
600ef7710995dcb1d11c0b27a6ee9971  HAWK_3130551611_20160127_hgqsdvg0_5_1_DB_MIG_1584_5
28c279a24cf47fb53b1f7840b7df7fc5  HAWK_3130551611_20160127_hgqsdvg0_6_1_DB_MIG_1584_6
e418ac58da6629f6aeec9c9f7bc47ca5  HAWK_3130551611_20160127_hgqsdvg0_7_1_DB_MIG_1584_7
3e6d8788ec1b5c2071b435c10b34b746  HAWK_3130551611_20160127_hgqsdvg0_8_1_DB_MIG_1584_8
5b1c964eabcc8bd602f6cef15482820a  HAWK_3130551611_20160127_hgqsdvg0_9_1_DB_MIG_1584_9
4655bac6d066ff47799ef8dcf423f532  HAWK_3130551611_20160127_hhqsdvg6_1_1_DB_MIG_1585_1
da4add20652a16726006f46a294cf90a  HAWK_3130551611_20160127_hhqsdvg6_2_1_DB_MIG_1585_2
a2ce4073fb16336c2f8a3cf78f1709ec  HAWK_3130551611_20160127_hhqsdvg6_3_1_DB_MIG_1585_3
301ef428887aba61aa33410d6c8b3c70  HAWK_3130551611_20160127_hhqsdvg6_4_1_DB_MIG_1585_4
47379e228a839bb4257aa141dbfd5107  HAWK_3130551611_20160127_hiqsdvgm_1_1_DB_MIG_1586_1
84e2cd2931f7272832b3c4cd3923e69d  HAWK_3130551611_20160127_hiqsdvgm_2_1_DB_MIG_1586_2
33da63364865b3b959491545905fdc9f  HAWK_3130551611_20160127_hiqsdvgm_3_1_DB_MIG_1586_3
06e04d3e05aff26a197621964fcb8617  HAWK_3130551611_20160127_hiqsdvgm_4_1_DB_MIG_1586_4
5436a855b3d287f9b6ed87ba07cefeb7  HAWK_3130551611_20160127_hiqsdvgm_5_1_DB_MIG_1586_5
0bc71d9930bf2653b6c8661dbf388989  HAWK_3130551611_20160127_hjqsdvh9_1_1_DB_MIG_1587_1
fa2447d3842b476ed365ef37c74db7d9  HAWK_3130551611_20160127_hkqsdvhd_1_1_DB_MIG_1588_1
beb4c1726c99335dff262224828925f5  HAWK_3130551611_20160127_hlqsdvhf_1_1_DB_MIG_1589_1
93aede1cc96dc286ff6c7d927d9505af  HAWK_3130551611_20160127_hmqsdvhh_1_1_DB_MIG_1590_1
eb0a7eb4cab45e5c9b053abb7c736f0d  HAWK_3130551611_20160127_hnqsdvhu_1_1_AL_MIG_1591_1
77808e8a0cb4ac766e7e4d868c7c81b4  HAWK_3130551611_20160127_hoqsdvhu_1_1_AL_MIG_1592_1
5994fe2cf07786495f99d2fd9f42b1f8  HAWK_3130551611_20160127_hpqsdvi5_1_1_AL_MIG_1593_1
784be893cd558cecf65aa51ba79b3e04  HAWK_3130551611_20160127_hpqsdvi5_2_1_AL_MIG_1593_2
5859c37d98d77174675f89e7590ed597  HAWK_3130551611_20160127_hqqsdvi6_1_1_AL_MIG_1594_1
6f2a8a68ab0e9847f8f2248de55cb51a  HAWK_3130551611_20160127_hrqsdvi9_1_1_AL_MIG_1595_1
446976ee788754e8cb48fb00a0acec92  HAWK_3130551611_20160127_hsqsdvid_1_1_AL_MIG_1596_1
73a488bc5aa0664fd80237a2b8da5ea8  HAWK_3130551611_20160127_htqsdvig_1_1_AL_MIG_1597_1
c0200bb23218859fb6a1edc3f7cba94d  HAWK_3130551611_20160127_huqsdvii_1_1_AL_MIG_1598_1
802c8f1524b7c6405d4d41a3b64f0a47  HAWK_3130551611_20160127_hvqsdvil_1_1_AL_MIG_1599_1
acf0c5b5ca6a3f9b58e7880eb093330a  HAWK_3130551611_20160127_i0qsdvil_1_1_AL_MIG_1600_1
dd1746fbaf90b1d867300286e297d2b3  HAWK_3130551611_20160127_i1qsdvin_1_1_AL_MIG_1601_1
7bb58056cac524304a88ba95a6837ac8  HAWK_3130551611_20160127_i2qsdvin_1_1_AL_MIG_1602_1
81ea52aadb6767ac3d3e2ae33acc9d64  HAWK_3130551611_20160127_i3qsdvio_1_1_AL_MIG_1603_1
8481443992c6858edbc03a481e13f579  HAWK_3130551611_20160127_i4qsdvip_1_1_AL_MIG_1604_1
539716837bd98835cf9b43b83cb60b79  HAWK_3130551611_20160127_i5qsdvj0_1_1_CF_MIG_1605_1
d2715ac45c5aa1e7dcd4c706c0a542ee  HAWK_3130551611_20160127_i6qsdvj5_1_1_CF_MIG_1606_1
9532408727adfd012728f989dd9a41ad  HAWK_3130551611_20160127_i7qsdvj8_1_1_CF_MIG_1607_1
840cd94839941643ecd1cbacc568ff9c  HAWK_3130551611_20160127_i8qsdvja_1_1_CF_MIG_1608_1


oracle@arrow:hawklas:/oradata/keep
$ md5sum -c -w backup.md5
HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1: OK
HAWK_3130551611_20160127_hfqsdvfv_2_1_DB_MIG_1583_2: OK
HAWK_3130551611_20160127_hgqsdvg0_1_1_DB_MIG_1584_1: OK
HAWK_3130551611_20160127_hgqsdvg0_2_1_DB_MIG_1584_2: OK
HAWK_3130551611_20160127_hgqsdvg0_3_1_DB_MIG_1584_3: OK
HAWK_3130551611_20160127_hgqsdvg0_4_1_DB_MIG_1584_4: OK
HAWK_3130551611_20160127_hgqsdvg0_5_1_DB_MIG_1584_5: OK
HAWK_3130551611_20160127_hgqsdvg0_6_1_DB_MIG_1584_6: OK
HAWK_3130551611_20160127_hgqsdvg0_7_1_DB_MIG_1584_7: OK
HAWK_3130551611_20160127_hgqsdvg0_8_1_DB_MIG_1584_8: OK
HAWK_3130551611_20160127_hgqsdvg0_9_1_DB_MIG_1584_9: OK
HAWK_3130551611_20160127_hhqsdvg6_1_1_DB_MIG_1585_1: OK
HAWK_3130551611_20160127_hhqsdvg6_2_1_DB_MIG_1585_2: OK
HAWK_3130551611_20160127_hhqsdvg6_3_1_DB_MIG_1585_3: OK
HAWK_3130551611_20160127_hhqsdvg6_4_1_DB_MIG_1585_4: OK
HAWK_3130551611_20160127_hiqsdvgm_1_1_DB_MIG_1586_1: OK
HAWK_3130551611_20160127_hiqsdvgm_2_1_DB_MIG_1586_2: OK
HAWK_3130551611_20160127_hiqsdvgm_3_1_DB_MIG_1586_3: OK
HAWK_3130551611_20160127_hiqsdvgm_4_1_DB_MIG_1586_4: OK
HAWK_3130551611_20160127_hiqsdvgm_5_1_DB_MIG_1586_5: OK
HAWK_3130551611_20160127_hjqsdvh9_1_1_DB_MIG_1587_1: OK
HAWK_3130551611_20160127_hkqsdvhd_1_1_DB_MIG_1588_1: OK
HAWK_3130551611_20160127_hlqsdvhf_1_1_DB_MIG_1589_1: OK
HAWK_3130551611_20160127_hmqsdvhh_1_1_DB_MIG_1590_1: OK
HAWK_3130551611_20160127_hnqsdvhu_1_1_AL_MIG_1591_1: OK
HAWK_3130551611_20160127_hoqsdvhu_1_1_AL_MIG_1592_1: OK
HAWK_3130551611_20160127_hpqsdvi5_1_1_AL_MIG_1593_1: OK
HAWK_3130551611_20160127_hpqsdvi5_2_1_AL_MIG_1593_2: OK
HAWK_3130551611_20160127_hqqsdvi6_1_1_AL_MIG_1594_1: OK
HAWK_3130551611_20160127_hrqsdvi9_1_1_AL_MIG_1595_1: OK
HAWK_3130551611_20160127_hsqsdvid_1_1_AL_MIG_1596_1: OK
HAWK_3130551611_20160127_htqsdvig_1_1_AL_MIG_1597_1: OK
HAWK_3130551611_20160127_huqsdvii_1_1_AL_MIG_1598_1: OK
HAWK_3130551611_20160127_hvqsdvil_1_1_AL_MIG_1599_1: OK
HAWK_3130551611_20160127_i0qsdvil_1_1_AL_MIG_1600_1: OK
HAWK_3130551611_20160127_i1qsdvin_1_1_AL_MIG_1601_1: OK
HAWK_3130551611_20160127_i2qsdvin_1_1_AL_MIG_1602_1: OK
HAWK_3130551611_20160127_i3qsdvio_1_1_AL_MIG_1603_1: OK
HAWK_3130551611_20160127_i4qsdvip_1_1_AL_MIG_1604_1: OK
HAWK_3130551611_20160127_i5qsdvj0_1_1_CF_MIG_1605_1: OK
HAWK_3130551611_20160127_i6qsdvj5_1_1_CF_MIG_1606_1: OK
HAWK_3130551611_20160127_i7qsdvj8_1_1_CF_MIG_1607_1: OK
HAWK_3130551611_20160127_i8qsdvja_1_1_CF_MIG_1608_1: OK

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5

oracle@arrow:hawklas:/oradata/keep
$ vi backup.md5  -- create false error by modifying checksum.

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5
HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1: FAILED
md5sum: WARNING: 1 of 43 computed checksums did NOT match
oracle@arrow:hawklas:/oradata/keep
$
oracle@arrow:hawklas:/oradata/keep
$ ls *MIG*|wc -l
43

oracle@arrow:hawklas:/oradata/keep
$ split -l 10 backup.md5 backup.md5 -- split file to contain 10 checksums per file.

oracle@arrow:hawklas:/oradata/keep
$ ll ba*
-rw-r--r--. 1 oracle oinstall 3698 Jan 27 12:52 backup.md5
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5aa
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5ab
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5ac
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5ad
-rw-r--r--. 1 oracle oinstall  258 Jan 27 12:56 backup.md5ae

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5aa
HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1: FAILED
md5sum: WARNING: 1 of 10 computed checksums did NOT match

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ab

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ac

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ad

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ae
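
Since the split files are independent, the verification itself can also be run in parallel. A minimal sketch, assuming GNU xargs and the backup.md5?? names produced by split above:

# Verify the split checksum files four at a time (GNU xargs assumed);
# any FAILED lines are reported as they are found.
ls backup.md5?? | xargs -n 1 -P 4 md5sum -c --quiet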

SQL Server Cumulative Update changes

Pythian Group - Wed, 2016-01-27 12:42

 

Microsoft recently released the first Cumulative Update for SQL Server 2012 SP3. This CU addresses 8 issues in the product. But more importantly, it also marks a big shift in the Cumulative Update message sent by Microsoft.

For a long time, there was a discussion among DBAs about when, or if, we should apply a cumulative update. Microsoft’s official message has always been that we should apply a cumulative update only if we were facing an issue in our environment caused by a known and fixed bug. This was also evident from the fact that to download a cumulative update you had to register on their website and provide an email address, to which a link for downloading the package would be sent.

So, what has changed? Starting now, the latest cumulative update package will be maintained in the Download Center instead of the hotfix server. This eliminates the need to register to get the latest package, but that is not the only change: the entire message in the knowledge base article has changed. Instead of a warning saying that we should not install the package unless it was necessary, we now have:

“Microsoft recommends ongoing, proactive installation of SQL Server CUs as they become available:

  1. SQL Server CUs are certified to the same levels as Service Packs, and should be installed with the same level of confidence
  2. Historical data show a significant number of support cases involve an issue that has already been addressed in a released CU
  3. CUs may contain added value over and above hotfixes such as supportability, manageability, and reliability updates
  4. As with SQL Server Service packs, Microsoft recommends that you test CUs before deploying to production environments”

This is a big change from what we had before. The concerns we had in the past were necessary because the hotfixes were not tested to the same level as the service packs were. There were no regression tests, and not all possible integration tests were executed, so there was a real concern that something could go wrong in specific scenarios that were not tested. But this has changed, and now every cumulative update goes through the same certification levels that are applied to Service Packs.

This trend is not limited to SQL Server; it is the result of an agile development effort happening throughout the entire Microsoft stack of products. Windows, in both its personal and server editions, has been receiving constant updates instead of Service Packs for some time now, and it seems SQL Server will soon follow the same road.

This big change in how Microsoft delivers updates brings us to an interesting discussion: how do you manage frequent and constant product updates in your environment? The last item in the Microsoft message clearly says that you need to test CUs before applying them, just like you test Service Packs. Are customers willing to go through testing and patching processes every couple of months when a new CU is released? How can we convince customers of the benefit of having the latest version of the product?

I believe people will eventually get used to this new model of constant updates and catch up, creating plans to update their environments more often: maybe not applying every single CU, but applying one every two releases, or every half year, and so on.

What do you think? How do you see this new model fitting into your existing environment? I would love to hear other people's experiences on this subject.

Oh, and before I forget: you can download the latest CU for SQL Server 2012 SP3, the one I mentioned at the beginning of the article, from the Download Center. The link does not change for every release, so you will always be able to download the latest version from it.
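
For reference, here is a small sketch (not from the original article) to check which build and servicing level an instance is currently running before planning a CU:

-- Sketch: check the current build and servicing level of a SQL Server instance.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;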

Categories: DBA Blogs

How do you define IT innovation?

Pythian Group - Wed, 2016-01-27 10:57

Find out how the experts answered at Pythian’s Velocity of Innovation event in San Francisco

Once again, I had the pleasure of moderating another Velocity of Innovation event this past November in San Francisco. Both panelists and guests brought with them a varied range of insights, experiences and perspectives. And as always, this resulted in a thoughtful and lively discussion.

The format of these events is to put three IT leaders with expertise in a particular area in a room, and start a conversation. With our Velocity events, we always start our discussions with a few questions, and the panelists provide some answers. But the idea is to get a conversation going between the audience and panelists.

That day, we discussed a range of subjects from IT innovation, to security, to our favourite wrap-up topic: taking a look into the IT crystal ball and commenting on what current technology trends will really take hold in the future. This last one is always a lot of fun. In this blog post I will provide just some of the highlights from the first discussion topic at the event: innovation and agility in IT.

On our panel were three IT experts:

Sean Rich is the director of IT at Mozilla, leading their application services group. He takes care of web operations, along with pretty much everything data related.

Cory Isaacson is the CTO at Risk Management Solutions. He’s currently bringing big data and scalable systems together to create a new cloud-based platform.

Aaron Lee is Pythian’s VP of transformation services. He runs a team that specializes in helping clients harness technology to deliver real outcomes. Usually they involve things like big data, DevOps, cloud, advanced analytics. He’s involved in some of the most leading edge initiatives for Pythian customers.

I started the discussion by asking the panel, and the audience members, to discuss the notions of innovation and agility, and to try to describe what they have explicitly done to improve innovation and to make their own organizations and those of their customers more agile.

Cory: My business evaluates risk. Our customers are the biggest insurance companies in the world. We run catastrophe models for them so that we can actually see what an earthquake or a hurricane, for example, might cost them. I run technology for the company and have to build all the software. We’re innovating tremendously, and so now it’s funny because our executives ask us to evaluate the risk of our own projects. We’re trying to do some very, very innovative things. I don’t know if any of you have an insurance background, but it’s not the most up-to-date industry when it comes to technology. As you know it’s been around a long, long time. But at my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. And that’s why I took the position. But when you’re doing innovation, it is risky. There’s no way around it. There is a lot to evaluate: from different algorithms to the risk models and the catastrophe models. How do you evaluate them? Can you actually run them? We’re going from a 25-year-old desktop application, running on Microsoft SQL Server, to a cloud-based implementation. We’re taking thousands of servers and trying to move all the customers into a cloud implementation.

Sean: In my role I’m interpreting it a little bit differently. Innovation is doing something new. In an effort toward agility, one of the things that we’re doing within our organization is enabling the agility of our business partners by changing our own operating model. Instead of traditional IT, where we run all the services and infrastructure necessary to drive the business, we’re taking more of an enabler or partnership approach: encouraging shadow IT, encouraging the use of SaaS applications, and helping the business really do that better through different service offerings like vendor management, change management when it comes to user adoption of certain platforms, and data integration, so that when we have workflows that span multiple areas of the business, we can complete them without moving data around manually and without some of the other problems that come with that. That’s one way that we’re doing new things: looking at ourselves differently and asking what new capabilities, processes, tools and skills we need to develop to enable agility for our marketing group or our product lines, as an example. That’s a little bit of one way I think about it.

Then I asked: What gets in the way of agility?

Aaron: I think it’s interesting to consider how we used to improve innovation: by looking right at the root of the motivation for it. Why are we going down this path, trying to innovate something, and what is the value of that thing we’re trying to innovate? It’s my belief that the thing that we should value in our colleagues is their judgment. The pre-conditions for being able to offer judgment are that you need to be fully informed, you need to be aligned with an objective, and there needs to be a reason and even an incentive to offer that kind of input. If the shared goals around innovation opportunities aren’t defined in a way that actually leads to success over time, then the business is just like any other organism: it gets its fingers burned, it starts to get more risk averse, and then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criterion that we put in place to help ourselves and our customers get to a good place.

Attendee 1: Established companies that have the ability to have long-term leaders are losing the race because they’re not agile. Those leaders need to transform in their minds first to say, “This is something that needs to be done,” commit to it, and maintain an attitude where, having given that direction, they don’t penalize employees for failure as they run small experiments. A lot of companies have complex projects where they spin off a small team and say, “You go do whatever you want and we are going to give you some limited funding, but we are not going to ask you for results.” CEOs and COOs are looking for, “If I spend 10 million bucks, what am I going to get for it?” When you focus on bottom-line results, you hide the cost of innovation.

Sean: Yeah, it’s culturally endemic: that sort of near-term focus on success instead of on the long term, and the impact that has on innovation.

Cory: There are some companies, like Google, that have been known to allow an engineer to take a day a week, or a day every two weeks, and just look at things. I think, though, the challenge is that you have to get your organization up to the point where this is an economically viable thing to do. That’d be something I’d love to do with our team, but they’re so stressed out about getting the next thing done. Once we get more ahead of the curve, I think we could do that kind of thing.

The discussion went on to cover operationalizing innovation, or making it a program within an organization, before we moved on to other similarly thought-provoking subjects.

Interested in being a part of a discussion like this one? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com.

Categories: DBA Blogs

HCM Cloud Takes to the Airwaves

Linda Fishman Hoyle - Wed, 2016-01-27 10:44

A Guest Post by Frank Cowell,  Director, Oracle HCM Cloud Center of Excellence (pictured left)

Have you heard about HCM Talk Radio? It provides audio podcasts for our HCM Cloud customers and partners. Each show is about 20 minutes long and focuses on best practices and hot topics. Obviously, in 20 minutes we don’t go deep. We’re just trying to pique your interest in the topic and then point you to more in-depth resources on My Oracle Support, Customer Connect, and Help Center.

HCM Talk Radio is recorded at the UK Oracle Studios in Reading. Our guests are customers, partners, and employees, and they join the show either in person or remotely. The shows are published to the Oracle Podcast site. Many of our listeners have signed up for the RSS feeds; others download from iTunes or use Doc ID 2052534 on My Oracle Support Knowledge Base.

We are currently averaging more than 200 views per day. The total number of views since we started in March 2015 is 25,401 and climbing. Your feedback has been very positive and helps us keep our podcasts relevant to the needs of our HCM Cloud community.

We do hope you’ll tune in and see what we’re all about. Here are three of the most recent podcasts you may want to recommend to your prospects and customers: “Change Management best practices when adopting HCM Cloud Services,” “The new Role Copy feature in Release 10,” and “How to prepare for an HCM Cloud release upgrade.”

And one more thing: please contact me at frank.cowell@oracle.com if you want to collaborate on a future broadcast.

HCM Talk Radio comes courtesy of the Oracle HCM Cloud Development Center of Excellence Team.


Up in the JCS Clouds !!

Tim Dexter - Wed, 2016-01-27 04:05
Hello Friends,

Oracle BI Publisher has been in the cloud for quite some time, as a part of Fusion Applications or a few other Oracle product offerings. We now announce certification of BI Publisher on Java Cloud Services!!

BI Publisher on JCS

Oracle Java Cloud Service (JCS) is a part of the platform service offerings in Oracle Cloud. Powered by Oracle WebLogic Server, it provides a platform on top of Oracle's enterprise-grade cloud infrastructure for developing and deploying new or existing Java EE applications. Check here for more details on JCS. On that page, under "Perform Advanced Tasks", you can find a link to "Leverage your on-premise licenses". That page lists all the products certified for Java Cloud Services, and we can now see BI Publisher 11.1.1.9 listed as one of the certified products, using Fusion Middleware 11.1.1.7.


How to Install BI Publisher on JCS?

Here are the steps to install BI Publisher on JCS. The certification supports the Virtual Image option only.

Step 1: Create DBaaS Instance


Step 2: Create JCS Instance

To create an Oracle Java Cloud Service instance, use the REST API for Oracle Java Cloud Service; do not use the Wizard in the GUI. The Wizard does not offer an option to specify the MWHOME partition size, whereas the REST API does, and the default size created by the Wizard is generally insufficient for BI Publisher deployments.

The detailed instructions to create a JCS instance are available in the Oracle By Example tutorial under "Setting up your environment", "Creating an Oracle Java Cloud Service instance".
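
For orientation only, here is a minimal curl sketch of such a REST call. The endpoint path, header names, and payload file below are placeholders rather than values taken from this post or the tutorial, so check the Oracle Java Cloud Service REST API documentation for the exact names, in particular the parameter that controls the MWHOME partition size.

# Hypothetical sketch only -- endpoint, headers and payload are placeholders,
# not the documented API; consult the JCS REST API reference for the real names.
curl -u "user@example.com:password" \
     -X POST \
     -H "Content-Type: application/json" \
     -H "X-ID-TENANT-NAME: myIdentityDomain" \
     -d @create_jcs_instance.json \
     https://<jcs-rest-endpoint>/paas/service/jcs/api/v1.1/instances/myIdentityDomain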


Step 3:  Install and Configure BI Publisher

  1. Set up RCU on DBaaS
    • Copy RCU
    • Run RCU
  2. Install BI Publisher in JCS instance
    • Copy BI Installer in JCS instance
    • Run Installer
    • Use Software Only Install
  3. Configure BI Publisher
    • Extend Weblogic Domain
    • Configure Policy Store
    • Configure JMS
    • Configure Security

You can follow the detailed installation instructions as documented in the "Oracle By Example" tutorial.
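
As an illustration of the RCU step above, here is a command-line sketch for creating the repository schemas on the DBaaS database. This is an assumption-laden sketch rather than an excerpt from the tutorial; verify the exact switches and component names against the RCU documentation for your release.

# Sketch only: create the BI Publisher repository schemas on the DBaaS database.
# Verify flag and component names against the RCU documentation for your release.
./rcu -silent -createRepository \
      -connectString <dbaas-host>:1521:<sid> \
      -dbUser sys -dbRole sysdba \
      -schemaPrefix DEV \
      -component BIPLATFORM -component MDS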


Minimum Cloud Compute and Storage Requirements:

  1. Oracle Java Cloud Service: 1 OCPU, 7.5 GB Memory, 62 GB Storage
    • To install Weblogic instance
    • To Install BI Publisher
    • To set Temp File Directory in BI Publisher
  2. Oracle Database Cloud Service: 1 OCPU, 7.5 GB Memory, 90 GB Storage
    • To install RCU
    • To use DBaaS as a data source
  3. Oracle IaaS (Compute & Storage): (Optional - Depends on sizing requirements)
    • To Enable Local & Cloud Storage option in DBaaS (Used with Full Tooling option)

So now you can use your on-premise license to host BI Publisher as a standalone application on Java Cloud Services for all your highly formatted, pixel-perfect enterprise reports for your cloud-based applications. Have a great day!!

Categories: BI & Warehousing

Add primary key.

Jonathan Lewis - Wed, 2016-01-27 03:07

I thought I had written this note a few years ago, on OTN or Oracle-L if not on my blog, but I can’t find any sign of it so I’ve decided it’s time to write it (again) – starting as a question about the following code:


create table t1
as
with generator as (
        select  rownum  id
        from            dual
        connect by
                        rownum <= 1000
)
select
        rownum                                  id,
        trunc((rownum-1)/50)                    clustered,
        mod(rownum,20000)                       scattered,
        lpad(rownum,10)                         vc_small,
        rpad('x',100,'x')                       vc_padding
from
        generator       g1,
        generator       g2
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

alter system flush buffer_cache;

alter table t1 add constraint t1_pk primary key(id, scattered);

I’ve generated a table with 1,000,000 rows, including a column that’s guaranteed to be unique; then I’ve added a (two-column) primary key constraint to that table.

Because of the guaranteed unique column, the call to add the constraint will succeed. Because Oracle will automatically create a unique index to support that constraint, it will have to do a tablescan of the table. So here's the question: HOW MANY TIMES will it tablescan that table (and how many rows will it scan)?

Space for thought …

The answer is three tablescans, 3 million rows.

Oracle will scan the table to check the validity of adding a NOT NULL definition and constraint for the id column, repeat the scan to do the same for the scattered column, then one final scan to accumulate the key data and rowids to sort and create the index.

Knowing this, you may be able to find ways to modify bulk data loading operations to minimise overheads.
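
One approach worth testing (a sketch of my own, not part of the original note): declare the primary key columns NOT NULL when the table is created, so that the two NOT NULL validation scans should no longer be needed when the constraint is added, leaving only the scan that builds the index.

-- Sketch only: pre-declare the PK columns as NOT NULL at creation time,
-- so that "add constraint" should only need the index-building tablescan.
create table t1 (
        id              number          not null,
        clustered       number,
        scattered       number          not null,
        vc_small        varchar2(10),
        vc_padding      varchar2(100)
);

-- ... bulk load t1 ...

alter table t1 add constraint t1_pk primary key(id, scattered);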

The most recent version I’ve tested this on is 12.1.0.2.

See also: https://jonathanlewis.wordpress.com/2012/03/02/add-constraint/

My reference: pk_overhead.sql


Links for 2016-01-26 [del.icio.us]

Categories: DBA Blogs

Why use KEEP backup?

Michael Dinh - Tue, 2016-01-26 19:30

Question which may have been asked.

For a one-time backup to migrate or clone a database, a KEEP backup does not affect the retention policy or the "not backed up 1 times" clause.

Imagine the consequences of deleting a one-off (non-KEEP) backup that included archivelogs when the main backup scripts use a "not backed up 1-2 times" clause: those archivelogs would be skipped by the subsequent backups, leaving no backup of them at all.

RMAN> list archivelog all;

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5067    1    806     A 2016-JAN-21 07:45:48
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_21/o1_mf_1_806_cb1zcp8y_.arc

5068    1    807     A 2016-JAN-21 07:47:01
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_21/o1_mf_1_807_cb1zd4ns_.arc

5070    1    808     A 2016-JAN-21 07:47:16
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_22/o1_mf_1_808_cb4tyo1y_.arc

5071    1    809     A 2016-JAN-22 09:50:11
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_22/o1_mf_1_809_cb4tytsp_.arc

5073    1    810     A 2016-JAN-22 09:50:18
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_26/o1_mf_1_810_cbh5st68_.arc

5072    1    811     A 2016-JAN-26 07:56:40
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_26/o1_mf_1_811_cbh5sz08_.arc

5074    1    812     A 2016-JAN-26 07:56:45
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_26/o1_mf_1_812_cbh7mc9h_.arc


RMAN> backup archivelog sequence 806 KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' tag ARC_KEEP
2> ;

Starting backup at 2016-JAN-26 17:19:57
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=148 device type=DISK
backup will be obsolete on date 2016-FEB-26 17:19:57
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=806 RECID=5067 STAMP=901698422
channel ORA_DISK_1: starting piece 1 at 2016-JAN-26 17:19:58
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 01/26/2016 17:19:59
ORA-19811: cannot have files in DB_RECOVERY_FILE_DEST with keep attributes

RMAN> backup archivelog sequence 806 format '/tmp/U%' KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' tag ARC_KEEP;

Starting backup at 2016-JAN-26 17:20:26
using channel ORA_DISK_1
backup will be obsolete on date 2016-FEB-26 17:20:26
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=806 RECID=5067 STAMP=901698422
channel ORA_DISK_1: starting piece 1 at 2016-JAN-26 17:20:26
channel ORA_DISK_1: finished piece 1 at 2016-JAN-26 17:20:27
piece handle=/tmp/U% tag=ARC_KEEP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-JAN-26 17:20:27

RMAN> backup archivelog sequence 806 not backed up 1 times tag ARC_BKUP;

Starting backup at 2016-JAN-26 17:21:02
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=806 RECID=5067 STAMP=901698422
channel ORA_DISK_1: starting piece 1 at 2016-JAN-26 17:21:02
channel ORA_DISK_1: finished piece 1 at 2016-JAN-26 17:21:03
piece handle=/oradata/fra/HAWKLAS/backupset/2016_01_26/o1_mf_annnn_ARC_BKUP_cbj6vyl7_.bkp tag=ARC_BKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-JAN-26 17:21:03

Starting Control File and SPFILE Autobackup at 2016-JAN-26 17:21:04
piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/HAWK_c-3130551611-20160126-02 comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-26 17:21:07

RMAN> backup archivelog sequence 807 tag ARC_BACKUP;

Starting backup at 2016-JAN-26 17:21:30
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=807 RECID=5068 STAMP=901698436
channel ORA_DISK_1: starting piece 1 at 2016-JAN-26 17:21:30
channel ORA_DISK_1: finished piece 1 at 2016-JAN-26 17:21:31
piece handle=/oradata/fra/HAWKLAS/backupset/2016_01_26/o1_mf_annnn_ARC_BACKUP_cbj6wt9y_.bkp tag=ARC_BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-JAN-26 17:21:31

Starting Control File and SPFILE Autobackup at 2016-JAN-26 17:21:31
piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/HAWK_c-3130551611-20160126-03 comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-26 17:21:34

RMAN> backup archivelog sequence 807 not backed up 1 times;

Starting backup at 2016-JAN-26 17:21:43
using channel ORA_DISK_1
skipping archived log of thread 1 with sequence 807; already backed up
Finished backup at 2016-JAN-26 17:21:43

RMAN> list backup of archivelog all summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1360    B  A  A DISK        2016-JAN-26 17:20:27 1       1       NO         ARC_KEEP
1361    B  A  A DISK        2016-JAN-26 17:21:02 1       1       NO         ARC_BKUP
1363    B  A  A DISK        2016-JAN-26 17:21:30 1       1       NO         ARC_BACKUP

RMAN> list backup of archivelog sequence 806;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
1360    27.00K     DISK        00:00:01     2016-JAN-26 17:20:27
        BP Key: 1505   Status: AVAILABLE  Compressed: NO  Tag: ARC_KEEP
        Piece Name: /tmp/U%

  List of Archived Logs in backup set 1360
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    806     6645495    2016-JAN-21 07:45:48 6645595    2016-JAN-21 07:47:01

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
1361    27.00K     DISK        00:00:00     2016-JAN-26 17:21:02
        BP Key: 1506   Status: AVAILABLE  Compressed: NO  Tag: ARC_BKUP
        Piece Name: /oradata/fra/HAWKLAS/backupset/2016_01_26/o1_mf_annnn_ARC_BKUP_cbj6vyl7_.bkp

  List of Archived Logs in backup set 1361
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    806     6645495    2016-JAN-21 07:45:48 6645595    2016-JAN-21 07:47:01

RMAN> list backup of archivelog sequence 806 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1360    B  A  A DISK        2016-JAN-26 17:20:27 1       1       NO         ARC_KEEP
1361    B  A  A DISK        2016-JAN-26 17:21:02 1       1       NO         ARC_BKUP

RMAN>

OOPS! Did you see my errors?


Adding community based Plugins to the CF CLI Tool

Pas Apicella - Tue, 2016-01-26 17:02
I needed a community-based plugin recently, and this is how you would add it to your CF CLI.

1. Add Community based REPO as shown below

$ cf add-plugin-repo community http://plugins.cfapps.io/

2. Check available plugins from REPO added above


pasapicella@Pas-MacBook-Pro:~/ibm$ cf repo-plugins community
Getting plugins from all repositories ...

Repository: CF-Community
name                      version   description
Download Droplet          1.0.0     Download droplets to your local machine
Firehose Plugin           0.8.0     This plugin allows you to connect to the firehose (CF admins only)
doctor                    1.0.1     doctor scans your deployed applications, routes and services for anomalies and reports any issues found. (CLI v6.7.0+)
manifest-generator        1.0.0     Help you to generate a manifest from 0 (CLI v6.7.0+)
Diego-Enabler             1.0.1     Enable/Disable Diego support for an app (CLI v6.13.0+)

3. Install plugin as shown below

pasapicella@Pas-MacBook-Pro:~/ibm/$ cf install-plugin "Live Stats" -r community

**Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.**

Do you want to install the plugin Live Stats? (y or n)> y
Looking up 'Live Stats' from repository 'community'
7874156 bytes downloaded...
Installing plugin /var/folders/rj/5r89y5nd6pd4c9hwkbvdp_1w0000gn/T/cf-plugin-stats...
OK
Plugin Live Stats v0.0.0 successfully installed.

4. View plugin commands

pasapicella@Pas-MacBook-Pro:~/ibm/$ cf plugins
Listing Installed Plugins...
OK

Plugin Name       Version   Command Name                                           Command Help
IBM-Containers    0.8.788   ic                                                     IBM Containers plug-in

Live Stats        N/A       live-stats                                             Show browser based stats
active-deploy     0.1.22    active-deploy-service-info                             Reports version information about the CLI and Active Deploy service. It also reports the cloud back ends enabled by the Active Deploy service instance.
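
As an aside (not one of the original steps), a plugin can later be removed with cf uninstall-plugin; a quick sketch using the plugin name shown above:

$ cf uninstall-plugin "Live Stats"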
Categories: Fusion Middleware

My BIWA Summit Presentations

Tanel Poder - Tue, 2016-01-26 17:01

Here are the two BIWA Summit 2016 presentations I delivered today. The first one is a collection of high level thoughts (and opinions) of mine and the 2nd one is more technical:

SQL in the Hybrid World from Tanel Poder

SQL Monitoring in Oracle Database 12c from Tanel Poder

 

NB! If you want to move to the "New World" - and benefit from the awesomeness of Hadoop, without having to re-engineer your existing applications - check out Gluent, my new startup that will make history! ;-)


Unsolved Case for Missing archived_log Backup

Michael Dinh - Tue, 2016-01-26 17:00

The project was to migrate a database from one DC to another.

The decision we made was to perform an RMAN KEEP backup so that it would not interfere with the existing retention policy.

The backup also resides in its own separate directory for easier checksum verification and transfer.

This is a 4-node RAC environment, and the backup was taken from node1 at 2016-JAN-21 14:12:10.

RMAN backup scripts:
run {
ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C2 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C3 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C4 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C5 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
SQL 'ALTER SYSTEM CHECKPOINT';

BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE FILESPERSET 1 
KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';

BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG from time 'trunc(sysdate)' FILESPERSET 2 
KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';
}
run {
ALLOCATE CHANNEL C6 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/CTL_%d_%I_%T_%U_MIGRATION_%s';
BACKUP AS COMPRESSED BACKUPSET CURRENT CONTROLFILE KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';
}
LIST BACKUP OF DATABASE SUMMARY TAG='MIGRATION_KEEP';
LIST BACKUP OF ARCHIVELOG ALL SUMMARY TAG='MIGRATION_KEEP';
LIST BACKUP OF CONTROLFILE TAG='MIGRATION_KEEP';
REPORT SCHEMA;

When recovering the database, we encountered the error below.

ERROR from database recovery
RMAN-06025: no backup of archived log for thread 1 with sequence 287407 and starting SCN of 198452997924 found to restore
According to gv$archived_log, the sequence has not been deleted.
SQL> select inst_id, thread#, sequence#, completion_time, status, deleted
from gv$archived_log
where thread#=1 and sequence# between 287406 and 287408
order by 1,2,3
;

  2    3    4    5  
   INST_ID    THREAD#  SEQUENCE# COMPLETION_TIME      S DEL
---------- ---------- ---------- -------------------- - ---
	 1	    1	  287406 2016-JAN-21 18:51:29 A NO
	 1	    1	  287407 2016-JAN-21 18:59:45 A NO
	 1	    1	  287408 2016-JAN-21 19:00:08 A NO
	 2	    1	  287406 2016-JAN-21 18:51:29 A NO
	 2	    1	  287407 2016-JAN-21 18:59:45 A NO
	 2	    1	  287408 2016-JAN-21 19:00:08 A NO
	 3	    1	  287406 2016-JAN-21 18:51:29 A NO
	 3	    1	  287407 2016-JAN-21 18:59:45 A NO
	 3	    1	  287408 2016-JAN-21 19:00:08 A NO
	 4	    1	  287406 2016-JAN-21 18:51:29 A NO
	 4	    1	  287407 2016-JAN-21 18:59:45 A NO
	 4	    1	  287408 2016-JAN-21 19:00:08 A NO

12 rows selected.

SQL> SQL> 
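
As a side check (a sketch of mine, not part of the original investigation), the controlfile views can also be queried directly to spot archived logs in the range that have no backup record at all:

-- Sketch: list archived logs in the range with no corresponding backup record.
select a.thread#, a.sequence#, a.completion_time
from   v$archived_log a
where  a.thread# = 1
and    a.sequence# between 287406 and 287408
and    not exists (select null
                   from   v$backup_redolog b
                   where  b.thread#   = a.thread#
                   and    b.sequence# = a.sequence#)
order  by a.thread#, a.sequence#;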
The backup was started at 2016-JAN-21 14:12:10.

Notice that sequence 287407 of thread 1 is missing from the MIGRATION_KEEP backup.

RMAN> list backup of archivelog sequence 287406 thread 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233366  B  A  A DISK        2016-JAN-21 18:59:30 1       1       YES        MIGRATION_KEEP
233374  B  A  A DISK        2016-JAN-21 19:23:41 1       1       YES        ARC021THU1923

RMAN> list backup of archivelog sequence 287407 thread 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233375  B  A  A DISK        2016-JAN-21 19:23:46 1       1       YES        ARC021THU1923

RMAN> list backup of archivelog sequence 287408 thread 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233372  B  A  A DISK        2016-JAN-21 19:00:16 1       1       YES        MIGRATION_KEEP
233377  B  A  A DISK        2016-JAN-21 19:23:47 1       1       YES        ARC021THU1923


RMAN> list backup summary tag MIGRATION_KEEP;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233092  B  0  A DISK        2016-JAN-21 14:12:10 2       1       YES        MIGRATION_KEEP
233093  B  0  A DISK        2016-JAN-21 14:12:19 2       1       YES        MIGRATION_KEEP

233306  B  0  A DISK        2016-JAN-21 18:48:31 1       1       YES        MIGRATION_KEEP
233307  B  0  A DISK        2016-JAN-21 18:48:32 1       1       YES        MIGRATION_KEEP

233308  B  F  A DISK        2016-JAN-21 18:48:37 1       1       YES        MIGRATION_KEEP
233309  B  A  A DISK        2016-JAN-21 18:50:20 1       1       YES        MIGRATION_KEEP
233310  B  A  A DISK        2016-JAN-21 18:50:47 1       1       YES        MIGRATION_KEEP
233311  B  A  A DISK        2016-JAN-21 18:50:48 1       1       YES        MIGRATION_KEEP
233312  B  A  A DISK        2016-JAN-21 18:50:54 1       1       YES        MIGRATION_KEEP
233313  B  A  A DISK        2016-JAN-21 18:50:58 1       1       YES        MIGRATION_KEEP
233314  B  F  A DISK        2016-JAN-21 18:51:12 1       1       YES        MIGRATION_KEEP
233315  B  A  A DISK        2016-JAN-21 18:52:00 1       1       YES        MIGRATION_KEEP

233366  B  A  A DISK        2016-JAN-21 18:59:30 1       1       YES        MIGRATION_KEEP
233367  B  A  A DISK        2016-JAN-21 18:59:32 1       1       YES        MIGRATION_KEEP
233368  B  A  A DISK        2016-JAN-21 18:59:32 1       1       YES        MIGRATION_KEEP
233369  B  A  A DISK        2016-JAN-21 18:59:35 1       1       YES        MIGRATION_KEEP
233370  B  F  A DISK        2016-JAN-21 18:59:54 1       1       YES        MIGRATION_KEEP
233371  B  F  A DISK        2016-JAN-21 19:00:04 1       1       YES        MIGRATION_KEEP
233372  B  A  A DISK        2016-JAN-21 19:00:16 1       1       YES        MIGRATION_KEEP
233373  B  F  A DISK        2016-JAN-21 19:00:22 1       1       YES        MIGRATION_KEEP

RMAN> list backup of controlfile summary tag MIGRATION_KEEP;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233314  B  F  A DISK        2016-JAN-21 18:51:12 1       1       YES        MIGRATION_KEEP
233370  B  F  A DISK        2016-JAN-21 18:59:54 1       1       YES        MIGRATION_KEEP
233373  B  F  A DISK        2016-JAN-21 19:00:22 1       1       YES        MIGRATION_KEEP --- This CF was restored.

RMAN> 
RMAN> restore controlfile from '/rman_bkp/FRA/migration_backup/CTL_3036635614_20160121_m6qrusa4_1_1_MIGRATION_235206';
RMAN> list backup of archivelog all summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233309  B  A  A DISK        21-JAN-2016 18:50:20 1       1       YES        MIGRATION_KEEP

233365  B  A  A DISK        21-JAN-2016 18:59:29 1       1       YES        MIGRATION_KEEP
233366  B  A  A DISK        21-JAN-2016 18:59:30 1       1       YES        MIGRATION_KEEP
233367  B  A  A DISK        21-JAN-2016 18:59:32 1       1       YES        MIGRATION_KEEP
233368  B  A  A DISK        21-JAN-2016 18:59:32 1       1       YES        MIGRATION_KEEP
233369  B  A  A DISK        21-JAN-2016 18:59:35 1       1       YES        MIGRATION_KEEP
233372  B  A  A DISK        21-JAN-2016 19:00:16 1       1       YES        MIGRATION_KEEP

RMAN> list backupset 233372;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time     
------- ---------- ----------- ------------ --------------------
233372  35.84M     DISK        00:00:04     21-JAN-2016 19:00:16
        BP Key: 359665   Status: AVAILABLE  Compressed: YES  Tag: MIGRATION_KEEP
        Piece Name: /rman_bkp/FRA/migration_backup/CTL_3036635614_20160121_m5qrus9s_1_1_MIGRATION_235205
        Keep: BACKUP_LOGS        Until: 21-FEB-2016 19:00:12

  List of Archived Logs in backup set 233372
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    287408  198453187859 21-JAN-2016 18:59:44 198453194240 21-JAN-2016 19:00:08
  2    207046  198452998035 21-JAN-2016 18:51:29 198453187879 21-JAN-2016 18:59:44
  2    207047  198453187879 21-JAN-2016 18:59:44 198453193569 21-JAN-2016 19:00:05
  3    182524  198452999167 21-JAN-2016 18:51:31 198453188295 21-JAN-2016 18:59:47
  3    182525  198453188295 21-JAN-2016 18:59:47 198453194175 21-JAN-2016 19:00:08
  4    75721   198452999243 21-JAN-2016 18:51:32 198453188286 21-JAN-2016 18:59:47
  4    75722   198453188286 21-JAN-2016 18:59:47 198453194112 21-JAN-2016 19:00:08

RMAN> 
Even in the backup log file, sequence 287407 is missing.
channel C4: backup set complete, elapsed time: 00:00:30
channel C4: starting compressed archived log backup set
channel C4: specifying archived log(s) in backup set
input archived log thread=4 sequence=75720 RECID=709008 STAMP=901738292
input archived log thread=1 sequence=287406 RECID=709005 STAMP=901738289
channel C4: starting piece 1 at 2016-JAN-21 18:59:28
channel C5: finished piece 1 at 2016-JAN-21 18:59:28
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_lvqrus7u_1_1_MIGRATION_235199 tag=MIGRATION_KEEP comment=NONE
channel C5: backup set complete, elapsed time: 00:00:13
channel C5: starting compressed archived log backup set
channel C5: specifying archived log(s) in backup set
input archived log thread=2 sequence=207045 RECID=709006 STAMP=901738289
channel C5: starting piece 1 at 2016-JAN-21 18:59:29
channel C3: finished piece 1 at 2016-JAN-21 18:59:30
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_luqrus7p_1_1_MIGRATION_235198 tag=MIGRATION_KEEP comment=NONE
channel C3: backup set complete, elapsed time: 00:00:20
channel C4: finished piece 1 at 2016-JAN-21 18:59:32
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_m1qrus8g_1_1_MIGRATION_235201 tag=MIGRATION_KEEP comment=NONE
channel C4: backup set complete, elapsed time: 00:00:04
channel C5: finished piece 1 at 2016-JAN-21 18:59:32
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_m2qrus8g_1_1_MIGRATION_235202 tag=MIGRATION_KEEP comment=NONE
channel C5: backup set complete, elapsed time: 00:00:03
channel C1: finished piece 1 at 2016-JAN-21 18:59:36
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_ltqrus7p_1_1_MIGRATION_235197 tag=MIGRATION_KEEP comment=NONE
channel C1: backup set complete, elapsed time: 00:00:31
channel C2: finished piece 1 at 2016-JAN-21 18:59:36
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_m0qrus83_1_1_MIGRATION_235200 tag=MIGRATION_KEEP comment=NONE
channel C2: backup set complete, elapsed time: 00:00:15
Finished backup at 2016-JAN-21 18:59:36
released channel: C1
released channel: C2
released channel: C3
released channel: C4
released channel: C5
                               
allocated channel: C6
channel C6: SID=373 instance=1 device type=DISK

Starting backup at 2016-JAN-21 18:59:44
current log archived

backup will be obsolete on date 2016-FEB-21 18:59:52
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current control file in backup set
channel C6: starting piece 1 at 2016-JAN-21 18:59:53
channel C6: finished piece 1 at 2016-JAN-21 19:00:04
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m3qrus98_1_1_MIGRATION_235203 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:11

backup will be obsolete on date 2016-FEB-21 19:00:04
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current SPFILE in backup set
channel C6: starting piece 1 at 2016-JAN-21 19:00:04
channel C6: finished piece 1 at 2016-JAN-21 19:00:05
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m4qrus9k_1_1_MIGRATION_235204 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:01

backup will be obsolete on date 2016-FEB-21 19:00:04
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current SPFILE in backup set
channel C6: starting piece 1 at 2016-JAN-21 19:00:04
channel C6: finished piece 1 at 2016-JAN-21 19:00:05
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m4qrus9k_1_1_MIGRATION_235204 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:01

current log archived
backup will be obsolete on date 2016-FEB-21 19:00:12
archived logs required to recover from this backup will be backed up
channel C6: starting compressed archived log backup set
channel C6: specifying archived log(s) in backup set
input archived log thread=2 sequence=207046 RECID=709010 STAMP=901738785
input archived log thread=3 sequence=182524 RECID=709011 STAMP=901738788
input archived log thread=4 sequence=75721 RECID=709012 STAMP=901738788
input archived log thread=1 sequence=287408 RECID=709016 STAMP=901738808
input archived log thread=2 sequence=207047 RECID=709013 STAMP=901738806
input archived log thread=4 sequence=75722 RECID=709014 STAMP=901738808
input archived log thread=3 sequence=182525 RECID=709015 STAMP=901738808
channel C6: starting piece 1 at 2016-JAN-21 19:00:13
channel C6: finished piece 1 at 2016-JAN-21 19:00:20
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m5qrus9s_1_1_MIGRATION_235205 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:07

backup will be obsolete on date 2016-FEB-21 19:00:20
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current control file in backup set
channel C6: starting piece 1 at 2016-JAN-21 19:00:21
channel C6: finished piece 1 at 2016-JAN-21 19:00:31
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m6qrusa4_1_1_MIGRATION_235206 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:10
Finished backup at 2016-JAN-21 19:00:31
released channel: C6
Any ideas as to why the archived log was missing from the backup?

BTW, I have already deleted the backups to save space.