Skip navigation.

DBA Blogs

Breaking Down Barriers with DevOps

Pythian Group - Wed, 2016-03-16 08:53

In a recently recorded webinar, How To Deliver Faster, Better Software Releases with DevOps, Pythian’s Chief Digital Officer, Aaron Lee, and guest speaker Amy DeMartine, Senior Research Analyst at Forrester Research, discussed DevOps and the important role it plays in the race to deliver the best customer experience through faster and better software releases.

One of the webinar participants asked about the critical success factors when implementing a rapid release software model. 

The case for adopting agile operations is very compelling. Rapid release cycles can reduce costs, provide better and more stable solutions, increase customer and employee satisfaction, and most importantly, outpace competition and drive revenue.

Driving both efficiency and effectiveness of operations is easier said than done. DeMartine highlighted 7 habits of effective DevOps, but stressed that culture is the number one enemy.

She said, “This fear of change is a huge cultural gap. DevOps is at the point of cultural change, and this is one of the biggest ones that IT has to break.”

Historically, IT teams have held large and lengthy change boards, requiring a strong reliance on rigid approval processes. Combine that history with the complexity and degree of unknowns that operations faces, and it isn’t surprising that some teams are risk averse and consensus driven.

While new technologies, processes, and automation are critical to implementing rapid release models and enhancing DevOps, both DeMartine and Lee agreed that it boils down to the people, and the way Dev and Ops interact with each other.

“It’s clear what the responsibilities of a product manager are. It’s clear where the developers’ responsibilities start and stop,” said Lee. Often this leads to operations “picking up everything else” and facing an increased number of unknowns.

Part of the solution is understanding how to embrace and maximize the opportunity for change, as opposed to minimizing the occurrence of change.

Most importantly, Dev and Ops need to know and understand each other. By setting and balancing clear boundaries and expectations, each team can understand the role the other is playing, and begin to see the environment under a common goal.

For more on this topic, download the on-demand webinar below:


Categories: DBA Blogs

Converting Hortonworks Sandbox to run on Hyper-V

Pythian Group - Tue, 2016-03-15 09:58

It looks like Hortonworks recently decided to stop hosting a version of their Sandbox VM for Windows Hyper-V. I only see VirtualBox and VMware versions listed.

What if, like me, your primary learning lab machine runs Hyper-V?

Well, you can convert it fairly easily. My method is to use VirtualBox to do this.

I run VirtualBox on my Mac because it’s free, it has free conversion tools, and I usually only run 1-2 VMs on it, but my Mac isn’t my learning lab. This tip WILL work on a Windows machine that has VirtualBox installed.

Note that VirtualBox and Hyper-V may not get along well if installed on the same device, hence my using two machines to do this.

In order to convert it, here’s what you need to do.

  • Download the VirtualBox Sandbox VM here.
  • Follow Hortonworks’ instructions to import the appliance into VirtualBox.
  • Find the disk that it created by looking at the properties of the VM you just created.
  • Open a terminal and navigate to that directory.
  • From that directory, run this:

VBoxManage clonehd Hortonworks_sanbox_with_hdp_2_4_virtualbox-disk1.vmdk HDP2.4.vhd --format vhd

This process runs for a bit and creates a copy in VHD format, which you can copy onto, and run from, any Hyper-V machine.

Simply create a new Hyper-V machine, as you normally would, but instead of creating a new disk, choose this one and fire it up.

On the subject of VM config: give it access to your internal network so that you can access it via browser, and give it a couple of processors. On memory, a word of caution: when I did this with dynamic memory enabled, the VM took all of my available system memory, so you may want to limit consumption to a number that reserves some computing power for the host and any other VMs you may want to run in parallel.

After mounting and starting my new Hyper-V VM, I found that I hadn’t allocated enough RAM or processors and it was “dying” on boot, so I upped the RAM to 6 GB and the processors to 4, from 2 GB and 1 respectively.

Next up, eth0 wasn’t found on boot so I checked what Google had to say and found this article.

I edited the first file, and upon checking the second (/etc/sysconfig/network-scripts/ifcfg-eth0) I found that the MAC address was not recorded, so there was nothing to change there.
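
For reference, the usual fix on a RHEL/CentOS 6 guest looks roughly like this (a sketch; the udev rules path is my assumption of what the “first file” was, not taken from the article):

# clear the cached MAC-to-interface mappings so eth0 is re-detected on next boot
rm -f /etc/udev/rules.d/70-persistent-net.rules
# remove any stale HWADDR line from the interface config, if one is present
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
reboot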

I saved, rebooted, and watched; eth0 was found this time. Of course the VirtualBox add-in failed at boot, but that isn’t a big deal.

When the VM came up, it instructed me to connect to 127.0.0.1:8888, which didn’t work. I looked up the IP assigned by my router, put that IP (without a port) into my browser, and was able to connect without issue.

Happy learning!

Categories: DBA Blogs

There’s more to IT than just coding

Pythian Group - Tue, 2016-03-15 09:00

 

March 2, 2016 was officially the midpoint of the Technovation Challenge in Ottawa. The 2016 competition started on Sunday January 17, at Carleton University, where Anar Simpson, Global Ambassador for Technovation, kicked off the program.

Technovation is a global technology entrepreneurship competition for young women that sets out to prove that there’s more than just coding in the technology sector. The program is designed to inspire and educate young women to pursue a career in technology by showing them all aspects of starting a technology business.

Regional Technovation chapters contact local high schools to introduce the program and recruit teams of high school girls. Thanks to the efforts of Jennifer Francis, chair of Women Powering Technology, an Ottawa chapter of Technovation started up in January 2015. The pilot was such a success that participation in 2016 has doubled, with over 100 high school and middle school girls participating and 30 female mentors from the Ottawa tech sector.

In addition to IBM, Shopify, and L-Spark, Pythian is a proud sponsor of the 2016 competition. Having just announced the Pythia program, it was a natural fit for Pythian to sponsor Technovation. The Pythia program focuses on increasing the percentage of talented women who work at Pythian, especially in tech roles. It also encourages and supports the participation of girls and women in STEM fields, which is exactly what Technovation is all about.

The support of the sponsors allows the teams to meet weekly at the sponsors’ facilities. Here the teams, along with their mentors, guest speakers, and an instructor from Carleton University’s Technology Innovation Management (TIM) program, focus on identifying a problem facing their community, creating an App to solve the problem, coding the App, building a company, and pitching their business plan to experts in the field! It’s pretty impressive considering the high school girls squeeze this competition in on top of their day-to-day school classes and after-school activities. They are all committed and dedicated – a great sign of future leaders!

“My views of working in the technology sector have changed, since it feels like something anyone can be a part of, whereas it was a distant idea before,” said 17-year-old Doris Feng, a student at Merivale High School and member of the team Women With Ambition. “I came in with the notion that we would be coding during the first week, but it turns out much of the development takes place off screen, with many hours dedicated to brainstorming, surveying users, drawing a paper prototype, and mulling over the ideas with team members.”

I couldn’t have said it better, Doris! This is exactly what happens in the real world.

Technovation is a program designed to inspire women to pursue the entrepreneurial spirit in all of us. For more information on Technovation and starting your own local chapter, visit Technovation online. Globally, Technovation is sponsored by Adobe Foundation, Google, Verizon, CA Technologies, Intel and Oracle, in partnership with UN Women, UNESCO and MIT Media Lab.

Categories: DBA Blogs

How to run OpenTSDB with Google Bigtable

Pythian Group - Mon, 2016-03-14 11:49

In a previous post (OpenTSDB and Google Cloud Bigtable) we discussed OpenTSDB, an open source distributed database specifically designed for storing timeseries data. We also explained how OpenTSDB relies on Apache HBase for a reliable and scalable data backend. However, deployment and administration of an HBase cluster is not a trivial task, as it requires a full Hadoop setup. This means that it takes a big data engineer (or better, a team of them) to plan the cluster sizing, provision the machines, set up the Hadoop nodes, configure all services, and tune them for optimal performance. As if this weren’t enough, operations teams have to constantly monitor the cluster, deal with hardware and service failures, perform upgrades, back up regularly, and handle a ton of other tasks that make maintenance of a Hadoop cluster and OpenTSDB a challenge for most organizations.

With the release of Google Bigtable as a cloud service and its support for the HBase API, it was obvious that if we managed to integrate OpenTSDB with Google Bigtable, we would enable more teams to have access to the powerful functionality of OpenTSDB by removing the burden from maintaining an HBase cluster.

Nevertheless, integration of OpenTSDB with Bigtable was not as seamless as dropping a few jars in its release directory. This is because the OpenTSDB developers went over and above the standard HBase libraries by implementing their very own asynchbase library. Asynchbase is a fully asynchronous, non-blocking, thread-safe, high-performance HBase API. And no one can put it better than the asynchbase developers themselves, who state that ‘This HBase client differs significantly from HBase’s client. Switching to it is not easy as it requires one to rewrite all the code that was interacting with any HBase API.’

This meant that integration with Google Bigtable required OpenTSDB to switch back to the standard HBase API. We saw the value of such an effort here at Pythian and set about developing this solution.

The asyncbigtable library

Today, we are very happy to announce the release of the asyncbigtable library. The asyncbigtable library is a 100% compatible implementation of the great asynchbase library that can be used as a drop-in replacement, enabling OpenTSDB to use Google Bigtable as a storage backend.

Thanks to support from the OpenTSDB team, the asyncbigtable code is hosted in the OpenTSDB GitHub repository.

Challenges

To create asyncbigtable we had to overcome two big challenges. The first was that OpenTSDB assumes that the underlying library (until now asynchbase) performs asynchronous, non-blocking operations. The standard HBase API, on the other hand, only supports synchronous, blocking calls. As a workaround, we used the BufferedMutator implementation, which collects all Mutation operations in a buffer and performs them in batches, allowing mutations with very low latency.

The second challenge stemmed from the fact that the OpenTSDB project has a very limited set of jar dependencies, which are explicitly defined in Makefiles. Contrary to this spartan approach, the HBase and Bigtable client libraries have a significant number of transitive dependencies. Since adding those dependencies one by one to the OpenTSDB build process would complicate its dependency management, we decided to package all asyncbigtable dependencies in an uber-jar using the Maven assembly plugin. Therefore, building OpenTSDB with asyncbigtable support is now as simple as downloading a single beefy jar.
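
For anyone who wants to build that jar themselves, the process should be roughly the standard Maven flow (a sketch; the repository URL is assumed from the OpenTSDB GitHub organization mentioned above):

git clone https://github.com/OpenTSDB/asyncbigtable.git
cd asyncbigtable
mvn clean package   # the assembly plugin bundles all dependencies into one uber-jar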

Build steps

Before you start

Before you build OpenTSDB with Google Bigtable support, you must complete the following required steps:

  1. Create a Google Bigtable cluster (https://cloud.google.com/bigtable/docs/creating-cluster)
  2. Install HBase shell with access to the Google Bigtable cluster (https://cloud.google.com/bigtable/docs/installing-hbase-shell)
  3. Download and install the required tools for compiling OpenTSDB from source (http://opentsdb.net/docs/build/html/installation.html#compiling-from-source)

Build and run OpenTSDB
  1. Clone and build the modified source code from the Pythian github repository:

git clone -b bigtable git@github.com:pythian/opentsdb.git
cd opentsdb
sh build-bigtable.sh

  2. Create OpenTSDB tables

OpenTSDB provides a script that uses the HBase shell to create its tables. To create the tables, run the following command:
env COMPRESSION=NONE HBASE_HOME=/path/to/hbase-1.1.2 \
./src/create_table.sh

  3. Run OpenTSDB

export HBASE_CONF=/path/to/hbase-1.1.2/conf
mkdir -p <tmp_dir>
./build/tsdb tsd --port=4242 --staticroot=build/staticroot \
--cachedir=<tmp_dir>

Future work

By all means, our work on asyncbigtable does not stop here. We are putting great effort towards improving the library to achieve the high quality standards of the rest of the OpenTSDB code. Our first priority is to test the library against the most common real-world scenarios and achieve the highest quality. In the future, we plan to benchmark the performance of OpenTSDB with Bigtable and compare how it competes against HBase.

We are also working on building a true asynchronous implementation of the asyncbigtable library by integrating deeper with the Google Bigtable API.

Acknowledgements

We would like to thank the OpenTSDB developers (Benoît Sigoure and Chris Larsen) for their brilliant work in building such great software and for embracing the asyncbigtable library. Their insights and code contributions helped us deal with some serious issues. Also, we would like to thank the Google Cloud Bigtable team because they expressed genuine interest in this project and they were very generous in providing us with cloud infrastructure and excellent support.

Categories: DBA Blogs

#EMd360 … OEM health checks made easy

DBASolved - Mon, 2016-03-14 11:01

Oracle Enterprise Manager 12c is a great tool! Now that 13c is out, it is getting even better. This post, however, is not really about Oracle Enterprise Manager, but rather about a quick and simple health check tool that I’ve put together. With the help of some really cool co-workers (Carlos Sierra and Mauro Pagano), I’ve put together a small diagnostic tool called EMd360.

EMd360 stands for Enterprise Manager d360. The concept behind this tool is just like other tools that have been released with the 360 concept (edb360 and sqld360); to provide a quick and easy approach to checking an environment. As with edb360 and sqld360, EMd360 is a completely free tool for anyone to use.

So, why is there a need for EMd360? It is quite simple: there are so many things that go into OEM, and you get so much out of OEM, that it is overwhelming. As a consultant, I’ve been asked to review a lot of OEM architectures and the associated performance. A lot of this information is in the OMR, and often I’m using other tools like REPVFY and OMSVFY, plus a handful of scripts. I’ve decided to make my life (and hopefully yours) a bit easier by building EMd360.

The first (base) release of EMd360 is now live on GitHub (https://github.com/dbasolved/EMd360.git). Go and get it! Test it out!

Download

If you are interested in trying out EMd360, you can download it from GitHub.

Instructions

  • Download EMd360 from GitHub as a zip file.
  • Unzip EMd360-master.zip on the OMR server and navigate to the directory where you unzipped it.
  • Connect to the OMR using SQL*Plus and execute @emd360.sql.

Options

The @emd360.sql script takes two variables. You will be prompted for them if they are not passed on the SQL command line.

Variable 1 – Server name of the Oracle Management Service (without domain names)
Variable 2 – Oracle Management Repository name (database SID)

Example:

$ sqlplus / as sysdba
SQL> @emd360 pebble oemrep

Let me know your thoughts and if there is something you would like to see in it. Every environment is different, and there may be something you are looking for that is not provided. Let me know via email or blog comment and I’ll try to get it added in the next release.

Enjoy!!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs


Bug in Ointment: ORA-600 in Online Datafile Move

Pythian Group - Mon, 2016-03-14 08:02

Instead of ‘fly in the ointment’, I have used ‘bug in ointment’ because in this prolonged Australian summer, my backyard is full of bugs (to the sheer delight of my bug-loving son, while causing much anxiety among the rest of us). When your backyard is full of bugs and you get bugs in a database, it’s only natural to customize the idioms.

Oracle 12c has been warming up the hearts of database aficionados in various ways with its features. One of the celebrated features is online datafile moving and renaming. Much has been written about it, and suffice it to say that we no longer need any downtime in order to move, rename, or copy data files. It’s an online operation with zero downtime, incurring only a slight performance overhead.
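
For context, the move itself is a single online statement along these lines (the file paths here are purely illustrative):

SQL> ALTER DATABASE MOVE DATAFILE '/u01/oradata/pdb1/users01.dbf'
  2  TO '/u02/oradata/pdb1/users01.dbf';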

I was playing with this feature on my test system with Oracle 12.1 on OEL 6, and when moving a datafile in a pluggable database I got this error:

ORA-600 [kpdbGetOperLock-incompatible] from ALTER PLUGGABLE DATABASE .. DATAFILE ALL ONLINE

Well, I tried searching for this error using the ORA-600 lookup tool, but it didn’t turn up anything and simply informed me:

An Error document for ORA-600 [kpdbgetoperlock-incompatible] is not registered with the tool.

Digging further in My Oracle Support pulled out the following associated bug:

Bug 19329654 – ORA-600 [kpdbGetOperLock-incompatible] from ALTER PLUGGABLE DATABASE .. DATAFILE ALL ONLINE (Doc ID 19329654.8)

The good news was that the bug was fixed in the 12.1.0.2.1 (Oct 2014) Database Patch Set Update. And it’s true: after applying this PSU, everything was hunky-dory.

Categories: DBA Blogs

Oracle Mobile Cloud Service Update (v1.2): New Features and Enhancements

Oracle Mobile Cloud Service (MCS) provides the services you need to develop a comprehensive strategy for mobile app development and delivery. It provides everything you need to establish an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Rename all exported files to their original names after exporting from Oracle database using Oracle SQL Developer’s Shopping Cart

Ittichai Chammavanijakul - Sun, 2016-03-13 14:08

If you’re searching for “export Oracle BLOB”, the article by Jeff Smith titled “Exporting Multiple BLOBs with Oracle SQL Developer” is usually at the top of the search results. SQL Developer’s Shopping Cart feature lets you export BLOBs out of the database without writing scripts. I won’t go into detail, as Jeff already explained well in his post what it is and how to use it. One main issue with this approach is that sometimes you want the actual file names instead of the exported names. This can be overcome easily using a post-run script. I wrote this simple script in Python as it suits name manipulation well. (I’m not a Python expert, but it is one of the programming languages that is very easy to learn.)

The script simply reads from the FND_LOBS_DATA_TABLE.ldr file, which contains information about the original filename and the new exported filename (in the format FND_LOBS_DATA_TABLExxxxx).

# Sample data (in the actual file all records are on a single line)
1889399|"CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL}
1889403|"PriceList_quotation (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL}
1889807|"MS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}

# 1st = File ID (Media ID)
# 2nd = Original File Name
# 4th = Exported File Name
# The remaining information is not relevant.

The script splits the single-line content into multiple records using the string {EOL}. It then splits each record into columns by position. The information we’re interested in is in the 1st, 2nd, and 4th positions. Finally, it calls the operating system to rename each file.

The content of the script rename.py is as follows:


from sys import argv
import string
import shutil
import os
# Script to rename exported BLOB files from Oracle SQL Developer tool
#
# Pre-requisite: Python 3.x https://www.python.org/downloads/
#
# Execution:
# (1) Copy the script to the folder containing mapping file - "FND_LOBS_DATA_TABLE.ldr" and all exported files.
# (2) Execute the script as follows
#      C:\> cd deploy
#      C:\> rename.py FND_LOBS_DATA_TABLE.ldr

# Take parameters
script, filename = argv
# Open file in read-only mode
file = open(filename, 'r', encoding="utf8")

# Sample data - everything is stored in one line.
# 1889399|"EPR - CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889403|"PriceList_quotation_murata (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889807|"MGS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}
# 1st = File ID (Media ID)
# 2nd = Actual/Original File Name
# 3rd = File Type
# 4th = Exported File Name
# The remaining = Not relevant

# First, split each by string {EOL} 
splitted_line = file.read().split('{EOL}')

# For each split record, break it into fields separated by |
for s in splitted_line:
    # Split by |
    splitted_word = s.split('|')

    # If reaching the last line, which contains only [''], exit the loop.
    if len(splitted_word) == 1:
        break

    # The original file name is in the 2nd field (list position #1).
    # Strip out double quotes and leading & trailing spaces if any.
    orig_name = splitted_word[1].strip('"').strip()

    # The exported file name is in the 4th field (list position #3).
    exported_name = splitted_word[3].strip()  # Strip out leading & trailing spaces if any

    # We plan to prefix each file with its unique FILE_ID.
    # This is to avoid file name collisions if two or more files have the same name.
    # Also, strip out leading & trailing spaces if any.
    file_id = splitted_word[0].strip()

    # Rename the file.
    # Adjust the new file name according to your needs.
    os.rename(exported_name, file_id + '_' + orig_name)

After unzipping deploy.zip, which is the default export file from SQL Developer, copy rename.py into the unzipped folder.

C:\> cd deploy
C:\> dir
02/23/2016 07:57 PM 2,347 rename.py
02/23/2016 07:57 PM 34,553 export.sql
02/23/2016 07:52 PM 1,817 FND_LOBS.sql
02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
02/23/2016 07:57 PM 78,178 FND_LOBS_DATA_TABLE10fa4165-0153-1000-8001-0a2a783f1605.ldr
02/23/2016 07:57 PM 27,498 FND_LOBS_DATA_TABLE10fa4339-0153-1000-8002-0a2a783f1605.ldr
02/23/2016 07:57 PM 17,363 FND_LOBS_DATA_TABLE10fa43c5-0153-1000-8003-0a2a783f1605.ldr
02/23/2016 07:57 PM 173,568 FND_LOBS_DATA_TABLE10ff189d-0153-1000-8219-0a2a783f1605.ldr
:
:

C:\> rename.py FND_LOBS_DATA_TABLE.ldr

C:\> dir
02/23/2016 07:57 PM 2,347 rename.py
02/23/2016 07:57 PM 34,553 export.sql
02/23/2016 07:52 PM 1,817 FND_LOBS.sql
02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
02/23/2016 07:57 PM 78,178 689427_DATACOM SOUTH ISLAND LTD.htm
02/23/2016 07:57 PM 27,498 698623_lincraft.htm
02/23/2016 07:57 PM 17,363 772140_275131.htm
02/23/2016 07:57 PM 173,568 3685533_RE 新办公室地址.MSG
:
:


Categories: DBA Blogs

Compression -- 3 : Index (Key) Compression

Hemant K Chitale - Sun, 2016-03-13 03:34
Unlike Table Compression, which uses deduplication of column values, Index Compression is based on the keys. Key Compression is also called Prefix Compression.

This relies on eliminating repeated leading key values. Thus, for example, if the leading column of a composite index has frequently repeated values, then, because an index is always an organised (sorted) structure, we find the repeated values appearing as if "sequentially". Key Compression can eliminate the repeated values.

Thus, it becomes obvious that Index Key Compression is usable for
a.  A Composite Index of 2 or more columns
b.  Repeated appearances of values in the *leading* key columns
c.  Compression defined for a maximum of n-1 columns  (where n is the number of columns in the index).  That is, the last column cannot be compressed.
Note that a Non-Unique Index automatically has the ROWID appended to it, so Key Compression can be applied to all the columns defined.
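
For example, since a Non-Unique Index carries the ROWID as an extra differentiator, compression could be specified on all three columns of such an index (a sketch, separate from the comparison that follows):

PDB1@ORCL> create index target_data_ndx_all_comp on
  2 target_data (owner, object_type, object_name) compress 3;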

Let's look at a few examples.

Starting with creating a fairly large table (that is a multiplied copy of DBA_OBJECTS)

PDB1@ORCL> create table target_data as select * from source_data where 1=2;

Table created.

PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL>
PDB1@ORCL> desc target_data
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(23)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NUMBER
EDITION_NAME VARCHAR2(128)
SHARING VARCHAR2(13)
EDITIONABLE VARCHAR2(1)
ORACLE_MAINTAINED VARCHAR2(1)

PDB1@ORCL>


What composite index is a good candidate for Key Compression ?
*Not* an Index that begins with OBJECT_ID as that is a Unique value.

Let's compare two indexes (compressed and non-compressed) on (OWNER, OBJECT_TYPE, OBJECT_NAME).

PDB1@ORCL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 2;

Index created.

PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

PDB1@ORCL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
5629

PDB1@ORCL>


PDB1@ORCL> drop index target_data_ndx_1_comp
2 /

Index dropped.

PDB1@ORCL> create index target_data_ndx_2_nocomp on
2 target_data (owner, object_type, object_name) ;

Index created.

PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_NOCOMP');

PL/SQL procedure successfully completed.

PDB1@ORCL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_2_NOCOMP'
4 /

LEAF_BLOCKS
-----------
7608

PDB1@ORCL>


Note the "compress 2" specification for the first index.  That is an instruction to compress based on the leading 2 columns.
Thus, the compressed index is 5,629 blocks but the normal, non-compressed index is 7,608 blocks.  We make a gain of 26% in the index size.

Why did I choose OWNER, OBJECT_TYPE as the leading columns? Because I expected a high level of repetition in these columns.
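
A quick way to verify that expectation before choosing the compression prefix is to count the distinct values (a sketch):

PDB1@ORCL> select count(distinct owner), count(distinct object_type)
  2 from target_data;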


Note : I have not explored Advanced Index Compression available in 12.1.0.2

Categories: DBA Blogs

Changes to DBCA Patch Application Behaviour Causes PDB Cloning to Fail

Pythian Group - Fri, 2016-03-11 07:23
Background

A test upgrade from 11g to 12c, with conversion to a container and pluggable database, recently pointed out some important 12c behaviour differences with respect to the DBCA and whether or not it automatically applies PSUs installed in the Oracle Home.

The original objective was to take an existing 11.2.0.4 database and upgrade it to 12.1.0.2 and convert it to a PDB.

From a high level the procedure was:

  • Install the Oracle 12.1.0.2 software and apply the latest PSU (in this case the JAN2016 PSU).
  • Create a new CDB to house the upgraded database.
  • Upgrade the 11.2.0.4 database to 12.1.0.2 in-place using the DBUA.
  • Convert the upgraded 12c database to a PDB (via the clone through DB link method; a sketch of the link setup follows this list).
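
For reference, a minimal sketch of the clone-link setup, run in the target CDB (the user, password, and TNS alias here are hypothetical; in 12.1 the source non-CDB typically must be open read-only for the clone):

SQL> CREATE DATABASE LINK clone_link
  2  CONNECT TO clone_user IDENTIFIED BY clone_pass USING 'MY_PROD';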

Seemed pretty straightforward. However, as part of the PDB conversion (running the noncdb_to_pdb.sql script), the following error was encountered:

SQL> DECLARE
  2     threads pls_integer := &&1;
  3  BEGIN
  4     utl_recomp.recomp_parallel(threads);
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SQLPATCH
ORA-00600: internal error code, arguments: [kql_tab_diana:new dep], [0x0CF59D0B8], [0x7F1525B91DE0], [1], [2], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_UTILITY", line 1294
ORA-06512: at line 1

 

The noncdb_to_pdb.sql script can only be run once, so at this point the PDB conversion has failed and must be restarted. But first we must understand what went wrong and what steps we missed.

Root Cause: DBCA no longer automatically applies PSUs

It’s obvious from the ORA-04045 error that the issue is related to patching. But the question remains: what was missed in the process, since the 12c Oracle Home was fully patched before creating or upgrading any databases?

The problem is that DBAs have perhaps become complacent with respect to PSU application after creating databases. With Oracle Database 11g, whenever we created a database via the DBCA, the latest PSU was automatically applied. It didn’t matter whether we created the database from a template or used a custom install. Regardless of which DBCA method was used, after DB creation we’d see something similar to:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by 2;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU 11.2.0.4.160119            04-MAR-16 02.43.52.292530 PM

SQL>

 

Clearly the latest PSU (JAN2016 in this case) installed in the Oracle Home was applied automatically by the DBCA. And of course this is reflected in the official README documentation (in this example for DB PSU patch 21948347 [JAN2016] – requires a My Oracle Support login to view) which states:

There are no actions required for databases that have been upgraded or created after installation of PSU 11.2.0.4.160119.

 

However this functionality has completely changed with Oracle Database 12c! The change in behaviour is documented in My Oracle Support (MOS) Note: “12.1:DBCA (Database Creation) does not execute “datapatch” (Doc ID 2084676.1)” which states:

DBCA does not execute datapatch in Oracle 12.1.0.X. The solution is to apply the SQL changes manually after creating a new Database

 

Similarly the 12c JAN2016 DB PSU (patch 21948354) README documentation states:

You must execute the steps in Section 3.3.2, “Loading Modified SQL Files into the Database” for any new or upgraded database.

 

This is a significant change in behaviour and is the root cause of the PDB creation error!
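
In practice, this means the DBA must run datapatch manually after every 12c database creation or upgrade, for example:

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose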

 

Resolving the “ORA-00600 [kql_tab_diana:new dep]” error

Back to the CDB creation error, the first logical place to check whenever experiencing plug-in or PDB creation errors is the PDB_PLUG_IN_VIOLATIONS view:

SQL> CREATE PLUGGABLE DATABASE MY_PROD FROM NON$CDB@clone_link FILE_NAME_CONVERT=('/u01/app/oracle/oradata/MY_PROD','/u01/app/oracle/oradata/CPRD1/MY_PROD');

Pluggable database created.

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                  ACTION
-------- --------- --------- ---------------------------------------- ----------------------------------------
MY_PROD  ERROR     PENDING   PDB plugged in is a non-CDB, requires no Run noncdb_to_pdb.sql.
                             ncdb_to_pdb.sql be run.

MY_PROD  WARNING   PENDING   CDB parameter compatible mismatch: Previ Please check the parameter in the curren
                             ous '11.2.0.4.0' Current '12.1.0.2.0'    t CDB

MY_PROD  WARNING   PENDING   Service name or network name of service  Drop the service and recreate it with an
                             MY_PROD in the PDB is invalid or conflic  appropriate name.
                             ts with an existing service name or netw
                             ork name in the CDB.


SQL>

 

Nothing there is really concerning yet. It’s pretty much what we’d expect to see at this point. However, taking the next step in the PDB clone process encounters the error:

SQL> ALTER SESSION SET CONTAINER=MY_PROD;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
...
...
SQL> DECLARE
  2     threads pls_integer := &&1;
  3  BEGIN
  4     utl_recomp.recomp_parallel(threads);
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SQLPATCH
ORA-00600: internal error code, arguments: [kql_tab_diana:new dep],
[0x062623070], [0x7FB582065DE0], [1], [2], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_UTILITY", line 1294
ORA-06512: at line 1


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

Now looking in the PDB_PLUG_IN_VIOLATIONS view the error is evident:

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                            ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD  ERROR     PENDING   SQL patch ID/UID 22139226/19729684 (Database PSU 1 Call datapatch to install in the PDB or the CDB
                             2.1.0.2.160119, Oracle JavaVM Component (Jan2016))
                             : Installed in the PDB but not in the CDB.

MY_PROD  ERROR     PENDING   PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                             b.sql be run.

MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                              the PDB is invalid or conflicts with an existing  te name.
                             service name or network name in the CDB.

MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                             0.4.0' Current '12.1.0.2.0'

MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                             n NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                              NULL. CDB installed version 12.1.0.2.0.


6 rows selected.

SQL>

 

At this point since the CDB clone has failed and since the noncdb_to_pdb.sql script cannot be run twice, the new PDB should be dropped. Resolving the root cause of the error by patching and then repeating the clone is necessary.

Applying the PSU

Fortunately the fix is conceptually simple: apply the PSU patch into the database. Though the catch is that I actually had installed the “Combo of 12.1.0.2.160119 OJVM PSU and 12.1.0.2.160119 DB PSU (Jan 2016)” bundle patch (22191659) into the Oracle Home. This combo includes the DB PSU (patch 21948354) plus the OJVM PSU (patch 22139226). And while the DB PSU can be applied without outage, the OJVM patch cannot. Instead for the OJVM patch or the combo, the CDB and the PDBs must all be restarted in UPGRADE mode.

Restarting in UPGRADE mode is fine in this case study where the CDB was just recently created to house the newly upgraded PDB. But if trying to plug the new database into an existing CDB with other applications running in production, shutting down the entire CDB to run datapatch may cause a problem.

Following the README documentation for just the JAN2016 DB PSU (patch 21948354) doesn’t help. It states that the patch can be applied with the database and pluggable databases open (section “3.3.2 Loading Modified SQL Files into the Database“). However, because I’ve installed the combo patch into the Oracle Home, trying to patch with the database open will cause the patching to fail:

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 15:45:27 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
  Installed in the binary registry only
Bundle series PSU:
  ID 160119 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    The following patches will be applied:
      22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))

Error: prereq checks failed!
  patch 22139226: The pluggable databases that need to be patched must be in upgrade mode
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Fri Mar  4 15:45:52 2016
$

 

The solution to this error is to start the CDB and PDBs in UPGRADE mode (as per the OJVM patch documentation) and then re-run datapatch:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 2097152000 bytes
Fixed Size                  2926320 bytes
Variable Size             603982096 bytes
Database Buffers         1476395008 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open upgrade;

Pluggable database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 15:50:59 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5137_2016_03_04_15_50_59/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
  Installed in the binary registry only
Bundle series PSU:
  ID 160119 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    The following patches will be applied:
      22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))

Installing patches...
Patch installation complete.  Total patches installed: 8

Validating logfiles...
Patch 22139226 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22139226/19729684/22139226_apply_CPRD1_CDBROOT_2016Mar04_15_51_23.log (no errors)
Patch 21948354 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_CDBROOT_2016Mar04_15_51_24.log (no errors)
Patch 22139226 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22139226/19729684/22139226_apply_CPRD1_PDBSEED_2016Mar04_15_51_28.log (no errors)
Patch 21948354 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_PDBSEED_2016Mar04_15_51_29.log (no errors)
SQL Patching tool complete on Fri Mar  4 15:51:31 2016
$

 

Now retrying the CDB cloning process:

SQL> ALTER SESSION SET CONTAINER=MY_PROD;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
...
...

SQL> alter session set container = "&pdbname";

Session altered.

SQL>
SQL> -- leave the PDB in the same state it was when we started
SQL> BEGIN
  2    execute immediate '&open_sql &restricted_state';
  3  EXCEPTION
  4    WHEN OTHERS THEN
  5    BEGIN
  6      IF (sqlcode <> -900) THEN
  7        RAISE;
  8      END IF;
  9    END;
 10  END;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> WHENEVER SQLERROR CONTINUE;
SQL> ALTER PLUGGABLE DATABASE MY_PROD OPEN;

Warning: PDB altered with errors.

SQL> connect / as sysdba
Connected.
SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                            ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD  ERROR     RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                             b.sql be run.

MY_PROD  ERROR     PENDING   PSU bundle patch 160119 (Database Patch Set Update Call datapatch to install in the PDB or the CDB
                              : 12.1.0.2.160119 (21948354)): Installed in the C
                             DB but not in the PDB.

MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                              the PDB is invalid or conflicts with an existing  te name.
                             service name or network name in the CDB.

MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                             n NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                              NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                             0.4.0' Current '12.1.0.2.0'


6 rows selected.

SQL>

 

Note that the first time, the error was related to the OJVM PSU patch and stated that the PDB was patched but the CDB was not. Now, after patching the CDB, the error message states that the DB PSU patch is installed in the CDB but not in the PDB.

Again the solution is to run datapatch one more time. Fortunately, since we’re now only patching a PDB, we no longer need to worry about starting the CDB and PDBs in UPGRADE mode to apply the OJVM patch; the OJVM patch does not apply to the PDBs. Hence we can patch successfully with both the CDB and PDBs open:

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 16:19:06 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_9245_2016_03_04_16_19_06/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
  Installed in binary and CDB$ROOT PDB$SEED MY_PROD
Bundle series PSU:
  ID 160119 in the binary registry and ID 160119 in PDB CDB$ROOT, ID 160119 in PDB PDB$SEED

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    Nothing to apply
  For the following PDBs: MY_PROD
    Nothing to roll back
    The following patches will be applied:
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 21948354 apply (pdb MY_PROD): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_MY_PROD_2016Mar04_16_19_31.log (no errors)
SQL Patching tool complete on Fri Mar  4 16:19:32 2016
$

 

And finally the cloned PDB can be opened successfully:

SQL> ALTER PLUGGABLE DATABASE MY_PROD CLOSE;

Pluggable database altered.

SQL> ALTER PLUGGABLE DATABASE MY_PROD OPEN;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MY_PROD                        READ WRITE NO

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                            ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD  ERROR     RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                             b.sql be run.

MY_PROD  ERROR     RESOLVED  PSU bundle patch 160119 (Database Patch Set Update Call datapatch to install in the PDB or the CDB
                              : 12.1.0.2.160119 (21948354)): Installed in the C
                             DB but not in the PDB.

MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                              the PDB is invalid or conflicts with an existing  te name.
                             service name or network name in the CDB.

MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                             n NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                              NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                             0.4.0' Current '12.1.0.2.0'


6 rows selected.

SQL>

The warnings marked as “PENDING” can be safely ignored.

Conclusion

What started out as an issue when cloning a non-CDB into a PDB led to some learning about patching with Oracle Database 12c.

The most important take-away is that Oracle Database 12c introduces a change in behaviour when it comes to patch application through the DBCA. This change is well documented in both the patch and MOS documents, so if a DBA reads through the documentation thoroughly, they won’t have a problem. However, if the DBA is used to doing things the “old way” and only skims the documentation, they may unexpectedly get caught by errors such as the ORA-00600 encountered when creating a PDB through cloning.

References

My Oracle Support (MOS) Documents:

  • 12.1:DBCA(Database Creation) does not execute “datapatch” (Doc ID 2084676.1)
  • How to Convert Non PDB to PDB Database in 12c – Testcase (Doc ID 2012448.1)
  • How to migrate an existing pre12c database(nonCDB) to 12c CDB database ? (Doc ID 1564657.1)
  • Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA (Doc ID 1516557.1)


Categories: DBA Blogs

Centralized authentication with Red Hat Directory Server for Linux systems

Pythian Group - Thu, 2016-03-10 15:14

User management on Linux systems can be tedious, and once you add in more than 10 systems, chances are it is going to take a good amount of time to manage user accounts on all systems individually.

There are various tools available to overcome this, and all of these use LDAP in some way.

The same goes for Red Hat Directory Server, Red Hat’s LDAP-based directory server for centralized user management. Though I primarily demonstrate integration of Red Hat Directory Server with Linux systems here, it can be used on any system that supports LDAP authentication.

You can find the official Red Hat Directory Server installation guide here.

For our test scenario I used two RHEL 5 servers: server101, the Red Hat Directory Server, and server201, the client.

For RHEL-based systems you need to make sure that you are subscribed to the RHDS repo for installing Red Hat Directory Server. If you are using CentOS or other derivatives, you can use 389 Directory Server, which is the upstream project for Red Hat Directory Server.
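
On CentOS, the equivalent install would be roughly as follows (a sketch; the 389-ds package name comes from the 389 project and assumes the EPEL repository is already configured):

yum install -y 389-ds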

Once you have the prerequisites ready you can start with the installation.

Installing Red Hat Directory Server

I have designated server101 as Red Hat Directory Server.

Below are the steps to install the packages required for Red Hat Directory Server.

[root@server101 ~]# yum install redhat-ds -y

 

Once the installation is complete we can move to configuring Red Hat Directory Server.

 

Configuring Red Hat Directory Server

 

[root@server101 ~]# setup-ds-admin.pl

Once you run this command you will be prompted by the setup script for inputs, which are mostly straightforward.

But there are a few things that need to be taken care of before we proceed with the configuration.

We want to run the ldap service as the ldap user, so create the ldap user and group if they are not already there (a sketch follows).
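
A minimal sketch of that user and group creation (the nologin shell is my choice; adjust to your policy):

# create the ldap group and a non-login ldap user to own the service
groupadd ldap
useradd -g ldap -s /sbin/nologin ldap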

Then open the ports below on your firewall/iptables so that the directory server can work properly (a sample iptables sketch follows this list).

  • 389 for LDAP service
  • 636 for secure LDAP service
  • 9830 for directory server admin console connectivity
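
A minimal iptables sketch for those ports (RHEL 5 style; verify rule ordering against your existing chain):

# open the LDAP, LDAPS, and admin console ports
iptables -I INPUT -p tcp --dport 389 -j ACCEPT
iptables -I INPUT -p tcp --dport 636 -j ACCEPT
iptables -I INPUT -p tcp --dport 9830 -j ACCEPT
service iptables save   # persist the rules across reboots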

You should also increase the number of file descriptors, as it can help Red Hat Directory Server access files more efficiently; raising the maximum number of file descriptors the kernel can allocate can also improve file access speeds.

  • First, check the current limit for file descriptors in /proc/sys/fs/file-max.
  • If the setting is lower than 64000, edit the /etc/sysctl.conf file and set the fs.file-max parameter to 64000 or higher.
  • Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file, adding the following entry (a combined sketch follows this list):
    *        -        nofile        8192
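
Put together, the tuning steps above look roughly like this:

cat /proc/sys/fs/file-max                              # check the current kernel limit
echo "fs.file-max = 64000" >> /etc/sysctl.conf         # raise it if it was below 64000
sysctl -p                                              # apply without a reboot
echo "*  -  nofile  8192" >> /etc/security/limits.conf # raise the per-user open files limit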

 

After this we can start configuring Red Hat Directory Server with the setup-ds-admin.pl command.

Once it is executed, it will prompt for inputs, which are mostly self-explanatory, as shown below. We will mostly accept the default values, as this is a fresh installation; we will only change the system user and group that run the ldap service from nobody to the ldap user we created earlier. Don’t forget to make a note of the passwords for admin and Directory Manager, as they will be used to log in to the Admin Console.

 

[root@server101 ~]# setup-ds-admin.pl -k

==============================================================================
This program will set up the Red Hat Directory and Administration Servers.

It is recommended that you have "root" privilege to set up the software.
Tips for using this program:
- Press "Enter" to choose the default and go to the next screen
- Type "Control-B" then "Enter" to go back to the previous screen
- Type "Control-C" to cancel the setup program

Would you like to continue with set up? [yes]: yes

==============================================================================
BY SETTING UP AND USING THIS SOFTWARE YOU ARE CONSENTING TO BE BOUND BY
AND ARE BECOMING A PARTY TO THE AGREEMENT FOUND IN THE
LICENSE.TXT FILE. IF YOU DO NOT AGREE TO ALL OF THE TERMS
OF THIS AGREEMENT, PLEASE DO NOT SET UP OR USE THIS SOFTWARE.

Do you agree to the license terms? [no]: yes

==============================================================================
Your system has been scanned for potential problems, missing patches,
etc. The following output is a report of the items found that need to
be addressed before running this software in a production
environment.

Red Hat Directory Server system tuning analysis version 10-AUGUST-2007.

NOTICE : System is i686-unknown-linux2.6.18-308.el5 (1 processor).

WARNING: 502MB of physical memory is available on the system. 1024MB is recommended for best performance on large production system.

NOTICE: The net.ipv4.tcp_keepalive_time is set to 7200000 milliseconds
(120 minutes). This may cause temporary server congestion from lost
client connections.

WARNING: There are only 1024 file descriptors (hard limit) available, which
limit the number of simultaneous connections.

WARNING: There are only 1024 file descriptors (soft limit) available, which
limit the number of simultaneous connections.

Would you like to continue? [no]: yes

==============================================================================
Choose a setup type:

1. Express
Allows you to quickly set up the servers using the most
common options and pre-defined defaults. Useful for quick
evaluation of the products.

2. Typical
Allows you to specify common defaults and options.

3. Custom
Allows you to specify more advanced options. This is
recommended for experienced server administrators only.

To accept the default shown in brackets, press the Enter key.

Choose a setup type [2]:

==============================================================================
Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
<hostname>.<domainname>
Example: eros.example.com.

To accept the default shown in brackets, press the Enter key.

Computer name [server101.suratlug.org]: server101.example.com

==============================================================================
The servers must run as a specific user in a specific group.
It is strongly recommended that this user should have no privileges
on the computer (i.e. a non-root user). The setup procedure
will give this user/group some permissions in specific paths/files
to perform server-specific operations.

If you have not yet created a user and group for the servers,
create this user and group using your native operating
system utilities.

System User [nobody]: ldap
System Group [nobody]: ldap

==============================================================================
Server information is stored in the configuration directory server.
This information is used by the console and administration server to
configure and manage your servers.  If you have already set up a
configuration directory server, you should register any servers you
set up or create with the configuration server. To do so, the
following information about the configuration server is required: the
fully qualified host name of the form
<hostname>.<domainname>(e.g. hostname.example.com), the port number
(default 389), the suffix, the DN and password of a user having
permission to write the configuration information, usually the
configuration directory administrator, and if you are using security
(TLS/SSL). If you are using TLS/SSL, specify the TLS/SSL (LDAPS) port
number (default 636) instead of the regular LDAP port number, and
provide the CA certificate (in PEM/ASCII format).

If you do not yet have a configuration directory server, enter ‘No’ to
be prompted to set up one.

Do you want to register this software with an existing
configuration directory server? [no]:

==============================================================================
Please enter the administrator ID for the configuration directory
server. This is the ID typically used to log in to the console.  You
will also be prompted for the password.

Configuration directory server
administrator ID [admin]:
Password:
Password (confirm):

==============================================================================
The information stored in the configuration directory server can be
separated into different Administration Domains. If you are managing
multiple software releases at the same time, or managing information
about multiple domains, you may use the Administration Domain to keep
them separate.

If you are not using administrative domains, press Enter to select the
default. Otherwise, enter some descriptive, unique name for the
administration domain, such as the name of the organization
responsible for managing the domain.

Administration Domain [example.com]:

==============================================================================
The standard directory server network port number is 389. However, if
you are not logged as the superuser, or port 389 is in use, the
default value will be a random unused port number greater than 1024.
If you want to use port 389, make sure that you are logged in as the
superuser, that port 389 is not in use.

Directory server network port [389]:

==============================================================================
Each instance of a directory server requires a unique identifier.
This identifier is used to name the various
instance specific files and directories in the file system,
as well as for other uses as a server instance identifier.

Directory server identifier [server101]:

==============================================================================
The suffix is the root of your directory tree.  The suffix must be a valid DN.
It is recommended that you use the dc=domaincomponent suffix convention.
For example, if your domain is example.com,
you should use dc=example,dc=com for your suffix.
Setup will create this initial suffix for you,
but you may have more than one suffix.
Use the directory server utilities to create additional suffixes.

Suffix [dc=example, dc=com]:

==============================================================================
Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and typically has a
bind Distinguished Name (DN) of cn=Directory Manager.
You will also be prompted for the password for this user. The password must
be at least 8 characters long, and contain no spaces.
Press Control-B or type the word "back", then Enter to back up and start over.

Directory Manager DN [cn=Directory Manager]:
Password:
Password (confirm):

==============================================================================
The Administration Server is separate from any of your web or application
servers since it listens to a different port and access to it is
restricted.

Pick a port number between 1024 and 65535 to run your Administration
Server on. You should NOT use a port number which you plan to
run a web or application server on, rather, select a number which you
will remember and which will not be used for anything else.

Administration port [9830]:

==============================================================================
The interactive phase is complete.  The script will now set up your
servers.  Enter No or go Back if you want to change something.

Are you ready to set up your servers? [yes]:
Creating directory server . . .
Your new DS instance 'server101' was successfully created.
Creating the configuration directory server . . .
Beginning Admin Server creation . . .
Creating Admin Server files and directories . . .
Updating adm.conf . . .
Updating admpw . . .
Registering admin server with the configuration directory server . . .
Updating adm.conf with information from configuration directory server . . .
Updating the configuration for the httpd engine . . .
Starting admin server . . .
The admin server was successfully started.
Admin server was successfully created, configured, and started.
Exiting . . .
Log file is '/tmp/setupZa3jGe.log'

[root@server101 ~]#


 

Now that we have installed and configured Red Hat Directory Server, note that it is not set to autostart during system boot.

So we need to make the Red Hat directory service and the directory server admin console service start at boot.

[root@server101 ~]# chkconfig dirsrv-admin --list
dirsrv-admin   0:off  1:off  2:off  3:off  4:off  5:off  6:off
[root@server101 ~]# chkconfig dirsrv --list
dirsrv         0:off  1:off  2:off  3:off  4:off  5:off  6:off
[root@server101 ~]# chkconfig dirsrv on
[root@server101 ~]# chkconfig dirsrv-admin on
[root@server101 ~]#
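If the services are not already running (setup normally starts them for you), they can be started with the usual init scripts:

# start the directory server and the admin console service
service dirsrv start
service dirsrv-admin start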

Now that we have our server ready, we need to add a user to it.

We will use the Directory Server admin console to connect to the GUI and will create an ldap user from there.

We can invoke the directory server admin console GUI with redhat-idm-console. It will open a GUI like the one below.

Directory Server Admin Console GUI

The user ID is the Directory Manager that was created during directory server setup; by default it is cn=Directory Manager. Enter your password and the Administration URL as http://server101:9830.

Directory Server Admin Console

Once you log in you will be presented with the console screen.


Now click on the Users and Groups tab, then click the Create button and select User from the menu.


Now select the organizational unit; we will use the default and select People from the list.


It will open the Create User dialog.


Now we will create the ldapuser account. Fill in the required details. Also select the Posix User tab, as we need the account for Unix system login, and fill in the required details for the posix account as well.


Now that we have created the user account, we can verify it with a quick search (shown below) and then start configuring the client.
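A quick way to verify the new entry from any machine with the OpenLDAP client tools, as a sketch assuming the dc=example,dc=com suffix used during setup:

# anonymous search for the new user against the suffix created earlier
ldapsearch -x -h server101.example.com -b "dc=example,dc=com" "(uid=ldapuser)"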

 

Configuring Linux client for LDAP login

I have created server201, which we will configure for LDAP login.

For that we need to execute authconfig-tui from the console.

It will open a terminal UI to configure authentication to use LDAP.

[root@server201 pam.d]# authconfig-tui


Select Use LDAP for user information.


Select Use LDAP Authentication.


After this we need to make sure that when a user logs in to the server with LDAP authentication, the home directory is created automatically; this is not enabled by default.

We can do this by executing the command below at the console.

[root@server201 pam.d]# authconfig --enablemkhomedir --update

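The same client configuration can also be applied non-interactively; a sketch, assuming the server and suffix from this walkthrough:

# enable LDAP lookups and authentication plus automatic home directory creation
authconfig --enableldap --enableldapauth \
  --ldapserver=ldap://server101.example.com \
  --ldapbasedn="dc=example,dc=com" \
  --enablemkhomedir --update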

Once this is done you can use your ldap user to log in to the client server.

Now that we have LDAP set up, we can use it to centralize login for all Linux systems in the environment.

User management is easy from a single location.

We can also configure TLS and set up replication for redundancy.

We can define schemas and policies as well, but that is for another time.

 

Categories: DBA Blogs

Log Buffer #464: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-03-10 13:42

This Log Buffer Edition delves deep into the realms of Oracle, SQL Server and MySQL while gathering up some nifty blog posts for this week.


Oracle:

Speed, Security, and Best Practices in the Cloud: Oracle Releases Market-Leading Retail Demand Forecasting Solution

OBIEE 12c – Your Answers After Upgrading

Using the SQL ACCESS Advisor PL/SQL interface

How has JD Edwards EnterpriseOne 9.2 Transformed your Business?

This article walks through an example of configuring Fast Start Failover (FSFO).

SQL Server:

How to show Quarters Dynamically in SQL

Azure SQL Data Warehouse is a fully-managed and scalable cloud service. It is still in preview, but solid.

The occasional problems that you can get with POST and GET are typical of the difficulties of separating any command and query operations.

4 Convenient Ways To Run PowerShell Scripts

10 New Features Worth Exploring in SQL Server 2016

MySQL:

Maintaining mission critical databases on our pitchfork wielding brother, the “Daemon” of FreeBSD, seems quite daunting, or even absurd, from the perspective of a die-hard Linux expert, or from someone who has not touched it in a long time.

Planets9s: Sign up for our best practices webinar on how to upgrade to MySQL 5.7

Using jemalloc heap profiling with MySQL

Sometimes a Variety of Databases is THE Database You Need

Taking the new MySQL 5.7 JSON features for a test drive

Categories: DBA Blogs

Debugging Kibana using Chrome developer tools

Pythian Group - Tue, 2016-03-08 17:53

Amazon Elasticsearch Service is a managed service to implement Elasticsearch in AWS. Underlying instances are managed by AWS and interaction with the service is available through API and AWS GUI.

Kibana is also integrated with Amazon Elasticsearch Service. We came across an issue which caused Kibana 4 to show the following error message when searching for *.

Courier Fetch: 10 of 60 shards failed.

The error is not very descriptive.

As Amazon Elasticsearch Service exposes only an endpoint, we do not have direct access to the underlying instances, and only a few API tools are available to us.

We decided to see what could be found from the Chrome browser.

The Chrome Developer Tools (DevTools) offer many useful debugging possibilities.

DevTools can be started using several methods.

1. Right click and click Inspect.
2. From Menu -> More Tools -> Developer Tools
3. Press F12

The Network tab under DevTools can be used to debug a wide variety of issues. It records every request made while a web page is loading, and captures a wide range of information about each one, such as the HTTP method, status, and the time taken to complete the request.

By clicking on any of the requested resources, we can get more information about that request.

In this case, the interesting bit was under the Preview tab. The Preview tab captures the data Chrome got back from the search and stores it as objects.

A successful query, captured from Kibana 3 on the public website logstash.openstack.org, shows the response objects without error entries.


We checked "_msearch?timeout=3000.." and received the following error messages under the nested values (for example "responses" -> "0" -> "_shards" -> "failures" -> "0").

{index: "logstash-2016.02.24", shard: 1, status: 500,…}
index: "logstash-2016.02.24"
reason: "RemoteTransportException[[Leech][inet[/10.212.25.251:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; "
shard: 1
status: 500

So the issue is clear: fielddata usage is above the limit.

As per Amazon documentation,

Field Data Breaker –
Percentage of JVM heap memory allowed to load a single data field into memory. The default value is 60%. We recommend raising this limit if you are uploading data with large fields.
indices.breaker.fielddata.limit
For more information, see Field data in the Elasticsearch documentation.

The following URL documents the supported Amazon Elasticsearch Service operations.

http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-supported-operations.html

On checking the current heap usage (the heap.percent column) of the nodes, we can see that heap usage on the data nodes is very high:

$ curl -XGET "http://elasticsearch.abc.com/_cat/nodes?v"
host ip heap.percent ram.percent load node.role master name
x.x.x.x   10   85   0.00   -   m   Drax the Destroyer
x.x.x.x   7    85   0.00   -   *   H.E.R.B.I.E.
x.x.x.x   78   64   1.08   d   -   Black Cat
x.x.x.x   80   62   1.41   d   -   Leech
x.x.x.x   7    85   0.00   -   m   Alex
x.x.x.x   78   63   0.27   d   -   Saint Anna
x.x.x.x   80   63   0.28   d   -   Martinex
x.x.x.x   78   63   0.59   d   -   Scorpio
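To see which fields are holding that heap, the _cat/fielddata endpoint can be queried as well; a sketch against the same illustrative endpoint:

# per-node fielddata memory usage for the @timestamp field
$ curl -XGET "http://elasticsearch.abc.com/_cat/fielddata?v&fields=@timestamp"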

The following command can be used to increase the indices.breaker.fielddata.limit value as a workaround.

$ curl -XPUT elasticsearch.abc.com/_cluster/settings -d '{ "persistent" : { "indices.breaker.fielddata.limit" : "89%" } }'
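To confirm the new limit is in place, the cluster settings can be read back (again using the illustrative endpoint):

# read back the persistent cluster settings
$ curl -XGET "http://elasticsearch.abc.com/_cluster/settings?pretty"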

Running the command allowed the Kibana search to run without issues and show the data.

The real solution would be to increase the number of nodes or reduce the amount of fielddata that needs to be loaded by limiting the number of indexes.

AWS Lambda can be used to run a script that cleans up indices as a scheduled event.

Categories: DBA Blogs

Partner Webcast – Oracle PaaS: Application Container Cloud Service

Oracle Application Container Cloud Service, a new Oracle Cloud Platform (PaaS) offering, leverages Docker containers and provides a lightweight infrastructure so that you can run Java SE and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Can The Public Cloud Meet the Needs of Your Enterprise Applications?

Pythian Group - Tue, 2016-03-08 12:19

 

Any applications your company runs on premises can also be run in the public cloud. But does that mean they should be?

While the cloud offers well-documented benefits of flexibility, scalability, and cost efficiency, some applications — and especially business-critical enterprise applications — have specific characteristics that can make them tricky to move into a public cloud environment.

That’s not to say you shouldn’t consider the cloud as an option, but you should be aware of the following enterprise application needs before you make any migration decisions:

1. Highly customized infrastructure

Enterprise applications often rely on software components that are uniquely configured: they may need very specific storage layouts and security settings or tight integration with certain third-party tools. That makes it hard to replace them with generic platform-as-a-service (PaaS) alternatives in the cloud.

The same is true on the infrastructure side: application software components often need particular network configurations and controls that aren’t available from a typical infrastructure-as-a-service (IaaS) offering. (An example would be the way Oracle Real Application Clusters have to allow the cluster software to manipulate network settings, such as controlling IP addresses and network interfaces.)

2. Tightly coupled components

Today’s cloud application architectures are based on “microservices”: collections of services that each perform a specific task and, when combined, satisfy the application’s full requirements. With enterprise applications, there are so many interdependencies between the various software components that it can be extremely difficult to change, upgrade, move, or scale an individual component without having a huge impact on the rest of the system.

3. Siloed IT departments

Enterprise applications are usually supported by siloed enterprise IT operations — DBAs, system administrators, storage administrators, network administrators and the like — each with their own responsibilities. Cloud deployment, on the other hand, requires much greater focus on collaboration across the IT environment. This means breaking down traditional silos to create full-stack teams with vertical application ownership. Some teams are likely to resist this change as they could end up with significantly less work and responsibility once the management of application components has shifted to the cloud vendor. So migrating to the cloud isn’t just a technical decision; it has people-process implications, too.

4. Costly infrastructure upgrades

Every company knows upgrading enterprise applications is a major undertaking that can often cause downtime and outages. This is true when the application stays inside your own data center, and doubly so when it moves to a cloud provider, given how long it takes to move massive amounts of data through the Internet and the risks associated with unknown issues on the new virtual platform. For these reasons, significant financial commitment is often required to build and maintain an IT team with the right skills to do upgrades quickly and effectively as well as maintain the system.

5. Inflexible licensing models

The components used in enterprise applications are often proprietary products with licensing models that are not compatible with the elasticity of the cloud. For example, many Oracle licenses are for legacy applications and can be used only on particular systems. Transferring those licenses to a cloud-based infrastructure is not an easy task.

In addition, perpetual software licenses are often not portable to the typical pay-as-you-go model used by most cloud providers. Plus, most software vendors don’t have any incentive to transition their customers from locked-in perpetual licenses with a steady maintenance revenue stream to a model that allows them to switch to a competitive product at any time.

Even though the nature of enterprise applications makes them difficult to migrate to the cloud, the benefits of doing so — in costs savings, availability, and business agility — still make it a very compelling proposition. In my next blog, I’ll take a look at some of the paths available to you should you decide to move your enterprise applications to the public cloud.

For more on this topic, check out our white paper on Choosing the Right Public Cloud Platform For Your Enterprise Applications Built on Oracle Database.


 

Categories: DBA Blogs

Renaming #EM12c / #EM13c Targets

DBASolved - Tue, 2016-03-08 09:20

Oracle Enterprise Manager is a complex piece of software that many organizations are running now. Some organizations set out with a formalized naming standard; some do not. Those who do not often end up identifying a naming standard later down the road and then making requests to change the names of the targets being monitored. There are two ways to do this:

1. Delete and rediscover the target and rename at time of discovery
2. Change the name from the backend using EMCLI

The first way is painful to say the least, especially when you have thousands upon thousands of targets. So this post is going to focus on how to change target names from the backend using EMCLI, along with a few other little tips.

EMCLI is a nice tool to use. It provides two options for renaming targets: the first is rename_target and the second is modify_target. The rename_target option renames the target on the repository side, while the modify_target option renames it at the agent level. Both options are required when renaming a target because the repository and agent need to stay in sync to retain the target’s history.

To make this process a bit more automated, I’ve created a perl script that will do the renaming for me based on information in a space delimited flat file. The script is as follows:

#!/usr/bin/perl -w
use strict;
use warnings;

##########################
#Notes
##########################
#
#To help with renaming the entity_name in the repository database,
#comment out block of code in SYSMAN.EM_TARGET from line 8028 thru 8035
#
##########################
#GLOBAL Variables
##########################
my $oem_home_bin = "";
my $time_now = localtime();
my ($variable, $sysman_pwd) = @ARGV;
my $count = 0;
my @columns;

##########################
#Main Program
##########################

open (FILE, "< $variable") or die "$!\n";
@columns = ("", 0, 0, 0);
print "\nStart time: ".$time_now."\n";
emcli_login();
while (<FILE>)
{
	my $line = $_;
	@columns = split(' ',$line, 4);
	rename_target(@columns);
	$count = $count+1;
} #end file read
close (FILE) or die "$!\n";
my $end_time=localtime();
print "\nNumber of changes: ".$count;
print "\nEnd time: ".$end_time."\n";
emcli_logout();

##########################
#Sub-Programs
##########################
sub emcli_login{
	print "\n";
	system($oem_home_bin.'/emcli login -username=sysman -password='.$sysman_pwd);
	system($oem_home_bin.'/emcli sync');
	print "\n";
}

sub emcli_logout{
	print "\n";
	system($oem_home_bin.'/emcli logout');
	print "\n";
}

sub rename_target{
	#Parameters
	my ($target_name, $target_type, $server_name )=@columns;
	my $mod_target;
	my $new_name;
	my $cmd;
	my $cmd1;

	if ($target_type =~ /rac_database/)
	{
		chomp($target_name);
		chomp($server_name);
		$mod_target = $target_name;
		$target_name = substr($target_name, 0, -4);
		$new_name = $target_name."_".$server_name;
		#print $new_name;
		print "\n";
		$cmd = 'emcli modify_target -name="'.$mod_target.'" -type="'.$target_type.'" -display_name="'.$new_name.'" -on_agent';
		print $cmd."\n";
		#print "\n!!!!Executing on agent side!!!!\n";
		#system($oem_home_bin.'/'.$cmd);
		$cmd1 = 'emcli rename_target -target_type="'.$target_type.'" -target_name="'.$mod_target.'" -new_target_name="'.$new_name.'"';
		print $cmd1."\n";
		#print "\n!!!!Executing on repository side!!!!\n";
		#system($oem_home_bin.'/'.$cmd1);
	}
}
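For illustration, a hypothetical input file and invocation might look like the sketch below; the file name, target name, and password are made up. The script expects a space-delimited target name, target type, and host per line, and it strips the last four characters of the target name when building the new name.

# targets.txt format: <target_name> <target_type> <server_name>
cat > targets.txt <<'EOF'
FINDB_001 rac_database server101.example.com
EOF

perl rename_targets.pl targets.txt MySysmanPwd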

Notice that I’m doing the renaming on the agent side along with the repository side. Although this looks pretty simple and straightforward, I’ve found that the EMCLI rename command (rename_target) is actually driven by the EM_TARGET package in the SYSMAN schema. A small block of code in this package prevents renaming certain target types while they are currently being monitored and managed by OEM.

To identify what targets are managed, the following SQL can be used:

SELECT ENTITY_TYPE, ENTITY_NAME, DISPLAY_NAME FROM EM_MANAGEABLE_ENTITIES 
WHERE ENTITY_TYPE='oracle_database' and promote_status=3 and manage_status='2';

The SQL above will give you the target type (entity_type), name (entity_name), and display name (display_name). These three columns are important because they correlate directly to what you see in OEM. About 90% of the screens in OEM use the display_name column; the other 10% use entity_name. When you start renaming, you will want these names to match; just keep in mind they may not over the long haul.

Now, back to the code in the EM_TARGET package. When renaming targets, some targets will report back that they cannot be changed. This is because the target is already being managed by OEM. To bypass this, you need to update the EM_TARGET package body and comment out a small block of code (make sure you back up the package before doing anything). The lines of code that need to be commented out are between 8028 and 8035.

-- we will implement rename of agent side targets when it is fully
     -- supported by agent
    --IF ( l_trec.manage_status = MANAGE_STATUS_MANAGED AND
    --     l_trec.emd_url IS NOT NULL) 
    --THEN
    --  raise_application_error(MGMT_GLOBAL.INVALID_PARAMS_ERR,
    --      MGMT_GLOBAL.INVALID_PARAMS_ERR||' Not allowed') ;
    --END IF ;

After commenting out these lines of code, recompile the package body. Then you will be able to rename repository targets using EMCLI even though they are already managed targets. This will affect the entity_name column and update the other 10% of pages that are not immediately changed.
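A minimal sketch of the recompile step (the sqlplus invocation is for illustration; use whichever privileged connection you prefer):

# recompile the modified package body in the repository database
sqlplus / as sysdba <<'EOF'
ALTER PACKAGE sysman.em_target COMPILE BODY;
EOF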
 
Another way to change target names, once the EM_TARGET package has been updated, is to use SQL to make the changes.

exec sysman.em_target.rename_target(target_type, current_name, new_name, new_name);
commit;

Once the commit has happened, then the OEM pages can be refreshed and the new entity_name will be displayed.

Well, I hope this has provided some explanation of how to change existing target names within the EM framework.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs
