Feed aggregator

How to Configure The SSL Certificate For Oracle Warehouse Builder Repository Browser

Antonio Romero - Tue, 2013-08-27 22:09

The Repository Browser is a browser-based tool that generates reports from data stored in Oracle Warehouse Builder (OWB) repositories. It uses OC4J as its web server, and users access the web interface over HTTPS (HTTP on top of the SSL/TLS protocol).

If the Repository Browser Listener is running on a computer named owb_server, then typing the following address will start the Repository Browser:

   https://owb_server:8999/owbb/RABLogin.uix?mode=design

   or

   https://owb_server:8999/owbb/RABLogin.uix?mode=runtime


On the server side, an SSL certificate for the browser is required. Users can create one themselves.

First, users can use the JRE's keytool utility to generate a keystore; name it keystore.jks.

For example: keytool -genkey -keyalg RSA -alias mykey -keystore keystore.jks -validity 2000 -storepass  welcome1
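You can verify the result by listing the keystore contents: keytool -list -v -keystore keystore.jks -storepass welcome1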

Pay attention to the password of the store: it needs to be the same as the keystoreadmin credential in the file called "system-jazn-data.xml".


If the passwords do not match, an error message like "Keystore was tampered with, or password was incorrect" will be generated.


To change the credentials, there are two files you can edit (illustrative fragments of both follow the list below).


  • http-web-site.xml: It is found under %OWB_HOME%/j2ee/config. The password is stored as clear text in http-web-site.xml, and users can change it to match the password they used to generate the keystore. For security reasons, if users don't want to store clear text, they can use a pointer (->keystoreadmin) that refers to an entry in the file named system-jazn-data.xml.


  • system-jazn-data.xml: Users can find "system-jazn-data.xml" in %OWB_HOME%/j2ee/config. It contains an entry called "keystoreadmin" whose password is stored in encrypted form; the pointer mentioned above refers to this entry. To change the password, edit "system-jazn-data.xml" and change the <credentials> value of the "keystoreadmin" entry, adding "!" in front of your password. For example, if you want to change the password to welcome, set it to <credentials>!welcome</credentials>.
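As an illustration only (the exact layout of these OC4J configuration files can vary between versions), the two settings look roughly like this:

   <!-- http-web-site.xml: clear text, or a pointer into system-jazn-data.xml -->
   <ssl-config keystore="../keystore.jks" keystore-password="welcome1" />
   <ssl-config keystore="../keystore.jks" keystore-password="->keystoreadmin" />

   <!-- system-jazn-data.xml: the keystoreadmin entry -->
   <user>
     <name>keystoreadmin</name>
     <credentials>!welcome</credentials>
   </user>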

The next time OC4J reads "system-jazn-data.xml", it will rewrite the file with all passwords obfuscated and unreadable, so clear text like "!welcome" will become an encrypted password, something like '{903}dnHlnv/Mp892K8ySQan+zGTlvUDeFYyW'.

Data Pump 12c – Pumping Data with the LOGTIME Parameter

alt.oracle - Tue, 2013-08-27 09:38
Since its release, Oracle Data Pump has been a worthy successor to the traditional exp/imp tools.  However, one area where Data Pump has been lacking is something as simple as the ability to identify how long each step of a Data Pump job actually takes.  The log will show the start time at the top and the end time at the bottom, but the execution time of each step is a mystery.  Oracle 12c solves this problem with the LOGTIME parameter, which adds a timestamp to the execution of each step of the Data Pump job.  Here’s what it looks like without the parameter.

/home/oracle:test1:expdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee

Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:32:38 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01":  altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "ALTDOTORACLE"."EMPLOYEE"                   10.93 KB      16 rows
Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
  /oracle/base/admin/test1/dpdump/expdp.dmp
Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:32:51 2013 elapsed 0 00:00:11

With the LOGTIME parameter, each step is prefixed with a timestamp, indicating the start time for each event that is processed.

/home/oracle:test1:expdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee LOGTIME=ALL

Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:34:54 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
13-AUG-13 09:34:56.757: Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01":  altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee LOGTIME=ALL
13-AUG-13 09:34:57.019: Estimate in progress using BLOCKS method...
13-AUG-13 09:34:57.364: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
13-AUG-13 09:34:57.396: Total estimation using BLOCKS method: 64 KB
13-AUG-13 09:34:57.742: Processing object type TABLE_EXPORT/TABLE/TABLE
13-AUG-13 09:34:57.894: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
13-AUG-13 09:34:57.964: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
13-AUG-13 09:35:04.853: . . exported "ALTDOTORACLE"."EMPLOYEE"   10.93 KB      16 rows
13-AUG-13 09:35:05.123: Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
13-AUG-13 09:35:05.127: ******************************************************************************
13-AUG-13 09:35:05.128: Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
13-AUG-13 09:35:05.131:   /oracle/base/admin/test1/dpdump/expdp.dmp
13-AUG-13 09:35:05.134: Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:35:05 2013 elapsed 0 00:00:09

The parameter works similarly with Data Pump Import.  Note that, although it is documented, the LOGTIME parameter is not described when you run an expdp help=y or impdp help=y command.
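For reference, the documented values for LOGTIME are NONE (the default), STATUS (timestamps on console output only), LOGFILE (timestamps in the log file only) and ALL.  An import invocation mirroring the export above would look like this:

/home/oracle:test1:impdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee LOGTIME=ALL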

Categories: DBA Blogs

DBA or Developer?

Chet Justice - Mon, 2013-08-26 16:01
I've always considered myself a developer and a LOWER(DBA). I may have recovered perhaps one database and that was just a sandbox, nothing production worthy. I've built out instances for development and testing and I've installed the software a few hundred times, at least. I've done DBA-like duties, but I just don't think of myself that way. I'm a power developer maybe? Whatevs.

I'm sure it would be nearly impossible to come up with One True Definition of The DBA ™. So I won't.

I've read that Tom Kyte does not consider himself a DBA, but I'm not sure most people know that. From Mr. Kyte himself:



At the same conference, I asked Cary Millsap the same question:



I read Cary for years and always assumed he was a DBA. I mean, have you read his papers? Have you read Optimizing Oracle Performance? Performance? That's what DBAs do (or so I used to think)!

It was only after working with him at #kscope11 on the Building Better Software track that I learned otherwise.

Perhaps I'll make this a standard interview question in the future...

Semi-related discussions:

1. Application Developers vs. Database Developers
2. Application Developers vs. Database Developers: Part II
Categories: BI & Warehousing

IPython at Ohio LinuxFest 2013

Catherine Devlin - Mon, 2013-08-26 04:56

Are you signed up yet for Ohio LinuxFest on Sep. 13-15? I'll be there to present

IPython for non-Pythonistas

Break out of your (bash) shell! IPython and the IPython Notebook have swept over the Python programming community, but they're not just for Python programmers - they make for high-powered shell replacements even with little to no Python knowledge. They'll also let you document your work and collaborate with others like never before. Find out how these beautiful tools can improve your daily Linux work!

At PyOhio, I argued that all Python programmers need IPython. At OLF, I'll make the case that non-Pythonistas need IPython, too. Perhaps my next talk will be "Even Your Cat Needs IPython".

Also at OLF, look for PyOhio's booth for info on next year's PyOhio, other Python events around the region, and general Python love!

Configuring ODBC to MySQL from Oracle

Barry McGillin - Sun, 2013-08-25 03:14
Sometimes people want to connect to MySQL from Oracle and copy table data between the databases.  You can do that with Oracle Heterogeneous Services via ODBC.  This post shows how to create an ODBC connection to your MySQL database, which is the first part of that setup.

For my example, I'm using unixODBC, and it's on the Oracle public yum repository.
[root@localEL5 ~]$ yum install unixODBC
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
> Running transaction check
> Processing Dependency: libboundparam.so.1 for package: unixODBC-devel
> Processing Dependency: libesoobS.so.1 for package: unixODBC-devel
> Processing Dependency: libgtrtst.so.1 for package: unixODBC-devel
> Processing Dependency: libmimerS.so.1 for package: unixODBC-devel
> Processing Dependency: libnn.so.1 for package: unixODBC-devel
> Processing Dependency: libodbc.so.1 for package: unixODBC
.....
> Running transaction check
> Package unixODBC-devel.i386 0:2.2.11-10.el5 set to be updated
> Package unixODBC-libs.i386 0:2.2.11-10.el5 set to be updated
> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Updating:
unixODBC i386 2.2.11-10.el5 el5_latest 290 k
Installing for dependencies:
unixODBC-libs i386 2.2.11-10.el5 el5_latest 551 k
Updating for dependencies:
unixODBC-devel i386 2.2.11-10.el5 el5_latest 738 k

Transaction Summary
================================================================================
Install 1 Package(s)
Upgrade 2 Package(s)

Total download size: 1.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): unixODBC-2.2.11-10.el5.i386.rpm | 290 kB 00:02
(2/3): unixODBC-libs-2.2.11-10.el5.i386.rpm | 551 kB 00:04
(3/3): unixODBC-devel-2.2.11-10.el5.i386.rpm | 738 kB 00:17
--------------------------------------------------------------------------------
Total 60 kB/s | 1.5 MB 00:26
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : unixODBC-libs 1/5
warning: /etc/odbcinst.ini created as /etc/odbcinst.ini.rpmnew
Updating : unixODBC 2/5
Updating : unixODBC-devel 3/5
Cleanup : unixODBC 4/5
Cleanup : unixODBC-devel 5/5

Dependency Installed:
unixODBC-libs.i386 0:2.2.11-10.el5

Updated:
unixODBC.i386 0:2.2.11-10.el5

Dependency Updated:
unixODBC-devel.i386 0:2.2.11-10.el5

Complete!
[root@localEL5 ~]$


Now make sure the ODBC connector for MySQL is installed. Again, we're using our friend yum to provide it.


[root@localEL5 ~]$ yum install mysql-connector-odbc
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
The program yum-complete-transaction is found in the yum-utils package.
> Running transaction check
...
> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Installing:
mysql-connector-odbc i386 3.51.26r1127-2.el5 el5_latest 159 k
Installing for dependencies:
libtool-ltdl i386 1.5.22-7.el5_4 el5_latest 37 k

Transaction Summary
=================================================================================================
Install 2 Package(s)
Upgrade 0 Package(s)

Total download size: 196 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): libtool-ltdl-1.5.22-7.el5_4.i386.rpm | 37 kB 00:04
(2/2): mysql-connector-odbc-3.51.26r1127-2.el5.i386.rpm | 159 kB 00:01
-------------------------------------------------------------------------------------------------
Total 21 kB/s | 196 kB 00:09
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libtool-ltdl 1/2
Installing : mysql-connector-odbc 2/2

Installed:
mysql-connector-odbc.i386 0:3.51.26r1127-2.el5

Dependency Installed:
libtool-ltdl.i386 0:1.5.22-7.el5_4

Complete!
[root@localEL5 ~]$

Now let's check driver locations and DSNs. First, we can check the installed drivers in the file /etc/odbcinst.ini.

# driver definitinions
#
#

# Included in the unixODBC package
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/libodbcpsql.so
Setup = /usr/lib/libodbcpsqlS.so
FileUsage = 1


# Driver from the MyODBC package
# Setup from the unixODBC package
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/libmyodbc.so
Setup = /usr/lib/libodbcmyS.so
FileUsage = 1

Then, we can specify a DSN to connect with in /etc/odbc.ini (be careful here: the option names are case sensitive).

[sakila-connector]
driver=MySQL
Database=sakila
Socket=/var/lib/mysql/mysql.sock
User=root
Password=oracle
Finally, we can now check the DSN defined above.  We'll use isql from the unixODBC package here.
[oracle@Unknown-08:00:27:c8:2a:1c lib]$ isql -v sakila-connector
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
Cool. When we get this, we are connected via ODBC to the DSN.  Now we can prove it's working by running a show tables or something similar.
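For example, from the isql prompt (output abbreviated; the list will depend on your sakila install):

SQL> show tables;
+------------------+
| Tables_in_sakila |
+------------------+
| actor            |
| ...              |
+------------------+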
NB: If you get an error at this stage asking for libraries, it's likely you specified your drivers incorrectly in odbcinst.ini.
Now that we have this working, we can set up Heterogeneous Services (HS) on the Oracle side.
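That deserves its own post, but as a rough sketch of what the Oracle side typically involves (the HS SID MYSQL and the link name mysql_link here are placeholders, and the matching listener.ora/tnsnames.ora entries for the HS agent are omitted):

# $ORACLE_HOME/hs/admin/initMYSQL.ora
HS_FDS_CONNECT_INFO = sakila-connector
HS_FDS_SHLIB = /usr/lib/libodbc.so

SQL> create database link mysql_link connect to "root" identified by "oracle" using 'MYSQL';
SQL> select count(*) from "actor"@mysql_link;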

People, Performance and Perception

TalentedApps - Sun, 2013-08-25 01:04

A commonly heard complaint: “My manager is a control-freak and practices micro-management. He asks for suggestions but provides them all himself. He doesn’t believe in imperfection and tries to fit us into his unrealistic expectations.”

The perceiver believes that this is the reality, but he might just be focusing on the aspects which reinforce his existing beliefs.  Your friends, co-workers or society don’t know “the real you”; they know you as they perceive you, because they always have a perception about you.  And when most of the decision makers in an organization share the same perception about someone, it doesn’t really matter what the reality is, as perception becomes the reality.

It’s hard for an organization to get the required contribution from an individual who is not able to accomplish his own goals within the organization, and in order to achieve your purpose you need to know how people perceive you. Any undesired perception about you, your products or services will not go away if you deny its existence.

It’s not possible for you to communicate with each perceiver to explain who you really are, in case you are not happy about the perception. But before you make any effort to change perception, you need to understand the three critical factors which contribute to the perception formation process:

  1. Your performance: How you perform in a given context contributes the most to the perception about you. Initial performance is a foundation stone in this process and often takes a lot of effort to change later, for the better.
  2. Your competitors’ performance: Comparison and competition are unavoidable, and if you are afraid of them, then you really fear your own incompetence.  You need to know what your competitors are doing before they walk over you.
  3. Perceivers’ viewpoint: You are dealing with humans, and they are prone to mistakes and prejudices. That can go in your favor or against you. These people can be your customers, key influencers in your organization or anyone who is a stakeholder in your current or future endeavors.

You may want to manage one or more of these factors depending on your power of influence, but at a minimum you should always be in a position to improve your own performance.

If you don’t want to be defined by what you are not, if you want to feed your opportunities and starve your problems, you need to take charge of changing the perception about you, and the time is now. But what if you have already tried your best and are fully convinced that the perception about you is unchangeable? You are neither the first nor the last person to feel that way; recharge your batteries and hit the trail again. New jobs are waiting to be done, new teams are waiting to be led and new ideas are waiting to be born…

Photo Credit: Unknown


Tagged: perception, performance

Configuring MySQL on EL5, Setting Passwords and Network Access

Barry McGillin - Sat, 2013-08-24 10:35
I find myself installing and running MySQL, in different versions, in different places, for different reasons, all the time (well, often enough to do it and still not remember the little things that cost time when setting up).  It's with that in mind that I'm making notes for myself, and for you as well, to help you along.

We use MySQL a lot with Oracle SQLDeveloper and many use SQLDeveloper to co-exist between MySQL and Oracle.

For most versions of Oracle Linux, we will install MySQL from the Yum repository. If you don't have one set up, you can configure one under /etc/yum.repos.d.  These notes for yum are a reference (blatant copy) from the Oracle Linux Admin guide:
  1. As root, change directory to /etc/yum.repos.d.
    # cd /etc/yum.repos.d
  2. Use the wget utility to download the repository configuration file that is appropriate for your system. For Oracle Linux 5, enter:
    # wget http://public-yum.oracle.com/public-yum-el5.repo
    For Oracle Linux 6, enter:
    # wget http://public-yum.oracle.com/public-yum-ol6.repo
    The /etc/yum.repos.d directory is updated with the repository configuration file, in this example, public-yum-ol6.repo.
  3. You can enable or disable repositories in the file by setting the value of the enabled directive to 1 or 0 as required.
Now we are ready to install MySQL. If you haven't used yum before, play with some of the options to list packages and switch repos as you need them.  It's a great tool, saving us all lots of time with dependencies.

root@localEl5# yum install mysql-server
You can see if it's installed by doing
root@localEl5> yum list mysql-server
Loaded plugins: security
el5_latest | 1.4 kB 00:00
Installed Packages
mysql-server.i386 5.0.95-5.el5_9 installed
root@localEl5>
You can then start it with
root@localEL5> /etc/init.d/mysqld start
and check it's running with
root@localEL5> /etc/init.d/mysqld status

mysqld (pid 31298) is running...
In general, you can start MySQL on the server without a server password in order to set one up for yourself. My one caveat here is that all this is for development, folks; someone with a security hat on will complain (bitterly).  I'm going to show you how to clear down all permissions so you can connect from any machine.
root@localEL5> /etc/init.d/mysqld stop
root@localEL5> /etc/init.d/mysqld status
root@localEL5> mysqld_safe --skip-grant-tables &
mysql -uroot
Now we are logged into mysql as root with no password.  We can check what users exist and what permissions they have. In this case, I have:
mysql> select user, host, password from user; 
+-------+---------------+-------------------------------------------+
| user  | host          | password                                  |
+-------+---------------+-------------------------------------------+
| root  | localhost     | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| root  | 192.168.1.201 | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| barry | localhost     | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
+-------+---------------+-------------------------------------------+

The first thing I want to do is to remove duplicate entries for my user
mysql> delete from user where user='root' and host ='192.168.1.201';
now we have
+-------+-----------+-------------------------------------------+
| user  | host      | password                                  |
+-------+-----------+-------------------------------------------+
| root  | localhost | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| barry | localhost | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
+-------+-----------+-------------------------------------------+
Next, I want to update the host to '%', which in MySQL means any host.

 mysql> update user set host='%';

which now gives me

+-------+------+-------------------------------------------+
| user  | host | password                                  |
+-------+------+-------------------------------------------+
| root  | %    | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
| barry | %    | *2447D497B9A6A15F2776055CB2D1E9F86758182F |
+-------+------+-------------------------------------------+
2 rows in set (0.00 sec)


Now, if you want to change your passwords, make sure you do that now.  On 5.1 and above, secure_auth is on and old passwords are off by default. On my version, 5.0, I need to set these so that I get new-format passwords and secure_auth, which is the default on all MySQL clients now.  This is done in /etc/my.cnf:

old_passwords=0
secure-auth=1
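followed by a restart of MySQL so the settings take effect:

root@localEL5> /etc/init.d/mysqld restart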

mysql> update user set password=PASSWORD('oracle') where user='root';

Then flush privileges and exit:

mysql> flush privileges;

Lastly, I like my prompts to be informative, so you can set this in your profile to set up your prompt:

export MYSQL_PS1="\u@\h [\d] > "

This will give you a prompt like the following when you log in with

root@localEl5> mysql -uroot -poracle -Dmysql

giving this prompt in mysql

root@localEL5 [mysql] >

Now you are all set to connect from SQL Developer to this instance.  We can also install the sample databases from http://dev.mysql.com/doc/index-other.html.
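Loading sakila, for instance, is just a couple of commands once you've downloaded and unpacked the archive (file names as shipped in the sakila-db download):

root@localEL5> mysql -uroot -poracle < sakila-schema.sql
root@localEL5> mysql -uroot -poracle < sakila-data.sql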

Custom Jersey WADL generation

Edwin Biemond - Thu, 2013-08-22 14:46
I had a situation where the auto-generated WADL did not match my Rest services. The first difference was that the response was presented as an object instead of a collection of objects, and the second was that it could not handle JSONWithPadding as a response.  Because I use this WADL in my Rest client generation, I needed to fix these issues. Luckily for me, Jersey JAX-RS allows us to provide

All you need is Log, All you need is Log, ..., Log is all you need.

Marcelo Ochoa - Thu, 2013-08-22 11:44
Paraphrasing the Beatles' song, I am starting this post by saying that logging is one of the most important features that developers, sysadmins and DBAs need to successfully develop, deploy, test and tune Java applications inside the RDBMS.
Inside the OJVM this feature is even more important because there is no visual feedback when you are running Java code.
Starting with 10g, I have been using this feature as explained in my post Getting logging entering, exiting and throwing work at Oracle JVM, which according to Google Analytics is one of my most visited pages, but most of the tweaks require uploading new classes and configuring them with certain knowledge.
Starting with 12c, the logging feature is directly integrated into the OJVM also for PDB.
This means that by simply loading a logging.properties file into a user's schema you can control the logging facilities for Java code running with that effective user (Oracle JVM performs specific steps to configure logging options).
For example, if I have a class loaded into SCOTT's schema and this class runs with the effective user SCOTT, to activate logging for that class and user you execute:
loadjava -u scott/tiger@mydb -v javavm/lib/logging.properties
if the logging.properties file includes something like:
$ cat javavm/lib/logging.properties
# Limit the message that are printed on the console to INFO and above.
java.util.logging.ConsoleHandler.level = ALL
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
# Scotas OLS
com.scotas.level = INFO
org.apache.level = INFO
Remember that to use logging inside the OJVM, the SCOTT user should have SYS:java.util.logging.LoggingPermission = control. To grant that, connect as SYS and execute:
$ sqlplus "sys/pwd@mydb as sysdba"
SQL> exec dbms_java.grant_permission( 'SCOTT', 'SYS:java.util.logging.LoggingPermission', 'control', '');
SQL> commit;
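You can confirm the grant with a quick query (a sketch using the standard dba_java_policy view):
SQL> select grantee, name, action from dba_java_policy
     where grantee = 'SCOTT' and type_name like '%LoggingPermission';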
Your application can also control logging level properties at run time; the configuration above is static, defined when the OJVM starts executing Java code for a specific DB session.
The example below is the Scotas OLS web page for controlling logging properties at run time (the old logging configuration Servlet of Apache Solr).
As shown in the screenshot, the logging level for each hierarchy is defined with a default setting inherited from the logging.properties file, but we can change that value to any other value using the form.
Finally, where do my logs go?
Whether you are running your Java application in a traditional RDBMS installation or in a new CDB/PDB configuration, log information goes to the .trc file associated with the session, located at:
$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace
The .trc file is named using the convention $ORACLE_SID_jnnn_$PID.trc, for example:
$ cat orclc_j020_11761.trc
Trace file /u01/app/oracle/diag/rdbms/orclc/orclc/trace/orclc_j020_11761.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0.1.0/dbhome_1
System name: Linux
Node name: localhost
Release: 3.2.0-52-generic
Version: #78-Ubuntu SMP Fri Jul 26 16:21:44 UTC 2013
Machine: x86_64
Instance name: orclc
Redo thread mounted by this instance: 1
Oracle process number: 7
Unix process pid: 11761, image: oracle@pocho (J020)

*** 2013-08-22 11:29:41.805
*** SESSION ID:(387.581) 2013-08-22 11:29:41.805
*** CLIENT ID:() 2013-08-22 11:29:41.805
*** SERVICE NAME:(SYS$USERS) 2013-08-22 11:29:41.805
*** MODULE NAME:(DBMS_SCHEDULER) 2013-08-22 11:29:41.805
*** ACTION NAME:(SOLRSERVLETSERV#1) 2013-08-22 11:29:41.805
*** CONTAINER ID:(3) 2013-08-22 11:29:41.805
.....
*** 2013-08-22 11:52:37.581
INFO: Unset log level on 'root'.
Aug 22, 2013 11:52:37 AM com.scotas.solr.servlet.LogLevelSelection doPost
INFO: Set 'com.scotas' to INFO level.
It is important to note that even if you have multiple PDBs, all log files go to the trace directory of the CDB, because all PDBs re-use the process infrastructure of the CDB.
I will post another blog entry about the impact of the process infrastructure of the Multitenant Environment on OJVM applications.

An efficient way to do massive inserts with Oracle JDBC

Marcelo Ochoa - Wed, 2013-08-21 10:34
I was exchanging emails with my friend Kuassi Mensah (@kmensah) about how to do what is known as array DML from JDBC.
To a DBA, this is typically known in the form:
INSERT INTO TEST_SOURCE_BIG_A
        SELECT a.line a_line, a.name a_name, a.owner a_owner, a.text a_text, a.type a_type
           FROM TEST_SOURCE_BIG a;

If the two tables have the same structure, the RDBMS realizes that and puts together batches of between 140 and 170 rows each.
This causes any associated index (such as Scotas OLS) to receive a call to the function:
FUNCTION ODCIIndexInsert (
   ODCIIndexInfo ia,
   ridlist ODCIRidList,
   newvallist varray_of_column_type,
   ODCIEnv env)
RETURN NUMBER

where the ridlist argument is an array with all the ROWIDs.
The point is: how can I exploit that functionality from JDBC, or how do I make massive inserts efficiently from Java (very important if your app is inserting tweets or FB comments)?
The tip is the setExecuteBatch method of OraclePreparedStatement.
  Broadly speaking, the code should be of the form:
        PreparedStatement ps =
            conn.prepareStatement("insert into test_source_big_a values (?,?,?,?,?)");
        // Change the batch size for this statement to 500
        ((OraclePreparedStatement) ps).setExecuteBatch(500);
        for (int i = 0; i <= 500; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name-" + i);
            ps.setString(3, "owner-" + i);
            ps.setString(4, "this is a long test using sendBatch - statement" + i);
            ps.setString(5, "type-" + i);
            ps.executeUpdate(); // JDBC queues this for later execution
        }
        ((OraclePreparedStatement) ps).sendBatch(); // JDBC sends the queued requests
        conn.commit();

Calling that piece of code generates 6 batches of rows:
       100 + (4 x 93) + 98
which, from the point of view of the index implementation, will be much more efficient: instead of enqueuing 500 messages in the AQ, it is only going to enqueue 6!!
If you send more rows than the batch parameter (500), JDBC automatically sends them along without waiting for sendBatch or commit.
A parameter Kuassi recommended I consider is the SDU (session data unit), which is defined at the SQL*Net level to avoid SQL*Net packet fragmentation.
Note that this is not only much more efficient from the point of view of the inserts themselves, but also with respect to network usage, since it generates far fewer round-trips between the application side and the RDBMS.

12C ADG Enhancements

Fairlie Rego - Tue, 2013-08-20 22:27
  •          Explain plan now works on Standby
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PHYSICAL STANDBY

SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ ONLY WITH APPLY

SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0
PL/SQL Release 12.1.0.1.0 - Production                                                    0
CORE    12.1.0.1.0      Production                                                        0
TNS for Linux: Version 12.1.0.1.0 - Production                                            0
NLSRTL Version 12.1.0.1.0 - Production                                                    0

SQL> explain plan for select * from dual;

Explained.

SQL> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 272002086

----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS STORAGE FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

  •         Data files can be moved in ADG mode. 

In 11g this could not be done: a shutdown of the standby was needed if you had to move datafiles as part of an ILM process while running an Active Data Guard configuration.

In 11.2.0.3
SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

SQL> select thread#,sequence# from v$managed_standby where process like 'MRP%';

   THREAD#  SEQUENCE#
---------- ----------
         2       1652

SQL>  select tablespace_name from dba_tablespaces where status='READ ONLY';

TABLESPACE_NAME
------------------------------
TEST_2009Q1

SQL> select file_name from dba_data_files where tablespace_name='TEST_2009Q1';

FILE_NAME
-------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/FOOdr/datafile/TEST_2009Q1.510.dbf

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@exadb03:FOODR1]/home/oracle => rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Tue Aug 13 11:08:52 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: FOO (DBID=2633655758)

RMAN> backup as copy tablespace TEST_2009Q1 format='+RECO';

Starting backup at 13-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=4906 instance=FOODR1 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00640 name=+DATA/FOOdr/datafile/TEST_2009Q1.510.dbf
output file name=+RECO/FOOdr/datafile/TEST_2009Q1.18424.823345739 tag=TAG20130813T110857 RECID=1539 STAMP=823345738
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 13-AUG-13

SQL> recover managed standby database cancel;
Media recovery complete.
SQL>

RMAN> switch tablespace TEST_2009Q1 to copy;

using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 08/13/2013 11:11:17
RMAN-06572: database is open and datafile 640 is not offline

RMAN>


At this stage you need to shut down the standby, restart it in mount mode and then run the switch to copy command.

This obviously impacts service if you have database services which run only on the standby and *DO NOT* run on the primary.

The 12c feature ALTER DATABASE MOVE DATAFILE solves this problem; see the documentation for the details.
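As a sketch against the example above (the +RECO target is illustrative; with OMF you can name just the disk group rather than a full file name):

SQL> alter database move datafile '+DATA/FOOdr/datafile/TEST_2009Q1.510.dbf' to '+RECO';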

Python Workshop for Women Indy #2 and CMH #2 coming up!

Catherine Devlin - Mon, 2013-08-19 14:59

The Midwest Python Workshop for women and their friends is back! We've got new workshops scheduled, ready to take new batches of students:

Indianapolis Python Workshop, Sep. 27-28, 2013; sponsored by Six Feet Up and hosted at Launch Fishers

Columbus Python Workshop, Oct. 18-19, 2013; sponsored by LeadingEdje and hosted at The Forge by Pillar

The Workshop is a free, friendly, hands-on introduction to computer programming using Python. Women of all ages and backgrounds are the primary target (but you can bring a male participant as your guest).

Please spread the word!

Deprecated database init. parameters in 12c

Syed Jaffar - Sun, 2013-08-18 01:39
Here is a very short blog post about the deprecated database initialization parameters in 12c.  It sometimes becomes essential to be familiar with the database features and init. parameters that are either desupported or deprecated in a new release of the Oracle database before proceeding with an upgrade. This is because some legacy application upgrade decisions depend, directly or indirectly, on those changes.
To obtain a list of deprecated database parameters in 12c, use the following SQL statement:

SQL> SELECT name from v$parameter WHERE isdeprecated = 'TRUE' ORDER BY name;
 
Here is the list:
 
active_instance_count
background_dump_dest
buffer_pool_keep
buffer_pool_recycle
commit_write
cursor_space_for_time
fast_start_io_target
global_context_pool_size
instance_groups
lock_name_space
log_archive_local_first
log_archive_start
max_enabled_roles
parallel_automatic_tuning
parallel_io_cap_enabled
parallel_server
parallel_server_instances
plsql_v2_compatibility
remote_os_authent
resource_manager_cpu_allocation
sec_case_sensitive_logon
serial_reuse
sql_trace
standby_archive_dest
user_dump_dest
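A related check worth running before an upgrade is to see which deprecated parameters are actually set in your configuration (a sketch; isdefault is 'FALSE' for explicitly set parameters):

SQL> SELECT name FROM v$parameter WHERE isdeprecated = 'TRUE' AND isdefault = 'FALSE' ORDER BY name;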
 
I am a little bit surprised to see the sec_case_sensitive_logon parameter in the list.

Session based sequences in 12c.

Gary Myers - Sat, 2013-08-17 21:40
Catching up on some blogs, I saw this 12c gem by Syed Jaffar Hussain.

Sequences are great. But sometimes they're just a little more unique than you actually need. Sort of like GUIDs. Start churning through them and they're quickly nine or ten digits. There's nothing wrong with long keys from a technical perspective as the extra byte or two of storage is rarely significant. But they can be greedy from a screen layout point of view (especially in tabular forms). And there's a greater chance of getting digits mixed up in email or conversations about them.

If you are doing a nightly or weekly load, it can be nice to load up your half-a-million rows with a 'batch id' and a 'sequence in batch'. Session-based sequences are a nice concept that gives you scope for keeping those values in a smaller range.
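The syntax is just an extra keyword on CREATE SEQUENCE (a quick sketch; GLOBAL is the default, while SESSION scopes the values to your session):

SQL> create sequence batch_line_seq session;
SQL> select batch_line_seq.nextval from dual;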

Another trick for reducing the size of IDs is to convert them into hex. You'll probably only save one character there though, and you end up with a mix of alphabetic and numeric characters. TO_CHAR and TO_NUMBER are both happy to use 'XXXXXXXX' format masks to convert between decimal and hex.
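For example:

SQL> select to_char(123456789, 'FMXXXXXXXX') hex from dual;

HEX
---------
75BCD15

SQL> select to_number('75BCD15', 'XXXXXXXX') n from dual;

         N
----------
 123456789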


Oracle SOA Suite 11g Performance Tuning Cookbook

Antony Reynolds - Tue, 2013-08-13 17:39

Just received this to review.

It’s a Java World

The first chapter identifies tools and methods to identify performance bottlenecks, generally covering low level JVM and database issues.  Useful material, but not really SOA specific; I think the authors missed the opportunity to share the knowledge they obviously have of how to relate these low level JVM measurements to SOA causes.

Chapter 2 uses the EMC Hyperic tool to monitor SOA Suite and so this chapter may be of limited use to many readers.  Many but not all of the recipes could have been accomplished using the FMW Control that ships and is included in the license of SOA Suite.  One of the recipes uses DMS, which is the built in FMW monitoring system built by Oracle before the acquisition of BEA.  Again this seems to be more about Hyperic than SOA Suite.

Chapter 3 covers performance testing using Apache JMeter.  Like the previous chapters, there is very little specific to SOA Suite; indeed, in my experience many SOA Suite implementations do not have a Web Service to initiate composites, relying instead on adapters.

Chapter 4 covers JVM memory management, this is another good general Java section but has little SOA specifics in it.

Chapter 5 is yet more Java tuning, in this case generic garbage collection tuning.  Like the earlier chapters, good material but not very SOA specific.  I can’t help feeling that the authors could have made more connections with SOA Suite specifics in their recipes.

Chapter 6 is called platform tuning, but it could have been titled miscellaneous tuning.  This includes a number of Linux optimizations, WebLogic optimizations and JVM optimizations.  I am not sure that yet another explanation of how to create a boot.properties file was needed.

Chapter 7 homes in on JMS & JDBC tuning in WebLogic.

SOA at Last

Chapter 8 finally turns to SOA specifics. Unfortunately the description of what dispatcher invoke threads do is misleading: they only control the number of threads retrieving messages from the request queue; synchronous web service calls do not use the request queue and hence do not use these threads.  Several of the recipes in this chapter do more than alter the performance characteristics, they also alter the semantics of the BPEL engine (such as “Changing a BPEL process to be transient”), and I wish there were more discussion of the impacts of these in the chapter.  I didn’t see any reference to the impact on recoverability of processes when turning on in-memory message delivery.  That said, the recipes do cover a lot of useful optimizations, and if used judiciously they will cause a boost in performance.

Chapter 9 covers optimizing the Mediator, primarily tweaking Mediator threading.  The descriptions of the impacts of changes in this chapter are very good, and give some helpful indications of whether they will apply to your environment.

Chapter 10 touches very lightly on Rules and Human Workflow; this chapter would have benefited from more recipes.  The two recipes for Rules do offer very valuable advice.  The two workflow recipes seem less valuable.

Chapter 11 takes us into the area where the greatest performance optimizations are to be found: the SOA composite itself.  Seven generally useful recipes are provided, and I would have liked to see more in this chapter, perhaps at the expense of some of the Java tuning in the first half of the book.  I have to say that I do not agree with the “Designing BPEL processes to reduce persistence” recipe; there are better, more maintainable and faster ways to deal with this.  The other recipes provide valuable ideas that may help the performance of your composites.

Chapter 12 promises “High Performance Configuration”.  Three of the recipes, on creating a cluster, configuring an HTTP plug-in and setting up distributed queues, are covered better in the Oracle documentation, particularly the Enterprise Deployment Guide.  There are, however, some good suggestions in the recipe about deploying on virtualized environments; I wish they had said more about this.  The recipe on JMS bridges is also a very valuable one that people should be aware of.

The Good, the Bad, and the Ugly

A lot of the recipes are really just trivial variations on other recipes; for example, they have one recipe on “Increasing the JVM heap size” and another on “Setting Xmx and Xms to the same value”.

Although the book spends a lot of time on Java tuning, that of itself is reasonable, as a lot of SOA performance tuning is tweaking JVM and WLS parameters.  I would have found it more useful if the dots were connected, relating the Java/WLS tuning sections to specific SOA use cases.

As the authors say when talking about adapter tuning “The preceding sets of recipes are the basics … available in Oracle SOA Suite. There are many other properties that can be tuned, but their effectiveness is situational, so we have chosen to focus on the ones that we feel give improvement for the most projects.”.  They have made a good start, and maybe in a 12c version of the book they can provide more SOA specific information in their Java tuning sections.

Add the book to your library; you are almost certain to find useful ideas in it. But make sure you understand the implications of the changes you are making, as the authors do not always spell out the impact on the semantics of your composites.

A sample chapter is available on the Packt Web Site.

Conditional Formatting of Calculated Items in OBIEE 11g

Chet Justice - Tue, 2013-08-13 15:24
By Victor Fagundo

Calculated items in OBIEE pivot tables can be very useful in certain reporting circumstances, either for ease of development or to meet specific report requirements. While calculated items in OBIEE are easy and flexible, they do have one important drawback: they take on the data and display formatting of the fact column they are calculated against.

The most common case is the calculation of a % change across a time dimension in financial reporting (Year over Year, Quarter over Quarter, etc.).¹  This type of calculation usually takes the form of a percent change calculation similar to the one below:

(($2 - $1) / $1) * 100

By default, if you perform this calculation against a numerical fact (sales, customers), you will run into the problem of how to display the % change in the correct format, since the calculated % will want to take the form of the fact it is calculated against, as can be seen in the example² below:


Figure 1 - Pivot Table

Figure 2 - Calculated item

Figure 3 - Results

Not very pretty at all.

As people have searched for a workaround to this problem, three common solutions have arisen:
  1. Use HTML formatting tricks to “hide” trigger text in the results, then conditionally format off those triggers. While inventive, as the comments note, this solution falls flat if the report is ever printed, as the PDF engine will pick up and display all of the hidden characters.

  2. Convert the pivot table to a regular table with some complex column formulas. Very time consuming and cumbersome; it would also not meet the requirement of showing the dimension values, as noted in footnote 1.

  3. Convert the calculated result to text and manually add your formatting characters. I don’t think this actually works, since the calculated fields won’t accept logical SQL functions, and it would be very cumbersome.
Now with 11g providing conditional formatting that allows you to override the default data format, this is possible via the following steps:
  1. Add a column that is a COUNT DISTINCT on the dimension that you are calculating across (in the displayed example, “Time T05 Per Name Year”). This column will serve as your “trigger” to apply your conditional formatting.

    Figure 4 - Column Formula

  2. For each of your facts, apply a conditional format that is triggered when the above column value is zero. In the formatting, apply whatever visual and data formats you desire. In this example we will format the data as a percent, with one decimal place.

    Figure 5 - Condition

    Figure 6 - Format when condition is met
  3. Exclude the “trigger” column from your pivot view. View your results and be satisfied:

    Figure 7 - Correct formatting of calculated item.

* Note that this would also allow you to apply visual formatting if you wanted to distinguish this row/column as a total.

Why it works

The use of conditional formatting that applies a data type as part of the format is a straightforward leap of logic, but what to use as the trigger? Most people will try to use the dimension they have set up the calculation in. However, if you try to use the text description given to the calculated item, you will find that the condition is never applied:

Figure 8 - Condition on dimension
Figure 9 - Condition never met, format never applied
If you try to set up a filter that is true when the dimension is not in a reasonable range of values (in this example we try to format off all years not in the 2000s), you will find that your calculated item is skipped as well (this has the added vulnerability of being very explicit):

Figure 10 - Condition on dimension values

Figure 11- Condition never met, format never applied

The reason for all this is that the calculated item “borrows” EACH of the dimension values it operates against. Hence, no matter how inventive your filter is, as long as you are trying to somehow separate the calculated member from the members it is operating on, you will never succeed. This member “borrowing” is apparent if you add the dimension it operates against to the query a second time, and look at the table view.

Figure 12 - Calculated item "borrows" members
But since the “member value” given to the calculated item does not actually exist in the dimension, if you try to perform a count distinct against it, you will always get zero.

Figure 13 - Count distinct against dimension
There is your difference; there is your “trigger.” The rest is basic formatting.

1: You might suggest that this requirement is better served using column(s) with time series calculations, and you might be right. However, more often than not the user will want to SEE the time periods being compared ( 2012 vs 2011, or 08/07/2012 vs 08/07/2011). When using facts with time series calculations you will only be able to show “this year” vs “last year” since the column heading of the time series calculated fact will always be static. In these cases you will need to use the base fact and a time dimension, along with the solution provided here.

2: All screenshots and examples used in this post were produced in Sample App V305. An XML of the final correctly formatted report can be downloaded here.
Categories: BI & Warehousing

WebLogic Admin Cookbook Review

Antony Reynolds - Tue, 2013-08-13 14:42
Review of Oracle WebLogic Server 12c Advanced Administration Cookbook

Like all of Packt’s cookbook titles, the book follows a standard format: a recipe, followed by an explanation of how it works, and then a discussion of additional recipe-related features and extensions.

When reading this book I tried out some of the recipes on an internal beta of 12.1.2 and they seemed to work fine on that future release.

The book starts with basic installation instructions that belie its title.  The author is keen to use console mode, which is often needed for servers that have no X11 client libraries; however, for all but the simplest of domains I find console mode very slow and difficult to use, and would suggest that where possible you persuade the OS admin to make X11 client libraries available, at least for the duration of the domain configuration.

Another pet peeve of mine is using nohup to start servers/services without redirecting output, with the result that you are left with nohup.out files scattered all over your disk.  The book falls into this trap.

However we soon sweep into some features of WebLogic that I believe are less understood such as using the pack/unpack commands and customizing the console screen.  The “Protecting changes in the Administration Console” recipe is particularly useful.

The next chapter covers HA configuration.  One of the nice things about this book is that most recipes are illustrated not only using the console but also using WLST.  The coverage of multiple NICs and dedicated network channels is very useful in the Exalogic world as well as in regular WLS clusters.  One point I would quibble with is the setting up of HA for the AdminServer.  I would always do this with a shared file system rather than copying files around; I would also prefer a floating IP address to avoid having to update the DNS.

Other chapters cover JDBC & JMS, Monitoring, Stability, Performance and Security.

Overall the recipes are useful; I certainly learned some new ways of doing things.  The WLST example code is a real plus.  Well worth adding to your WebLogic Admin library.

The book is available on the Packt website.

RMAN 12c – More SQL without ‘sql’

alt.oracle - Mon, 2013-08-12 19:03
Up until version 12c, the RMAN command line parser was capable of parsing certain DML and DDL statements.  To do this, we use the “sql” command from the RMAN command line, like this.

11gHost> rman

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Aug 7 15:29:19 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target
connected to target database: TEST2 (DBID=1111111111)

RMAN> sql 'create table altdotoracle (col1 number)';
sql statement: create table altdotoracle (col1 number)

RMAN> sql 'insert into altdotoracle values (1)';
sql statement: insert into altdotoracle values (1)

However, one operation we could not do was SELECT.  The RMAN parser lacked the facility to return arrays of data to the screen.  This shouldn’t be surprising, considering that RMAN is intended to be used to run backup and restore operations.

RMAN> sql 'select * from altdotoracle';
sql statement: select * from altdotoracle

The statement does not error, but no data is returned.  Now, in version 12c, we get two new benefits in this area.  First, we no longer need to use the ‘sql’ command.  For example…

12cHost> rman

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Aug 7 15:35:27 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target
connected to target database: TEST1 (DBID=1212121212)

RMAN> create table altdotoracle (col1 number);
Statement processed

RMAN> insert into altdotoracle values (1);
Statement processed

Additionally, the RMAN parser in 12c allows us to run SELECT commands, as well as DESCRIBE commands.

RMAN> select * from altdotoracle;

      COL1
----------
         1

RMAN> describe altdotoracle

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER

The ability to use the ‘sql’ RMAN command is still supported for backward compatibility.  A simple thing, but useful!
Categories: DBA Blogs

I'm available

Catherine Devlin - Sun, 2013-08-11 21:11

I'm available for hire! If you need a database expert with lots of programming skill, or a Python programmer with deep database experience, please check out:

Resume / portfolio

But: you must be telecommute-friendly, or in the Dayton area. I'm sorry, but I'm not available to relocate.

Coherence 12.1.2 Rest application build with OEPE

Edwin Biemond - Sun, 2013-08-11 16:20
With WebLogic 12.1.2 Oracle also released a new version of Coherence and OEPE. The 12.1.2 release contains many new Coherence features like WebLogic Managed Coherence Servers and the Coherence Grid Archive (GAR), which can be included in a normal EAR. Coherence also has some nice new REST features like direct & named queries, custom query engines and new security options. Plus with OEPE you can
