Feed aggregator

Understanding DBMS_STATS.SET_*_PREFS procedures

Inside the Oracle Optimizer - Tue, 2009-08-11 21:25
In previous database releases you had to use the DBMS_STATS.SET_PARAM procedure to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures. The scope of any change made this way was all subsequent operations. In Oracle Database 11g, the DBMS_STATS.SET_PARAM procedure has been deprecated and replaced by a set of procedures that allow you to set a preference for each parameter at the table, schema, database, or global level. These new procedures are called DBMS_STATS.SET_*_PREFS and offer a much finer granularity of control.
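For example (a minimal sketch, with ESTIMATE_PERCENT and the value 10 chosen purely for illustration), the deprecated call and its 11g replacement look like this in SQL*Plus:

-- pre-11g, now deprecated: one default for all subsequent operations
exec DBMS_STATS.SET_PARAM('ESTIMATE_PERCENT', '10');

-- Oracle Database 11g: the same default expressed as a global preference
exec DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT', '10');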

However, there has been some confusion about which procedure you should use when, and what the hierarchy is among these procedures. In this post we hope to clear up the confusion. Let's start by looking at the list of parameters you can change using the DBMS_STATS.SET_*_PREFS procedures.

  • AUTOSTATS_TARGET (SET_GLOBAL_PREFS only)
  • CASCADE
  • DEGREE
  • ESTIMATE_PERCENT
  • METHOD_OPT
  • NO_INVALIDATE
  • GRANULARITY
  • PUBLISH
  • INCREMENTAL
  • STALE_PERCENT

As mentioned above, there are four DBMS_STATS.SET_*_PREFS procedures.

  1. SET_TABLE_PREFS

  2. SET_SCHEMA_PREFS

  3. SET_DATABASE_PREFS

  4. SET_GLOBAL_PREFS


The DBMS_STATS.SET_TABLE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for the specified table only.

The DBMS_STATS.SET_SCHEMA_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the existing objects in the specified schema. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in the specified schema. Because it uses DBMS_STATS.SET_TABLE_PREFS, calling this procedure will not affect any new objects created after it has been run; new objects will pick up the GLOBAL_PREF values for all parameters.

The DBMS_STATS.SET_DATABASE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the user-defined schemas in the database. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in each of the user-defined schemas. Because it uses DBMS_STATS.SET_TABLE_PREFS, this procedure will not affect any new objects created after it has been run; new objects will pick up the GLOBAL_PREF values for all parameters. It is also possible to include the Oracle-owned schemas (SYS, SYSTEM, etc.) by setting the ADD_SYS parameter to TRUE.

The DBMS_STATS.SET_GLOBAL_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for any object in the database that does not have an existing table preference. All parameters default to the global setting unless there is a table preference set or the parameter is explicitly set in the DBMS_STATS.GATHER_*_STATS command. Changes made by this procedure will affect any new objects created after it has been run, as new objects pick up the GLOBAL_PREF values for all parameters.
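As a minimal illustration of the different scopes (the SH schema, SALES table, and parameter values here are just placeholders), the preferences might be set like this:

-- this table only
exec DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'STALE_PERCENT', '5');

-- every existing table in the SH schema
exec DBMS_STATS.SET_SCHEMA_PREFS('SH', 'ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');

-- every existing table in all user-defined schemas (ADD_SYS defaults to FALSE)
exec DBMS_STATS.SET_DATABASE_PREFS('PUBLISH', 'FALSE');

-- everything else, including objects created later
exec DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', '4');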

With GLOBAL_PREFS it is also possible to set a default value for one additional parameter, called AUTOSTATS_TARGET. This parameter controls which objects the automatic statistics gathering job (that runs in the nightly maintenance window) will look after. The possible values for this parameter are ALL, ORACLE, and AUTO. ALL means the automatic statistics gathering job will gather statistics on all objects in the database. ORACLE means that it will only gather statistics for the Oracle-owned schemas (SYS, SYSTEM, etc.). Finally, AUTO (the default) means Oracle will decide which objects to gather statistics on. Currently AUTO and ALL behave the same.

In summary, DBMS_STATS obeys the following hierarchy for parameter values: parameter values set in the DBMS_STATS.GATHER_*_STATS command override everything else; if the parameter has not been set in the command, we check for a table-level preference; if there is no table preference set, we use the global preference.
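To check which value will actually be used for a particular table, DBMS_STATS.GET_PREFS resolves this hierarchy for you (a quick sketch; the owner and table name are placeholders):

select DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SH', 'SALES') as stale_percent
from dual;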
Categories: DBA Blogs, Development

How did we speed up converting a non-ASM single-instance database to a RAC database with ASM using the RCONFIG tool?

Sabdar Syed - Sat, 2009-08-08 16:07
I was given the challenging task of converting one of our critical production databases, which is 1 TB (terabyte) in size, to Oracle 10g RAC with the ASM storage option. Even though there are many methods and tools available to perform this activity, I preferred to use the RCONFIG tool.

We prepared the input XML file required by the RCONFIG tool and ran the RCONFIG utility as follows:

$ cd /oracle/ora102/db_1/assistants/rconfig/sampleXMLs
$ rconfig ConvertToRAC.xml
When the RCONFIG tool is started to convert the database to RAC, it first moves all the non-ASM database files to ASM. To do this, RCONFIG internally invokes the RMAN utility to back up the target database to the ASM disk groups; the database is then converted to RAC using RCONFIG.
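As the rconfig.log extract later in this post shows, the command RCONFIG issues under the covers is essentially an RMAN image-copy backup into the ASM disk group:

RMAN> backup as copy database to destination '+DATA_DG';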

The conversion took almost 9 hours to complete, because during the conversion RMAN used only one channel to back up the datafiles to the ASM disks. There is no way to improve the RMAN copy process by allocating more channels in the input XML file, and Oracle does not recommend making other changes to the input XML file either.

One thing observed during the RMAN copy was that RMAN uses the target database control file instead of a recovery catalog, and that it also uses the default preconfigured RMAN settings for that database.

To view the default preconfigured RMAN settings for the database:

$ export ORACLE_SID=MYPROD
$ rman target /
Recovery Manager: Release 10.2.0.4.0 - Production on Wed Aug 5 10:21:05 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: MYPROD (DBID=1131234567)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/ora102/db_1/dbs/snapcf_T24MIG1.f'; # default
Here we see that PARALLELISM is 1 (the default); that is why RMAN used only one channel while backing up the non-ASM datafiles to the ASM disk groups, and why the backup took 9 hours to complete.

We changed the PARALLELISM count to 6 (a suitable value depends on the number of CPUs in the server).

Solution:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/ora102/db_1/dbs/snapcf_T24MIG1.f'; # default
After changing the PARALLELISM count to 6, RMAN allocated 6 channels, the conversion process improved greatly, and the downtime dropped drastically to 4 hours 30 minutes.

The following is an extract from the rconfig.log file, which is located under:

$ORACLE_HOME/cfgtoollogs/rconfig
............................................................................
............................................................................
............................................................................
[17:17:16:43] Log RMAN Output=RMAN> backup as copy database to destination '+DATA_DG';
[17:17:16:53] Log RMAN Output=Starting backup at 04-AUG-09
[17:17:16:258] Log RMAN Output=using target database control file instead of recovery catalog
[17:17:16:694] Log RMAN Output=allocated channel: ORA_DISK_1
[17:17:16:698] Log RMAN Output=channel ORA_DISK_1: sid=866 devtype=DISK
[17:17:17:9] Log RMAN Output=allocated channel: ORA_DISK_2
[17:17:17:13] Log RMAN Output=channel ORA_DISK_2: sid=865 devtype=DISK
[17:17:17:324] Log RMAN Output=allocated channel: ORA_DISK_3
[17:17:17:327] Log RMAN Output=channel ORA_DISK_3: sid=864 devtype=DISK
[17:17:17:637] Log RMAN Output=allocated channel: ORA_DISK_4
[17:17:17:641] Log RMAN Output=channel ORA_DISK_4: sid=863 devtype=DISK
[17:17:17:967] Log RMAN Output=allocated channel: ORA_DISK_5
[17:17:17:971] Log RMAN Output=channel ORA_DISK_5: sid=862 devtype=DISK
[17:17:18:288] Log RMAN Output=allocated channel: ORA_DISK_6
[17:17:18:293] Log RMAN Output=channel ORA_DISK_6: sid=861 devtype=DISK
[17:17:20:416] Log RMAN Output=channel ORA_DISK_1: starting datafile copy
[17:17:20:427] Log RMAN Output=input datafile fno=00053 name=/oradata/MYPROD/users_01.dbf
[17:17:20:532] Log RMAN Output=channel ORA_DISK_2: starting datafile copy
[17:17:20:544] Log RMAN Output=input datafile fno=00021 name=/oradata/MYPROD/users_02.dbf
[17:17:20:680] Log RMAN Output=channel ORA_DISK_3: starting datafile copy
[17:17:20:694] Log RMAN Output=input datafile fno=00022 name=/oradata/MYPROD/users_03.dbf
[17:17:20:786] Log RMAN Output=channel ORA_DISK_4: starting datafile copy
[17:17:20:800] Log RMAN Output=input datafile fno=00023 name=/oradata/MYPROD/users_04.dbf
[17:17:20:855] Log RMAN Output=channel ORA_DISK_5: starting datafile copy
[17:17:20:868] Log RMAN Output=input datafile fno=00024 name=/oradata/MYPROD/users_05.dbf
[17:17:20:920] Log RMAN Output=channel ORA_DISK_6: starting datafile copy
[17:17:20:930] Log RMAN Output=input datafile fno=00011 name=/oradata/MYPROD/users_06.dbf
............................................................................
............................................................................
............................................................................
[21:29:5:518] Log RMAN Output=Finished backup at 04-AUG-09
............................................................................
............................................................................
............................................................................

[21:39:10:723] [NetConfig.startListenerResources:5] started Listeners associated with database MYPROD
[21:39:10:723] [Step.execute:255] STEP Result=Operation Succeeded
[21:39:10:724] [Step.execute:284] Returning result:Operation Succeeded
[21:39:10:724] [RConfigEngine.execute:68] bAsyncJob=false
[21:39:10:725] [RConfigEngine.execute:77] Result= <RConfig version="1.1">

<ConvertToRAC>
  <Convert>
    <Response>
      <Result code="0">
        Operation Succeeded
      </Result>
    </Response>
    <ReturnValue type="object">
      <Oracle_Home>
        /oracle/ora102/db_1
      </Oracle_Home>
      <SIDList>
        <SID>MYPROD1</SID>
        <SID>MYPROD2</SID>
      </SIDList>
    </ReturnValue>
  </Convert>
</ConvertToRAC>
</RConfig>

Note: For readability, the above output has been trimmed. You can also observe that 6 channels were allocated, the backup start and end times, and the success code at the end of the rconfig.log file.

References:

To learn more about the RCONFIG tool and related Metalink references, please take a look at the blog post below, written by Mr. Syed Jaffar Hussain.

http://jaffardba.blogspot.com/2008/09/my-experience-of-converting-cross.html

Oracle 10g R2 Documentation information on RCONFIG:

http://download.oracle.com/docs/cd/B19306_01/install.102/b14205/cvrt2rac.htm#BABBAAEH

Regards,
Sabdar Syed,
http://sabdarsyed.blogspot.com/

The Humble PL/SQL Dot

Tahiti Views - Sat, 2009-08-08 14:20
Like many other languages, PL/SQL has its own "dot notation". If we assume that most people can intuit or easily look up things like the syntax for IF/THEN/ELSIF, that means that first-time users might quickly run into dots and want to understand their significance. The authoritative docs on the dots are in the Oracle Database 11g PL/SQL Language Reference, in particular Appendix B, How PL/...

Detecting Corrupt Data Blocks

Jared Still - Thu, 2009-08-06 16:50
Or more accurately, how not to detect corrupt data blocks.

This thread on Oracle-L is regarding lost writes on a database.

One suggestion was made to use the exp utility to export the database, thereby determining if there are corrupt blocks in the database due to disk failure. I didn't give it much thought at first, but fellow Oak Table member Mark Farnham got me thinking about it.

Using exp to detect corrupt blocks, or rather, the absence of corrupt blocks may work, but then again, it may not. It is entirely possible to do a  full table scan on a table successfully, as would happen during an export, even though the blocks on disk have been corrupted.

This can be demonstrated by building a table, ensuring the contents are cached, then destroying the data in the data file, followed by a successful export of the table.

Granted, there are a lot of mitigating factors that could be taken into consideration as to whether or not this would happen in a production database. That's not the point: the point is that it could happen, so exp is not a reliable indicator of the state of the data files on disk.

This test was performed on Oracle 10.2.0.4 EE on RH Linux ES 4. Both are 32 bit.

First create a test tablespace:

create tablespace lost_write datafile '/u01/oradata/dv11/lost_write.dbf' size 1m
extent management local
uniform size 64k
/



Next the table LOST_WRITE is created in the tablespace of the same name. This will be used to test the assertion that a successful export of the table can be done even though the data on disk is corrupt.

create table lost_write
cache
tablespace lost_write
as
select * from dba_objects
where rownum <= 1000
/

begin
dbms_stats.gather_table_stats(user,'LOST_WRITE');
end;
/

select tablespace_name, blocks, bytes
from user_segments
where segment_name = 'LOST_WRITE'
/


TABLESPACE_NAME                    BLOCKS      BYTES
------------------------------ ---------- ----------
LOST_WRITE                             16     131072

1 row selected.



Next, do a full table scan and verify that the blocks are cached:

select * from lost_write;

Verify in cache:
select file#,block#,class#, status
from v$bh where ts# = (select ts# from sys.ts$ where name = 'LOST_WRITE')
order by block#
/

FILE# BLOCK# CLASS# STATUS
---------- ---------- ---------- -------
40 2 13 xcur
40 3 12 xcur
40 9 8 xcur
40 10 9 xcur
40 11 4 xcur
40 12 1 xcur
40 13 1 xcur
40 14 1 xcur
40 15 1 xcur
40 16 1 xcur
40 17 1 xcur
40 18 1 xcur
40 19 1 xcur
40 20 1 xcur
40 21 1 xcur
40 22 1 xcur
40 23 1 xcur




Now swap the bytes in the file, skipping the first 2 Oracle blocks.
Caveat: I don't know if that was the correct number of blocks, and I didn't spend any time trying to find out.
Also, I belatedly saw that the count probably should have been 22 rather than 16, but the results still served the purpose of corrupting the datafile, as we shall see in a bit.

What this dd command is doing is using the same file for both input and output, and rewriting blocks 3-18, swapping each pair of bytes.

dd if=/u01/oradata/dv11/lost_write.dbf of=/u01/oradata/dv11/lost_write.dbf bs=8129 skip=2 count=16 conv=swab,notrunc



The effect is demonstrated by this simple test:

jkstill-19 > echo hello | dd
hello
0+1 records in
0+1 records out
[ /home/jkstill ]

jkstill-19 > echo hello | dd conv=swab
ehll
o0+1 records in
0+1 records out


Now we can attempt the export:

exp tables=\(jkstill.lost_write\) ...

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Oracle Label Security, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table LOST_WRITE 1000 rows exported
Export terminated successfully without warnings.


So, even though the data on disk has been corrupted, the export succeeded. That is due to the table being created with the CACHE option, and all the blocks being cached at the time of export. It may not be necessary to use the CACHE option, but I used it to ensure the test would succeed.

Now let's see what happens when trying to scan the table again. First the NOCACHE option will be set on the table, then a checkpoint.

10:42:45 dv11 SQL> alter table lost_write nocache;

10:43:02 dv11 SQL> alter system checkpoint;

Now try to scan the table again:

10:43:14 ordevdb01.radisys.com - js001292@dv11 SQL> /
select * from lost_write
*
ERROR at line 1:
ORA-00376: file 40 cannot be read at this time
ORA-01110: data file 40: '/u01/oradata/dv11/lost_write.dbf'



A corollary conclusion can be drawn from this example.

If you do discover bad data blocks, you just might be able to do an export of the tables that are in the affected region before doing any recovery. This might be a belt-and-suspenders approach, but DBAs are not generally known for taking unnecessary chances when possible data loss is on the line.
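If the goal is to positively verify the blocks on disk rather than rely on exp, a more direct check (a sketch, not part of the test above) is to let RMAN scan the datafiles and then query the corruption view it populates:

RMAN> backup validate check logical database;

-- any corrupt blocks found by the validation are listed here
select file#, block#, blocks, corruption_type
from v$database_block_corruption;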
Categories: DBA Blogs

Empezando con OCCI

Mark A. Williams - Tue, 2009-07-28 17:45

Luis Neri was kind enough to translate my "Getting Started with OCCI (Windows Version)" post into Spanish. Below is his translation of the original post. Thanks very much to Luis for this. However, please note that I do not speak Spanish, so I won't be able to respond to any comments in that language.

- Mark


 


 

The Oracle C++ Call Interface, also known as OCCI, is an API built on top of other lower-level Oracle APIs. One of the goals of OCCI is to give C++ programmers easy access to Oracle databases in a way similar to what Java programmers have with Java Database Connectivity (JDBC). This document tries to give a quick overview of getting started with this technology, which can be incorporated into the SIRAN GIS developments and applications; if desired, there is more information in the Oracle online documentation.

http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28390/toc.htm

This "Getting Started with OCCI" is intended to describe a method for setting up an environment that uses OCCI in C++ development under Windows to access Oracle databases.

The environment

The environment in which the tests to get this technology working were performed is detailed below; note that small adaptations can be made so that it works in your own particular environment.

  • Oracle Database Client: Oracle 11.0.2
  • Oracle Database: Oracle 10.2
  • Development Machine: Usser3 Geoware, Windows Vista Home Premium 32-bit
  • Development IDE: Microsoft Visual C++ 2008 Professional (Windows SDK also installed) SP1
  • Oracle Client: Oracle Instant Client with OCCI

Important information

One of the most important things when working with OCCI is to make sure that all the components of the development and runtime environments are supported in combination and that you have the correct versions. This is strongly emphasized: if you do not pay close attention to this recommendation you will almost certainly run into problems. To achieve this, please check the correct version combinations on the Oracle Technology Network (OTN) site.

Download the correct components

At the time of writing this document, the following components were found for the environment described above:

  • OCCI 11.1.0.6 (Visual C++9 (VS 2008)[Windows 32-bit])
  • Instant Client Version 11.1.0.6

These components should be downloaded to the development machine. I did so onto the desktop.

  • OCCI 11.1.0.6 (Visual C++9 (VS 2008)[Windows 32-bit]) - occivc9win32_111060.zip
  • Instant Client Package - Basic: instantclient-basic-win32-11.1.0.6.0.zip
  • Instant Client Package - SDK: instantclient-sdk-win32-11.1.0.6.0.zip
  • Instant Client Package - SQL*Plus: instantclient-sqlplus-win32-11.1.0.6.0.zip  (optional, but installing it is recommended)

Installing Instant Client

The installation is as simple as unzipping the file. Unzip it onto C:\; the result will be a C:\instantclient_11_1 folder with the subfolders "sdk", "vc8", and "vc7" (the "vc8" and "vc7" folders will not be used for development in our environment).


 

Installing the OCCI Package

As with the Instant Client installation, the OCCI packages can simply be unzipped; however, instead of creating a directory on C:\, I unzipped them onto the desktop. Once they were unzipped, I deviated a little from what the occivc9_111060_readme.txt file says and did the following:

Create a "vc9" directory inside the "sdk" directory as follows:

C:\instantclient_11_1\sdk\lib\msvc\vc9

Create a "vc9" folder inside "instantclient_11_1" as follows:

C:\instantclient_11_1\vc9

Delete the oraocci11.dll and oraocci11.sym files from the C:\instantclient_11_1 directory. These files are not built for compiling with Visual Studio 2008 and, as mentioned earlier, it is important that the versions match.

The OCCI files were extracted into the folder on the desktop; move the following two files into the C:\instantclient_11_1\sdk\lib\msvc\vc9 folder created earlier:

  • oraocci11.lib
  • oraocci11d.lib

From the same folder on the desktop, move the following files into the C:\instantclient_11_1\vc9 folder created earlier:

  • oraocci11.dll
  • oraocci11.dll.manifest
  • oraocci11d.dll
  • oraocci11d.dll.manifest


 

Finally, delete the oraocci11.lib file from:

C:\instantclient_11_1\sdk\lib\msvc

Again, this file is not compatible with our environment.

After performing these steps, the .lib files should be under the C:\instantclient_11_1\sdk\lib\msvc folder and the .dll and .manifest files should be under the C:\instantclient_11_1 folder. These steps may seem like extra work, but they result in a complete separation of the various OCCI versions and make things easier and more explicit.

To specify which version of the OCCI libraries to use, add these folders to the system path. These two folders should be at the beginning of the path:

C:\instantclient_11_1\vc9;C:\instantclient_11_1;{and the rest of the path…}

Configuring Visual Studio

The Windows environment has now been configured to use the new OCCI and Instant Client packages (the ones appropriate for our environment), but before starting development in Visual Studio a few options need to be set. Without these options Visual Studio is unable to find the correct files and build applications. There are two options which need to be specified:

  • Include files – allows Visual Studio to find the header files for OCCI
  • Library files – allows Visual Studio to find the library files for OCCI

Using Visual C++ 2008, the menu paths where these options are specified are:

  • Tools –> Options… expand the "Projects and Solutions" node, select "VC++ Directories", under "Show directories for:" select "Include files", double-click below the last entry to open a new entry field, enter the path "C:\instantclient_11_1\sdk\include", and press Enter
  • Under "Show directories for:" select "Library files", double-click below the last entry to open a new entry field, enter the path "C:\instantclient_11_1\sdk\lib\msvc\vc9", and press Enter
  • Press OK to save the options.

Create a simple test project

Now that all the setup has been done, the environment is configured! Use the following project as a test to verify that everything works as expected. Again, this is a simple example to verify that things work correctly. It is not a development template.

In Visual C++ 2008, create a project by selecting File –> New –> Project… from the main menu, select "Win32" as the project type, select "Win32 Console Application", give the project a name (I used prueba_occi), select a folder to save it in, deselect "Create directory for solution", and press OK.

Press Next in the wizard, deselect Precompiled header, select Empty project, and press Finish.

In the Solution Explorer, right-click on Header Files, select Add, select New Item…

In Add New Item, select Header File (.h), enter Employees.h (or any other name) as the name and press Add.

/*
* A simple OCCI test application
* This file contains the Employees class declaration
*/

#include <occi.h>
#include <iostream>
#include <iomanip>

using namespace oracle::occi;
using namespace std;

class Employees {
public:
  Employees();
  virtual ~Employees();

  void List();

private:
  Environment *env;
  Connection  *con;

  string user;
  string passwd;
  string db;
};

In the Solution Explorer, right-click on Source Files, select Add, select New Item…

In Add New Item, select C++ File (.cpp), enter Employees.cpp (or any other name) as the name and press Add.

This is the content of my file on the system:

/*
* A simple OCCI test application
* This file contains the Employees class implementation
*/

#include "Employees.h"

using namespace std;
using namespace oracle::occi;

int main (void)
{
  /*
   * create an instance of the Employees class,
   * invoke the List member, delete the instance,
   * and prompt to continue...
   */

  Employees *pEmployees = new Employees();

  pEmployees->List();

  delete pEmployees;

  cout << "ENTER to continue...";

  cin.get();

  return 0;
}

Employees::Employees()
{
  /*
   * connect to the test database as the HR
   * sample user and use the EZCONNECT method
   * of specifying the connect string. Be sure
   * to adjust for your environment! The format
   * of the string is host:port/service_name
   */

  user = "hr";
  passwd = "hr";
  db = "oel01:1521/OEL11GR1.SAND";

  env = Environment::createEnvironment(Environment::DEFAULT);

  try
  {
    con = env->createConnection(user, passwd, db);
  }
  catch (SQLException& ex)
  {
    cout << ex.getMessage();

    exit(EXIT_FAILURE);
  }
}

Employees::~Employees()
{
  env->terminateConnection (con);

  Environment::terminateEnvironment (env);
}

void Employees::List()
{
  /*
   * simple test method to select data from
   * the employees table and display the results
   */

  Statement *stmt = NULL;
  ResultSet *rs = NULL;
  string sql = "select employee_id, first_name, last_name " \
               "from employees order by last_name, first_name";

  try
  {
    stmt = con->createStatement(sql);
  }
  catch (SQLException& ex)
  {
    cout << ex.getMessage();
  }

  if (stmt)
  {
    try
    {
      stmt->setPrefetchRowCount(32);

      rs = stmt->executeQuery();
    }
    catch (SQLException& ex)
    {
      cout << ex.getMessage();
    }

    if (rs)
    {
      cout << endl << setw(8) << left << "ID"
           << setw(22) << left << "FIRST NAME"
           << setw(27) << left << "LAST NAME"
           << endl;
      cout << setw(8) << left << "======"
           << setw(22) << left << "===================="
           << setw(27) << left << "========================="
           << endl;

      while (rs->next()) {
        cout << setw(8) << left << rs->getString(1)
             << setw(22) << left << (rs->isNull(2) ? "n/a" : rs->getString(2))
             << setw(27) << left << rs->getString(3)
             << endl;
      }

      cout << endl;

      stmt->closeResultSet(rs);
    }

    con->terminateStatement(stmt);
  }
}


 

Before building the example, the OCCI library needs to be added to the linker's list of inputs:

Select Project –> prueba_occi Properties... from the menu (substitute your own project name if necessary).

Expand the Configuration Properties node, expand the Linker node, select the Input item, and enter "oraocci11d.lib" for a debug build or "oraocci11.lib" for a release build.

Select Build –> Build Solution from the menu to build the solution. If everything is set up correctly there should be no errors. If there are errors, find out where they are and correct them. The output of running the program should look like this:

ID      FIRST NAME            LAST NAME
======  ====================  =========================
174     Ellen                 Abel
166     Sundar                Ande
130     Mozhe                 Atkinson
105     David                 Austin
204     Hermann               Baer
116     Shelli                Baida
167     Amit                  Banda
172     Elizabeth             Bates

[ snip ]

120     Matthew               Weiss
200     Jennifer              Whalen
149     Eleni                 Zlotkey

ENTER to continue...

If you are new to using OCCI on Windows with Visual Studio 2008, perhaps the example above can be of help in getting started.

Joe's Blog: 15 Minutes of Fame

Joe Fuda - Sat, 2009-07-25 03:00

They say everyone gets at least 15 minutes of fame in their lifetime. Here's my total to-date.

1 minute (Middle School): my picture and some artwork appeared in The Toronto Star after I won their weekly cartoon contest for kids

30 seconds (High School): I was pictured in The Etobicoke Guardian performing a welding demonstration at a local shopping mall (this doesn't count for a full minute because a welding mask covered my face in the picture)

1 minute (University): I was pictured in The Toronto Star again, this time they caught me with my arm dyed purple, pants rolled up, and wearing a yellow hard hat as I waded through a fountain in front of Toronto City Hall during a University of Toronto Engineering hazing ritual

Yesterday a couple of minutes were added to that total when I was featured in Oracle's Innovation Showcase. As part of our 100-day countdown to Oracle OpenWorld Oracle is posting interviews with 100 of its top innovators. I was "Innovator of the Day" this past Friday, though I'm still listed there today too for some reason. I guess if you're lucky enough to be picked on a Friday then you become "Innovator of the Weekend" by default. You can find the full interview at this link, where it will reside even after my visage fades from the spotlight of the main showcase page.

So let's see, that leaves me with 10 minutes and 30 seconds of future fame left. I wonder what The Toronto Star will catch me doing next?


...

Will the Optimizer Team be at Oracle Open World 2009?

Inside the Oracle Optimizer - Thu, 2009-07-23 20:18
With only two and a half months to go until Oracle Open World in San Francisco, October 11-15th, we have gotten several requests asking if we plan to present any sessions at the conference.

We have two sessions and a demo station in the Database campground at this year's show: a technical presentation, What to Expect from the Oracle Optimizer When Upgrading to Oracle Database 11g, and the Oracle Optimizer Roundtable.

The technical session, which is on Tuesday Oct. 13 at 2:30 pm, gives step-by-step instructions and detailed examples of how to use the new 11g features to ensure your upgrade goes smoothly and without any SQL plan regressions.

The roundtable, which is on Thursday Oct. 15th at 10:30 am, will give you a first-hand opportunity to pose your burning Optimizer and statistics questions directly to a panel of our leading Optimizer developers. In fact, if you plan to attend the roundtable and already know what questions you would like to ask, then please send them to us via email and we will be sure to include them. Otherwise, you can hand in your questions at our demo station at any stage during the week, or as you enter the actual session. Just be sure to write your questions in clear block capitals!

We look forward to seeing you all at Oracle Open World.

Categories: DBA Blogs, Development

Initial version of DataMapper Oracle adapter

Raimonds Simanovskis - Mon, 2009-07-20 16:00

What is DataMapper?

DataMapper is a Ruby object/relational mapper that is similar to ActiveRecord (a component of Ruby on Rails), but it handles several things differently than ActiveRecord.

I got interested in DataMapper because I liked some of its design decisions better when compared with ActiveRecord. In particular, DataMapper's architecture can suit you better if you need to work with legacy Oracle database schemas – that is the area where I use Ruby on Rails a lot, and for these purposes I also created the Oracle enhanced adapter for ActiveRecord.

But as there was no Oracle adapter available for DataMapper, I needed to create one :) I started to work on the Oracle adapter for DataMapper after RailsConf, and now it is passing all DataMapper tests on all Ruby platforms – MRI 1.8, Ruby 1.9 and JRuby 1.3.

Why DataMapper for Oracle database?

If you would like to learn main differences between DataMapper and ActiveRecord then please start with this overview and this summary of benefits.

Here I will mention specific benefits if you would like to use DataMapper with Oracle database.

Model properties

In DataMapper you always specify in model class definition what Ruby “type” you would like to use for each model attribute (or property as called in DataMapper):

class Post
  include DataMapper::Resource
  property :id,         Serial
  property :title,      String
  property :post_date,  Date
  property :created_at, DateTime
  property :updated_at, Time
end

The main benefit of this is that you can explicitly define when to use the Ruby Time, Date or DateTime class, which is stored as DATE (or sometimes as TIMESTAMP) in the Oracle database. In addition, you can define your own custom DataMapper types and define how to serialize them into the database.

Composite primary keys

DataMapper core library supports composite primary keys for models. If you use ActiveRecord then there is an option to use the additional composite_primary_keys gem, but it regularly breaks with the latest ActiveRecord versions and quite often it might also break in some edge cases. In DataMapper, composite primary keys are defined quite simply:

class City
  include DataMapper::Resource
  property :country,   String, :key => true
  property :name,      String, :key => true
end
Legacy schemas

DataMapper is quite useful when you want to put Ruby models on top of existing Oracle schemas. It is possible to provide different database field name for property or provide custom sequence name for primary keys:

class Post
  include DataMapper::Resource
  property :id, Serial, :field => "post_id", :sequence => "post_s"  
end

You can also define one model that can be persisted in two different repositories (e.g. databases or schemas) and use different naming conventions in each repository:

class Post
  include DataMapper::Resource
  repository(:old) do
    property :id, Serial, :field => "post_id", :sequence => "post_s"
  end
  repository(:default) do
    property :id, Serial
  end
end

As a result DataMapper can be used also for data migration between different databases.

Bind variables

ActiveRecord always generates SQL statements for execution as one single string. Therefore the Oracle enhanced adapter always initializes the Oracle session with cursor_sharing='similar'. This instructs Oracle to take all literals (constants) from the SQL statement and replace them with bind variables. It reduces the number of unique SQL statements generated, but it also adds some overhead for the Oracle optimizer.

DataMapper always passes all statement parameters separately to the corresponding database adapter, and therefore it is possible for the Oracle adapter to pass all parameters as bind variables to Oracle.

CLOB and BLOB values inserting and selecting

As ActiveRecord passes all inserted values as literals in the INSERT statement, it was not possible to insert large CLOB and BLOB values directly in the INSERT statement. Therefore the ActiveRecord Oracle enhanced adapter used separate callbacks to insert any CLOB or BLOB data after the INSERT of the other data. In DataMapper it is possible to insert all data at once, as CLOB and BLOB data are passed as bind variables.

DataMapper also handles lazy loading of large columns better. If you define a property as Text then by default it will not be selected from the database – it will be selected separately, only when you use it. Typically this can reduce the amount of data that needs to be sent from the database to the application, as Text properties are quite often not needed on, for example, all web pages.

Why not DataMapper?

If you are fine with ActiveRecord's default conventions and you don't have any of the issues that I listed previously, then ActiveRecord is probably good enough for you and you shouldn't change to DataMapper. There are of course many more Rails plugins that work with ActiveRecord but not yet with DataMapper. And DataMapper is still much less used, and therefore there might be some edge cases where it is not tested and you will need to find the cause of the issue by yourself.

But if you like to try new things then please try it out – and the DataMapper community is quite friendly and helpful and will help to solve any issues :)

Installation of DataMapper Oracle adapter

So if you have decided to try using DataMapper with an Oracle database, then follow the instructions below on how to install it.

Oracle support is implemented for the current development version 0.10.0 of DataMapper – therefore you will need to install the latest versions from GitHub (they are not yet published as gems on RubyForge).

DataMapper with the Oracle adapter can be used on MRI 1.8.6 (I am not testing it on 1.8.7) and Ruby 1.9.1, as well as on JRuby 1.3. Currently installation is tested on Mac OS X and Linux – if anyone is interested in Windows support then please let me know.

MRI 1.8.6 or Ruby 1.9.1

At first you need to have the same preconditions as for ActiveRecord:

  • Oracle Instant Client
  • ruby-oci8 gem, version 2.0.2 or later

If you are using Mac then you can use these instructions for installation.

First it is necessary to install the DataObjects Oracle driver – the DataObjects library is a unified interface to relational databases (like SQLite, MySQL, PostgreSQL and Oracle) that DataMapper uses to access these databases.

Start by validating that you have the latest version of RubyGems installed, and install the necessary additional gems:

gem update --system
gem install addressable -v 2.0

As I mentioned currently you need to install the latest version from GitHub (at first create and go to directory where you would like to store DataMapper sources):

git clone git://github.com/datamapper/extlib.git
cd extlib
git checkout -b next --track origin/next
rake install
cd ..
git clone git://github.com/datamapper/do.git
cd do
git checkout -b next --track origin/next
cd data_objects
rake install
cd ../do_oracle
rake compile
rake install
cd ../..

Now, if the DataObjects installation was successful, you can install DataMapper. UPDATE: the Oracle adapter is now in the "next" branch of DataMapper, so you need to install it from there:

git clone git://github.com/datamapper/dm-core.git
cd dm-core
git checkout -b next --track origin/next
rake install

Now start irb and test if you can connect to Oracle database (change database name, username and password according to your setup):

require "rubygems"
require "dm-core"
DataMapper.setup :default, "oracle://hr:hr@xe"

and try some basic DataMapper operations (I assume that you don’t have posts table in this schema):

class Post
  include DataMapper::Resource
  property :id,     Serial, :sequence => "posts_seq"
  property :title,  String
end
DataMapper.auto_migrate!
p = Post.create(:title=>"Title")
Post.get(p.id)
Post.auto_migrate_down!
JRuby

First of all, I assume that you have already installed the latest JRuby version (1.3.1 at the moment).

Then you need to place the Oracle JDBC driver ojdbc14.jar file in the JRUBY_HOME/lib directory (another option is just to put it somewhere in the PATH).

All other installation steps should be done in the same way – just use "jruby -S gem" instead of "gem" and "jruby -S rake" instead of "rake", and it should install the necessary gems for JRuby.

In addition, before installing the do_oracle gem you need to install the do_jdbc gem (which contains general JDBC driver functionality):

# after installation of data_objects gem
cd ../do_jdbc
jruby -S rake compile
jruby -S rake install
# continue with do_oracle installation
Other DataMapper gems

DataMapper is much more componentized than ActiveRecord. Here I described how to install just the main dm-core gem. You can see the list of other gems in DataMapper web site.

To install additional DataMapper gems you need to

git clone git://github.com/datamapper/dm-more.git
cd dm-more
git checkout -b next --track origin/next
cd dm-some-other-gem
rake install
Questions?

This was my first attempt to describe how to start to use DataMapper with Oracle. If you have any questions or something is not working for you then please write comments and I will try to answer and fix any issues in these instructions.

Categories: Development

Team Productivity Center Tutorial Published

Susan Duncan - Mon, 2009-07-20 05:57
There is now an Oracle By Example (OBE) tutorial available for TPC. It takes you through a number of topics including, on the admin side, setting up teams and integrating with repositories and, as a TPC user, querying repositories and creating relationships and tagging items across repositories.

It assumes that you have already installed or have access to TPC on the server. If not, here are instructions on doing that. The OBE includes images and examples using JIRA and MS Project Server, but for a tutorial on the Rally Software integration, explore their site.

And, as always, give me your feedback!

Too many Managers spoil the project

Krishanu Bose - Sun, 2009-07-19 07:37

All of us in our childhood must have heard the proverb "Too many cooks spoil the broth". If too many people try to take charge of a task, the end product might be ruined. This applies to any task, an implementation project included. An easy way to identify that a project is going awry is when you find many people following up to find the status of the job being done. In one of my earlier projects there was one developer writing a piece of code and there were four managers chasing the poor lady for updates and status. However, none of these managers was capable of, or inclined towards, helping out the person writing a complex piece of code. One really felt bad when a bug was detected in the code and none of the managers took ownership; they started blaming each other and the poor lady for writing the incorrect code.

So, the next obvious question would be whether the top management is blind to such mis-management, or more often over-management (sometimes micro-management)? My answer would be a definite 'Yes'. For the top management, what matters at the end of the day is billing, not the success or failure of projects. They are mostly driven by the short-term objective of keeping their bench strength low rather than the long-term objective of ensuring customer satisfaction by delivering a good solution for the client. Also, with too many managers, the internal dynamics of peer rivalry, with each person trying to showcase himself as the better manager in the eyes of top management and trying to out-do the others in a selfish manner, puts the project at grave risk.

Most of us would have seen or been part of such a project. Do share your thoughts on why we staff so many managers when there is no necessity for them, without defining clear boundaries and scope, and with overlapping job areas. And if you have changed such a scenario, do share your experience of how to turn the situation around and bring the project back on track.

Post Lunch session -OSGI

Venkat Viswa - Sat, 2009-07-18 03:56
Lunch was extremely good and I felt good after eating. I had only a little though, so as to keep in shape.

Now the topic is OSGi, by Sameera Jayasoma. WSO2 is an open source company based in Sri Lanka. Their main product is WSO2 Carbon, which is a fully componentized SOA platform based on OSGi. Then there was some PR about their company.


Modular Systems
Break a large system into smaller, understandable units called modules. The benefits of modular systems are reuse, abstraction, division of labour, and ease of repair.


A module should be self-contained, highly cohesive, and loosely coupled.

Java for building modular systems
--> provides class-level modularity (public, non-public methods)
--> we need something like external packages and internal packages in a jar

ClassLoader hierarchy: bootstrap classloader (rt.jar) --> extension classloader (ext.jar) --> application classloader (a.jar, b.jar)


Problems with JARs

--> a JAR is the unit of deployment in Java, and a typical Java app consists of a set of jar files

--> there is no runtime representation for a JAR

When Java loads a class, it tries the classpath entries one by one.

--> multiple versions of a jar file cannot be loaded simultaneously
--> jar files cannot declare their dependencies on other jars

Java lacks true modularity and dynamism, but it gives the flexibility to build such a system on top of it. This is what OSGi (the dynamic module system for Java) provides.

OSGi

--> A bundle is the unit of modularization in OSGi. An OSGi app is a collection of bundles.
A bundle is similar to a jar and contains additional metadata in the MANIFEST.MF file. In OSGi, the Java package is the unit of information hiding, unlike in a jar where the class is the unit of information hiding.

--> a bundle can share packages with, and hide packages from, other bundles.

Sample MANIFEST.MF file:

Bundle-ManifestVersion: 2
Bundle-Name:
Bundle-SymbolicName:
Bundle-Version: 1.0.0 (major.minor.micro)
Export-Package:
Import-Package:

The symbolic name and version together uniquely identify a bundle; the default value for the version is 1.0.0.
All packages in a bundle are hidden from other bundles by default; if we need to share a package, its name must be mentioned explicitly in Export-Package.

Bundles & Class Loaders
OSGi gives a separate classloader per bundle, thereby eliminating hierarchical classloading in Java.

The system bundle is a special bundle that represents the framework and registers system services.

Tooling for OSGi: Eclipse PDE is the best of the lot.

OSGi provides a command-line interface:

ss --> lists all the bundles in the system

b / bundles --> gives detailed information about the bundles, including export information

b 0 --> gives the system bundle information

Export-Package and Import-Package
Require-Bundle: imports all exported packages from another bundle, but this is discouraged.

Issues with Require-Bundle
Split packages, bundles changing over time, and chains of required bundles can occur.

Fragment bundles: attached to a host bundle by the framework; a fragment shares the parent (host) classloader.

Runtime class loading in OSGi -- order of loading:

1) delegate to the parent class loader (only for java.*)
2) imported packages
3) required bundles
4) the bundle's own internal classpath
5) fragment classpaths

Usage of fragment bundles: 1) provide translation files for different locales

OSGi specifications -- specify the framework and the standard services

OSGi Alliance

The current version is 4.1.

OSGi can be considered a layer on top of Java; it can also use JNI to talk to the OS. The functionality of the framework is divided into several layers:

1) module 2) lifecycle 3) ...

Lifecycle layer: manages the lifecycle of a bundle.
Bundle states --> installed, resolved, starting, active, stopping, uninstalled.

Register a service using registerService on the BundleContext.

Using a service --> find a ServiceReference, get the service object, cast it to the proper type, and use the service.

Events and Listeners

The framework fires ServiceEvents when a service is registered or unregistered.

Services are dynamic. Monitor services using service listeners, service trackers, Declarative Services, iPOJO, and Blueprint services.

Service Tracker

declarative service

SCR (service component runtime)

Powerful Reporting with BIRT

Venkat Viswa - Sat, 2009-07-18 00:18

This is the session that I hope to get the maximum benefit from, especially since we have started using BIRT for our projects. The session is by Krishna Venktraman, a director at Actuate.



Background

--> Most applications have some kind of data visualization need, yet real-world applications often don't consider reporting when the application is designed. BIRT provides a framework that manages all the reporting requirements.

Traditional approach:

Buy closed-source commercial products or build a custom-developed solution.
With open-source-based products things become much easier.

Emergence of the BIRT Project
BIRT was initiated in 2004 as a top-level Eclipse project. IBM, Innovent, and Actuate are the key members.
The focus of BIRT was to make report development easy. It is open and standards-based and provides rich programmatic control. It offers simplicity as well as the power to create complex reports. BIRT supports the concept of reporting libraries, which promotes reuse and reduces changes.

The main audience for BIRT is report developers, advanced report developers who use scripting, runtime integration developers who use the BIRT viewer and engine APIs, report design integrators who use the design engine APIs, extension developers who develop extension points, and finally core developers who work on Eclipse development itself.

There were five major releases since the project launch, with 1.0, 2.0, 2.1, 2.2, 2.3, and 2.5 as the versions. It was built from the ground up and a lot of community feedback was taken into account.

Key capabilities
--> Simple to complex layouts
--> Comprehensive data access
--> Output formats
--> Reuse and developer productivity
--> Interactivity and linking
--> Multiple usage and productivity aids

Some key features added in 2.x versions
--> Ability to join data sets
--> Enhanced chart activity
--> Multiple master page support
--> Dynamic crosstab support
--> CSS files can be linked
--> Emitters for XLS, Word, PPT and PostScript
--> Web services can act as a data source
--> JavaScript/Java debugger

BIRT design gallery: some of the displays look really good.

High-level BIRT architecture

Two key components
1) Report Design Engine
2) Report Runtime Engine

At the end of the design process we get a .rptdesign file. The report engine then reads the file, interprets it, fetches the data and goes through the generation process, producing an .rptdocument. The key services of the report engine are generation services, data services, the charting engine and presentation services.

BIRT Exchange is a community site for posting all BIRT-related issues.

Key API
a) Design Engine API b) Report Engine API c) Chart Engine API
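
From what I understand of the Report Engine API, running a finished design programmatically looks roughly like this. This is a sketch of my own, not from the session; the file names are invented and the exact class names are worth checking against the BIRT documentation.

    import org.eclipse.birt.core.framework.Platform;
    import org.eclipse.birt.report.engine.api.EngineConfig;
    import org.eclipse.birt.report.engine.api.HTMLRenderOption;
    import org.eclipse.birt.report.engine.api.IReportEngine;
    import org.eclipse.birt.report.engine.api.IReportEngineFactory;
    import org.eclipse.birt.report.engine.api.IReportRunnable;
    import org.eclipse.birt.report.engine.api.IRunAndRenderTask;

    public class RunReport {
        public static void main(String[] args) throws Exception {
            EngineConfig config = new EngineConfig();
            Platform.startup(config);
            IReportEngineFactory factory = (IReportEngineFactory) Platform
                    .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
            IReportEngine engine = factory.createReportEngine(config);

            // Open the .rptdesign produced by the designer and run + render it in one task
            IReportRunnable design = engine.openReportDesign("sales.rptdesign");
            IRunAndRenderTask task = engine.createRunAndRenderTask(design);

            HTMLRenderOption options = new HTMLRenderOption();
            options.setOutputFormat("html");
            options.setOutputFileName("sales.html");
            task.setRenderOption(options);

            task.run();
            task.close();
            engine.destroy();
            Platform.shutdown();
        }
    }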

Extension points
--> data source extensibility
--> custom business logic extensibility using scripting and access to Java code
--> Visualization extensibility
--> Rendering content for output (emitters)

Deployment options

--> Commercial Report Server
--> J2EE Application server
--> Java application

Actuate provides the Actuate BIRT Report Designer, BIRT Report Studio, BIRT Viewer, BIRT Interactive Viewer, deployment kits, iServer Express and iServer Enterprise.

Now for the actual report designs...

Just had an overview of the BIRT tool. Going through the series of demos: a basic report, now basic charts, then a crosstab/matrix report.

Bookmarks and hyperlinks

Click the element and then set a bookmark from its properties. Now go to the place where you want to place the hyperlink and link it to the bookmark.

Filters
Limit what to display. You can filter at the data set level or at the table level.

Sub Reports

Main Report --> Outer table
Sub Report --> nested table
Pass data value from outer table to nested table

In BIRT we need to nest tables in order to create sub reports.
Use data set parameter binding on the child data set to get the data from the parent data set.

BIRT Scripting

Mozilla Rhino script engine is embedded in BIRT.
Scripting = Expressions + Events
It uses server-side scripting; all the export options get the same output.


Event Handling
Initialization: report-level events (initialize, beforeFactory) --> data source open (beforeOpen, open) --> data set events (beforeOpen, open, fetch) --> generation

Generation phase: report level, data source level, element level
Render phase: report level, element level

Powerful logic can be implemented using scripting.
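
The scripting is typically JavaScript inside the design, but event handlers can also be written in Java. A rough sketch of my own, using the adapter class and method names as I remember them from the BIRT scripting API (an assumption worth verifying); the "region" parameter name is invented.

    import org.eclipse.birt.report.engine.api.script.IReportContext;
    import org.eclipse.birt.report.engine.api.script.eventadapter.DataSetEventAdapter;
    import org.eclipse.birt.report.engine.api.script.instance.IDataSetInstance;

    public class RegionLoggingHandler extends DataSetEventAdapter {

        @Override
        public void beforeOpen(IDataSetInstance dataSet, IReportContext reportContext) {
            // Inspect a report parameter before the data set opens; real logic
            // (for example rewriting the query text) would go here.
            Object region = reportContext.getParameterValue("region");   // hypothetical parameter
            System.out.println("About to open data set for region: " + region);
        }
    }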

Report Libraries
Just use 'Export to Library' by right-clicking on the rptdesign file, then use the library via 'Use Library'. It carries all the data sources, data sets and report items.
A library is a container of reporting items.
Can do local overrides on things imported from libraries.

Templates
File --> New Template. Templates serve as starting points for speedy report development.
Display a preview image, then register the template with the new wizard.

Last piece was a demo on how to deploy the reports.

One big disappointment was that I couldn't get any idea of how to integrate the report engine and BIRT viewer with custom applications.

On the whole it was a good session.

Keynote II

Venkat Viswa - Fri, 2009-07-17 23:53


Enhancing developer productivity using RAD for WebSphere


--> Interesting that the two main competitors give their keynotes one after the other

IBM Rational Architecture and Construction
--> for solution, software and enterprise architects: Rational Data Architect, Rational Software Modeler
--> for architects and designers who code in Java, C++, XML --> Rational Software Architect Standard Edition
--> Rational Application Developer --> for developers


IBM Rational Application Developer for WebSphere

--> accelerates development
--> can even do a traceability matrix
--> support for JPA and code generation

Comprehensive JEE5 tools

Unit testing is provided out of the box
Visually lay out JPA and EJB 3.0 classes
Convert UML to WSDL using the RSA product
Provides excellent AJAX support

Enhanced support for Web 2.0
Declare POJOs and expose them as REST services; call REST services from JSPs.
JavaScript editor and debugger

JSF support
Visual development of JSF pages
Data abstraction objects for easy data connectivity - SDO
Automatic code generation for data validation and formatting
Integration of third-party JSF libraries

Portal development support is also excellent.

One of the key features is automated performance and memory testing. This is built on top of Eclipse TPTP.

Automates application testing using WebSphere Application Server 6.0, 6.1 and 7.0

IBM's strategy is to deliver high-quality solutions by moving towards a flexible architecture, an automation layer, and reduced onboarding.

Jazz platform: meant for the open community.

Email id of the presenter: bhrengen@in.ibm.com

Keynote address for the day

Venkat Viswa - Fri, 2009-07-17 23:52

First, again a PR from SaltMarch :).


The topic is "Plug-in to rapid development on WebLogic with Eclipse" by Dhiraj Bandari.

Dhiraj Bandari is a sales consultant from Oracle who moved over from BEA with the acquisition.

Oracle Enterprise Pack for Eclipse (OEPE) is a plugin that is really useful for WebLogic application server integration. It provides the following features:

--> WebLogic server (start, stop)
--> Web services
--> Spring
--> JSF + Facelets
--> DB tools
--> ORM workbench



ORM workbench

--> creates the entity classes and helps run all DB functionality
--> supports EclipseLink, Kodo, OpenJPA
--> has a database schema browser similar to TOAD

--> provides Spring support


Tools for JAX-WS web services
--> New facets for WebLogic web service development

WebLogic Server Tools
--> Run/deploy/debug locally or remotely
--> FastSwap, shared libraries, deployment plans
--> Migration support

WebLogic deployment descriptor editors


Oracle's strategy is to stop development on WebLogic Workshop. They will develop and enhance only JDeveloper and the Enterprise Pack for Eclipse.


FastSwap Feature -- for iterative development

Traditional JEE Development cycle is Edit --> Build --> Deploy --> Test

Using modern IDEs it becomes Edit --> Deploy --> Test

FastSwap's goal is to eliminate the deploy step: Edit --> Test

FastSwap detects changes to class files and redefines the changed classes. It is non-invasive and for development mode only.

Demo on FastSwap operation

How to enable the FastSwap feature --> go to the WebLogic application deployment descriptor and enable FastSwap. Then instant changes to EJB classes and web classes can be seen.

Day 2 live blogging - EclipseSummit India

Venkat Viswa - Fri, 2009-07-17 23:51
Participants are quietly settling in. Attendance has dwindled considerably compared to yesterday. There was an unexpected drizzle in Bangalore, causing me to get partly wet and making me feel miserable in an AC room.

The wireless Internet connection does not seem to work as expected :(. Hope to post all these one by one when I am back on the network.


Expectations for today
Today I am eagerly looking forward to attending the BIRT workshop in the morning, followed by a sumptuous lunch and then another workshop on OSGi.

Writing Non-ASCII Content into MQ

Ramkumar Menon - Fri, 2009-07-17 11:06

I had Arabic characters coming in from a partner web service that I needed to write out to an MQ.
The default version was not writing the data out into the MQ as expected - the Arabic data was being written out as a bunch of unreadable characters.
I then followed the steps mentioned in the document http://download.oracle.com/docs/cd/E12524_01/relnotes.1013/e12523/adapters.htm#CHDDCAGA.
That did the trick!

JPA 2.0 New Features

Venkat Viswa - Fri, 2009-07-17 04:30
JPA 2.0 is releasing in fall 2009 (JSR 317).

Goal

--> Fill in ORM mapping gaps
--> Make object modeling more flexible
--> Offer a simple cache control abstraction
--> Allow advanced locking settings
--> JPQL enhancements

More standardized properties
--> some properties (like the JDBC driver and URL) are needed by every provider, so they are now standardized

Persistence unit properties like javax.persistence.jdbc.driver, javax.persistence.jdbc.url
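
For illustration, a sketch of bootstrapping a persistence unit with the standardized JPA 2.0 property names; the driver/URL values and the unit name "demoPU" are invented.

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class Bootstrap {
        public static void main(String[] args) {
            Map<String, String> props = new HashMap<String, String>();
            // Standardized JPA 2.0 property names, portable across providers
            props.put("javax.persistence.jdbc.driver", "org.h2.Driver");   // invented example values
            props.put("javax.persistence.jdbc.url", "jdbc:h2:mem:demo");
            props.put("javax.persistence.jdbc.user", "sa");
            props.put("javax.persistence.jdbc.password", "");

            EntityManagerFactory emf = Persistence.createEntityManagerFactory("demoPU", props);
            emf.close();
        }
    }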

JPA 2.0 supports join table as well as foreign key relationships.
Collections of embeddables or basic values are supported.

This is made possible using the @ElementCollection annotation (see the combined mapping sketch a few notes below).

Ordered lists
Order is maintained, even if you move things around, by using an index column.

More map flexibility
Map keys and values can be: basic objects, embeddables, entities

Enhanced embeddable support
Embeddables can be nested and can have relationships
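
Pulling the last few points together (element collections, ordered lists, maps, embeddables), a minimal mapping sketch of my own; the Customer and Address names are invented.

    import java.util.List;
    import java.util.Map;
    import javax.persistence.ElementCollection;
    import javax.persistence.Embeddable;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.MapKeyColumn;
    import javax.persistence.OrderColumn;

    @Embeddable
    class Address {
        String street;
        String city;
    }

    @Entity
    class Customer {
        @Id
        Long id;

        String name;

        // JPA 2.0: collection of basic values, order preserved via an index column
        @ElementCollection
        @OrderColumn(name = "POSITION")
        List<String> nicknames;

        // JPA 2.0: map with basic keys and embeddable values
        @ElementCollection
        @MapKeyColumn(name = "ADDRESS_TYPE")
        Map<String, Address> addresses;
    }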

Access Type options
--> mix access modes in a hierarchy (field or property)
@Access annotation

Derived Identities
In JPA 1.0 a relationship field cannot be part of the id.

JPA 2.0: @Id + @OneToOne
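
A short sketch of a derived identity, with invented entity names:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.OneToOne;

    @Entity
    class Employee {
        @Id
        Long empId;
    }

    @Entity
    class EmployeeProfile {
        // JPA 2.0: the relationship itself carries @Id, so the profile's
        // primary key is derived from the Employee's primary key
        @Id
        @OneToOne
        Employee employee;
    }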

Second Level Cache
API for operating on the entity cache, accessible from the EntityManagerFactory
Supports only very basic cache operations, which can be extended by vendors

@Cacheable annotation on entity (default is true)

There is also a property named shared-cache-mode to denote, per persistence unit, what to cache:
-- no entities
-- only cacheable entities
-- all entities

The cacheRetrieveMode and cacheStoreMode properties control, per EntityManager method call, whether to read from / write to the cache.
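
A sketch of the cache pieces mentioned above; Product is an invented entity, and emf/em stand for an existing factory and manager.

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.Cache;
    import javax.persistence.CacheRetrieveMode;
    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Id;

    @Entity
    @Cacheable(true)   // opt this entity into the shared cache
    class Product {
        @Id
        Long id;
        String name;
    }

    class CacheDemo {
        void demo(EntityManagerFactory emf, EntityManager em) {
            // Basic operations on the shared entity cache
            Cache cache = emf.getCache();
            boolean inCache = cache.contains(Product.class, 42L);
            cache.evict(Product.class, 42L);   // drop one instance
            cache.evictAll();                  // clear everything

            // Per-call hint: bypass the cache for this read
            Map<String, Object> hints = new HashMap<String, Object>();
            hints.put("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
            Product p = em.find(Product.class, 42L, hints);
        }
    }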

Locking
1.0 allows only optimistic locking
2.0 provides optimistic locking by default
Pessimistic locking can be used per entity or query

Lock mode values introduced (see the sketch after the API additions):
OPTIMISTIC (= READ)
OPTIMISTIC_FORCE_INCREMENT (= WRITE)
PESSIMISTIC_READ
PESSIMISTIC_WRITE

API Additions
Lock mode parameter added to find, refresh
Properties parameter added to find, refresh, lock
Other useful additions:
void detach(Object entity)
unwrap
Query API additions:
getFirstResult
getHints
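
As an illustration of the lock modes and the new overloads, a small sketch of my own; Account is a hypothetical versioned entity defined elsewhere.

    import javax.persistence.EntityManager;
    import javax.persistence.LockModeType;

    class LockingDemo {
        void demo(EntityManager em) {
            // Pessimistic lock taken as part of the find (new overload in JPA 2.0)
            Account acct = em.find(Account.class, 42L, LockModeType.PESSIMISTIC_WRITE);

            // Or lock an already-managed entity, forcing a version increment on commit
            em.lock(acct, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

            // One of the "other useful additions": remove from the persistence context
            em.detach(acct);
        }
    }
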
Enhanced JP QL
Timestamp literals
Non-polymorphic queries
IN expression may include collection parameter --> IN (:param)
Ordered List indexing
CASE statement
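
A quick sketch of the IN-with-collection-parameter and CASE additions; the Order entity and its fields are invented.

    import java.util.Arrays;
    import java.util.List;
    import javax.persistence.EntityManager;

    class JpqlDemo {
        void demo(EntityManager em) {
            // Collection-valued parameter in an IN expression, plus a CASE expression
            List<String> statuses = Arrays.asList("NEW", "OPEN");
            List<?> rows = em.createQuery(
                    "SELECT o.id, "
                  + "       CASE WHEN o.total > 1000 THEN 'LARGE' ELSE 'SMALL' END "
                  + "FROM Order o "
                  + "WHERE o.status IN (:statuses)")
                .setParameter("statuses", statuses)
                .getResultList();
        }
    }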

Criteria API
Similar to EclipseLink Expressions and Hibernate Criteria
Criteria API - Canonical Metamodel
--> For every managed class, there is a metamodel class
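
A sketch of a typed criteria query against the canonical metamodel; Customer is the invented entity from the mapping sketch above, and Customer_ would be its generated metamodel class.

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.criteria.CriteriaBuilder;
    import javax.persistence.criteria.CriteriaQuery;
    import javax.persistence.criteria.Root;

    class CriteriaDemo {
        List<Customer> findByName(EntityManager em, String name) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<Customer> q = cb.createQuery(Customer.class);
            Root<Customer> c = q.from(Customer.class);

            // Customer_ is the canonical metamodel class generated for Customer,
            // giving compile-time checked attribute references
            q.select(c).where(cb.equal(c.get(Customer_.name), name));
            return em.createQuery(q).getResultList();
        }
    }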


Load State Detection
JPA can integrate with Bean Validation (JSR 303); Hibernate Validator is an implementation

JPA 2.0 will ship as part of the Java EE 6 release.

JOptimizer

Venkat Viswa - Fri, 2009-07-17 04:01
Back after a sumptuous lunch

Tool by Embarcadero Technologies: http://www.embarcadero.com/products/j_optimizer/


Available standalone as well as an Eclipse plugin

Uses
--> Detecting excessive object allocation

--> memory leaks

--> detecting bottlenecks

--> code coverage/code quality

--> thread debugging

--> Break down JEE requests

--> request analyzer

Remotely connect and find which layer is causing problems. You can go into any level of detail.

code coverage

How do you analyze and find threading issues?

--> Detect deadlocks and visually analyze them.

richard.davies@embarcadero.com
