Marcelo Ochoa

This place is a notepad for sharing experience, code and other related stuff to Oracle, Java and XML technologies.

Building an Oracle NoSQL cluster using Docker

Thu, 2015-09-10 09:43
Continuing with my previous post about using Docker in a development/testing environment, this time the case is how to build an Oracle NoSQL cluster on a single machine using Docker.
I assume that you already have Docker installed and running; there are plenty of tutorials about that, and in my case (Ubuntu) it is just a two-step install using apt-get :)
My starting point was using some ideas from another Docker project for building a Hadoop Cluster.
This project uses another great idea, Serf/Dnsmasq on Docker; the motivation extracted from its README.md file is:
This image aims to provide resolvable fully qualified domain names,
between dynamically created docker containers on ubuntu.
## The problem
By default **/etc/hosts** is readonly in docker containers. The usual
solution is to start a DNS server (probably as a docker container) and pass
a reference when starting docker instances: `docker run -dns `
So with this idea in mind I wrote this Docker file:

FROM java:openjdk-7-jdk
MAINTAINER marcelo.ochoa@gmail.com
RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -y dnsmasq unzip curl ant ant-contrib junit
# dnsmasq configuration
ADD dnsmasq.conf /etc/dnsmasq.conf
ADD resolv.dnsmasq.conf /etc/resolv.dnsmasq.conf
# install serfdom.io
RUN curl -Lo /tmp/serf.zip https://dl.bintray.com/mitchellh/serf/0.5.0_linux_amd64.zip
RUN curl -Lo /tmp/kv-ce-3.3.4.zip http://download.oracle.com/otn-pub/otn_software/nosql-database/kv-ce-3.3.4.zip
RUN unzip /tmp/serf.zip -d /bin
RUN unzip /tmp/kv-ce-3.3.4.zip -d /opt
RUN rm -f /tmp/serf.zip
RUN rm -f /tmp/kv-ce-3.3.4.zip
ENV SERF_CONFIG_DIR /etc/serf
# configure serf
ADD serf-config.json $SERF_CONFIG_DIR/serf-config.json
ADD event-router.sh $SERF_CONFIG_DIR/event-router.sh
RUN chmod +x  $SERF_CONFIG_DIR/event-router.sh
ADD handlers $SERF_CONFIG_DIR/handlers
ADD start-serf-agent.sh  $SERF_CONFIG_DIR/start-serf-agent.sh
RUN chmod +x  $SERF_CONFIG_DIR/start-serf-agent.sh
EXPOSE 7373 7946 5000 5001 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020
CMD /etc/serf/start-serf-agent.sh
The relevant lines are explained below:
  • FROM java:openjdk-7-jdk, this Docker base image already has Ubuntu and Java 7 installed, so only a few additions are required
  • RUN curl .. /0.5.0_linux_amd64.zip, this is the compiled version of the Serf implementation, ready to run on Ubuntu
  • RUN curl -Lo .. /kv-ce-3.3.4.zip, this is the community edition of Oracle NoSQL, a free download
  • CMD /etc/serf/start-serf-agent.sh, this is the script, modified from the original Docker/serf project, which configures Oracle NoSQL right after the container boots.
The last point requires a special explanation: there are three bash functions for starting, stopping and creating the bootconfig file for the NoSQL nodes; here are the relevant sections:
stop_database() {
        java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar stop -root $KVROOT
exit
}
start_database() {
nohup java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
}
create_bootconfig() {
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "m" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -admin 5001 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "s" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
}
The last function (create_bootconfig) works differently depending on whether the node is designated as master ($NODE_TYPE = "m") or slave ($NODE_TYPE = "s").
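A minimal illustrative fragment of how start-serf-agent.sh could tie the three functions together (an assumption, the real script is part of the downloadable zip mentioned below); it relies on the fact that makebootconfig writes config.xml under $KVROOT:

# illustrative fragment, not the literal start-serf-agent.sh
trap stop_database SIGTERM
# only build a bootconfig the first time, so an externally mapped KVROOT survives restarts
[[ ! -f $KVROOT/config.xml ]] && create_bootconfig
start_database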
I decided not to persist the NoSQL storage after the containers stop, but it is possible to map the directory where the NoSQL nodes keep their data to the host, as I showed in my previous post; with that configuration the NoSQL store is not re-created at every boot.
With the above explanations in place, we can build the Docker image using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker build -t "oracle-nosql/serf" .
The complete list of required files can be downloaded as a zip from this location.
Once the image is built we can start a cluster of 3 nodes by simply executing the script start-cluster.sh; this script creates a node named master.mycorp.com and two slaves, slave[1..2].mycorp.com. Here is the output:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster.sh
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
e4932053780227f2a99e167f6efb0b1eeb9fda93fba2aa9206c7a9f05bacc25c
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
d6d0387c6893263141d58efa80933065be23aa3c98651dc6358bf7d7688d32cf
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
4fc18aebf466ec67de18c72c22739337499b5a76830f86d90a6533ff3bb6e314
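For reference, start-cluster.sh is presumably the same as the persistent variant shown in the addendum below, just without the --volume mappings; a minimal sketch:

docker run -d -t --dns 127.0.0.1 -e NODE_TYPE=m --publish=5000:5000 --publish=5001:5001 -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h slave1.mycorp.com oracle-nosql/serf
docker run -d -t --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h slave2.mycorp.com oracle-nosql/serf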
You can check the status of the cluster by executing a serf command on the master node, for example:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master serf members
master.mycorp.com  172.17.0.71:7946  alive
slave1.mycorp.com  172.17.0.72:7946  alive
slave2.mycorp.com  172.17.0.73:7946  alive
At this point the 3 NoSQL nodes are up but still unconfigured; here is the output of the NoSQL ping command:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
SNA at hostname: master, registry port: 5000 is not registered.
No further information is available
Using the examples from the Oracle NoSQL documentation we can create a store using this plan (script.txt):
configure -name mystore
plan deploy-zone -name "Boston" -rf 3 -wait
plan deploy-sn -zn zn1 -host master.mycorp.com -port 5000 -wait
plan deploy-admin -sn sn1 -port 5001 -wait
pool create -name BostonPool
pool join -name BostonPool -sn sn1
plan deploy-sn -zn zn1 -host slave1.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn2
plan deploy-sn -zn zn1 -host slave2.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn3
topology create -name topo -pool BostonPool -partitions 300
plan deploy-topology -name topo -wait
show topology
To submit this plan to the NoSQL nodes there is a script named deploy-store.sh (sketched after the topology output below); here is the output:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
Store configured: mystore
Executed plan 1, waiting for completion...
Plan 1 ended successfully
Executed plan 2, waiting for completion...
Plan 2 ended successfully
Executed plan 3, waiting for completion...
Plan 3 ended successfully
Added Storage Node(s) [sn1] to pool BostonPool
Executed plan 4, waiting for completion...
Plan 4 ended successfully
Added Storage Node(s) [sn2] to pool BostonPool
Executed plan 5, waiting for completion...
Plan 5 ended successfully
Added Storage Node(s) [sn3] to pool BostonPool
Created: topo
Executed plan 6, waiting for completion...
Plan 6 ended successfully
store=mystore  numPartitions=300 sequence=308
  zn: id=zn1 name="Boston" repFactor=3 type=PRIMARY
  sn=[sn1] zn:[id=zn1 name="Boston"] master.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2] zn:[id=zn1 name="Boston"] slave1.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
  sn=[sn3] zn:[id=zn1 name="Boston"] slave2.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
  shard=[rg1] num partitions=300
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
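For reference, deploy-store.sh presumably just feeds script.txt to the Oracle NoSQL runadmin CLI on the master container; a minimal sketch, assuming the script is first copied into the container (and a Docker version that supports copying into a running container):

docker cp script.txt master:/tmp/script.txt
docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar runadmin -host master -port 5000 load -file /tmp/script.txt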
You can also access the NoSQL Admin page at the URL http://localhost:5001/ because the start-cluster.sh script publishes this port outside the master container.
Here is the screenshot:


The cluster is ready!! Have fun storing your data.

Addendum!!
Persistent NoSQL store: as I mentioned earlier in this post, if we map /var/kvroot to the host machine the NoSQL store will persist through multiple executions of the cluster; for example, create 3 directories:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot1
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot2
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot3
and create a new shell script that starts the cluster mapped to the above directories (start-cluster-persistent.sh):
docker run -d -t --volume=/tmp/kvroot1:/var/kvroot --publish=5000:5000 --publish=5001:5001 --dns 127.0.0.1 -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --volume=/tmp/kvroot2:/var/kvroot --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h slave1.mycorp.com oracle-nosql/serf
docker run -d -t --volume=/tmp/kvroot3:/var/kvroot --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h slave2.mycorp.com oracle-nosql/serf
We can start and deploy the store for the first time using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ls -ltr /tmp/kvroot1
total 8
-rw-r--r-- 1 root root  52 sep 10 20:19 security.policy
-rw-r--r-- 1 root root 781 sep 10 20:19 config.xml
...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:20:18 UTC   Version: 12.1.3.3.4
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:627 haPort:5011
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
As you can see the cluster is ready for storing data; now we will stop and start it again to verify that it is not necessary to redeploy the configuration:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./stop-cluster.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:34:15 UTC   Version: 12.1.3.3.4
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:2342 maxCatchupTimeSecs:-4
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5011 delayMillis:2342 catchupTimeSecs:-4
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,MASTER sequenceNumber:639 haPort:5010
And that's all; the last ping command shows that the store survived the stop/remove/start container cycle.


On docker, Ubuntu and Oracle RDBMS

Tue, 2015-08-25 20:17
I have had the Oracle RDBMS working on Ubuntu (12.04 and 14.04) for a long time, versions 10g, 11g and 12c, with some tweaks to get it running.
Apart from the effort to get it working, it sometimes requires Makefile modifications; these configurations are not supported by Oracle and for sure you couldn't report any bug.
To avoid this it is easy to get VirtualBox working and download a pre-built VM, but that requires a lot of hardware resources :(
Happily Docker comes into action; I followed this great post by Frits Hoogland for Installing Oracle Database in Docker.
First we need Docker running on Ubuntu; there are many guides about that, or you can simply install it from the apt-get repository, as in my case.
Following Frits's guide I changed the default repository destination to an external USB disk formatted with btrfs:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  7,7G  6,5G  55% /
none            4,0K     0  4,0K   0% /sys/fs/cgroup
udev            3,9G   12K  3,9G   1% /dev
tmpfs           789M  1,4M  788M   1% /run
none            5,0M     0  5,0M   0% /run/lock
none            3,9G  344M  3,6G   9% /run/shm
none            100M   56K  100M   1% /run/user
/dev/sda6       442G  384G   36G  92% /home
/dev/sdb        597G   18G  577G   3% /var/lib/docker
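To get that layout, the external disk just has to be mounted on /var/lib/docker before the Docker daemon starts, for example with an /etc/fstab entry like the following (the device name is illustrative, adjust to your disk):

# /etc/fstab entry, device name is illustrative
/dev/sdb  /var/lib/docker  btrfs  defaults  0  0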
I also created the directory for the Docker template, but with some changes; here are the files:
root@local:/# cd /var/lib/docker/dockerfiles/build-oracle-12102/
root@local:/var/lib/docker/dockerfiles/build-oracle-12102# ls -l
total 2625148
-rw-r--r-- 1 root root      10976 ago 23 10:07 db_install.dbt
-rw-r--r-- 1 root root      10931 ago 25 16:30 db_install-full.dbt
-rw-r--r-- 1 root root      10972 ago 25 16:30 db_install-simple.dbt
-rw-r--r-- 1 root root       1168 ago 25 11:09 Dockerfile
-rw-r--r-- 1 root root 1673544724 ago 22 20:36 linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 root root 1014530602 ago 22 20:36 linuxamd64_12102_database_2of2.zip
-rwxr-xr-x 1 root root       1729 ago 25 10:11 manage-oracle.sh
-rw-r--r-- 1 root root      24542 ago 24 20:42 responsefile_oracle12102.rsp
The content of Dockerfile is:
FROM    oraclelinux:6
MAINTAINER marcelo.ochoa@gmail.com
RUN groupadd -g 54321 oinstall
RUN groupadd -g 54322 dba
RUN useradd -m -g oinstall -G oinstall,dba -u 54321 oracle
RUN yum -y install oracle-rdbms-server-12cR1-preinstall perl wget unzip
RUN mkdir /u01
RUN chown oracle:oinstall /u01
USER    oracle
WORKDIR /home/oracle
COPY linuxamd64_12102_database_1of2.zip /home/oracle/
COPY linuxamd64_12102_database_2of2.zip /home/oracle/
COPY responsefile_oracle12102.rsp /home/oracle/
RUN unzip linuxamd64_12102_database_1of2.zip
RUN unzip linuxamd64_12102_database_2of2.zip
RUN rm linuxamd64_12102_database_1of2.zip linuxamd64_12102_database_2of2.zip
RUN /home/oracle/database/runInstaller -silent -force -waitforcompletion -responsefile /home/oracle/responsefile_oracle12102.rsp -ignoresysprereqs -ignoreprereq
USER    root
RUN /u01/app/oraInventory/orainstRoot.sh
RUN /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh -silent
RUN rm -rf /home/oracle/responsefile_oracle12102.rsp /home/oracle/database
USER    oracle
WORKDIR /home/oracle
RUN     mkdir -p /u01/app/oracle/data
COPY    manage-oracle.sh /home/oracle/
EXPOSE  1521
CMD sh -c /home/oracle/manage-oracle.sh
The highlighted lines differ from Frits's original post so as not to download everything from the web using wget; instead I downloaded the 12c binary distribution from the OTN web site and copied the two zip files into the directory where the Dockerfile resides. I also downloaded Frits's files responsefile_oracle12102.rsp, manage-oracle.sh and db_install.dbt.
The file responsible for creating/starting/stopping the DB was also modified to use db_install.dbt from the host machine; here is the modified version of manage-oracle.sh:
#!/bin/bash
PERSISTENT_DATA=/u01/app/oracle/data
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=$(hostname)
stop_database() {
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
shutdown abort
exit
EOF
exit
}
start_database() {
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
startup
exit
EOF
}
create_pfile() {
$ORACLE_HOME/bin/sqlplus -S / as sysdba << EOF
set echo off pages 0 lines 200 feed off head off sqlblanklines off trimspool on trimout on
spool $PERSISTENT_DATA/init_$(hostname).ora
select 'spfile="'||value||'"' from v\$parameter where name = 'spfile';
spool off
exit
EOF
}
trap stop_database SIGTERM
printf "LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$(hostname))(PORT=1521))(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521))))\n" > $ORACLE_HOME/network/admin/listener.ora
$ORACLE_HOME/bin/lsnrctl start
if [ ! -f ${PERSISTENT_DATA}/DATABASE_IS_SETUP ]; then
sed -i "s/{{ db_create_file_dest }}/\/u01\/app\/oracle\/data\/$(hostname)/" $PERSISTENT_DATA/db_install.dbt
sed -i "s/{{ oracle_base }}/\/u01\/app\/oracle/" $PERSISTENT_DATA/db_install.dbt
sed -i "s/{{ database_name }}/$(hostname)/" $PERSISTENT_DATA/db_install.dbt
$ORACLE_HOME/bin/dbca -silent -createdatabase -templatename $PERSISTENT_DATA/db_install.dbt -gdbname $(hostname) -sid $(hostname) -syspassword oracle -systempassword oracle -dbsnmppassword oracle
create_pfile
if [ $? -eq 0 ]; then
touch ${PERSISTENT_DATA}/DATABASE_IS_SETUP
fi
else
mkdir -p /u01/app/oracle/admin/$(hostname)/adump
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
startup pfile=$PERSISTENT_DATA/init_$(hostname).ora
exit
EOF
fi
tail -f /u01/app/oracle/diag/rdbms/$(hostname)/*/trace/alert_$(hostname).log &
wait
I am doing that to quickly create different databases with particular Oracle options enabled or not; for example the db_install.dbt file for OLS searching will have these options enabled:
option name="JSERVER" value="true"
option name="SPATIAL" value="true"
option name="IMEDIA" value="true"
option name="XDB_PROTOCOLS" value="true"
option name="ORACLE_TEXT" value="true"
.....
Creating the Docker image is similar to the original post:
# cd /var/lib/docker/dockerfiles/build-oracle-12102
# docker build -t "oracle-12102" .
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
oracle-12102        latest              24687eeab73c        8 hours ago         12.26 GB
oraclelinux         6                   cfc75fa9f295        3 weeks ago         156.2 MB
Finally, to create a full-featured Java-enabled 12c database, I created a directory with the following content:
# mkdir -p /var/lib/docker/db/ols
# cp /var/lib/docker/dockerfiles/build-oracle-12102/db_install-full.dbt  /var/lib/docker/db/ols/db_install.dbt
# chown -R 54321:54321 /var/lib/docker/db/ols
and executed:
# docker run --ipc=host --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true oracle-12102
25efb5d26aad31e7b06a8e2707af7c25943e2e42ec5c432dc9fa55f0da0bdaef
The container starts and the database creation works perfectly; here is the output:
# docker logs -f ols
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 25-AUG-2015 16:35:18
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ols/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ols)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ols)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                25-AUG-2015 16:35:19
Uptime                    0 days 0 hr. 0 min. 1 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ols/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ols)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully
Creating and starting Oracle instance
1% complete
.....
25% complete
Adding Oracle JVM
32% complete
....
Adding Oracle Text
50% complete
....
Adding Oracle Multimedia
55% complete
....
Adding Oracle Spatial
69% complete
....
Adding Oracle Application Express
82% complete
87% complete
Completing Database Creation
Next step: installing Scotas OLS for testing, and enjoying a happy Docker/Oracle combination :)
Update about shared memory usage (31/08)
After installing Scotas OLS, which is a heavy Java application, I found these messages in the .trc file associated with the JIT compiler slave process:
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "1073741824"
joez: Failed loading machine code: Unable to allocate code space
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted
After checking that there was no problem with available memory I sent an email to my friend Kuassi to get some clarification from the JVM dev team, and they sent me a quick test that pointed to the solution: the problem is the mount option of the tmpfs file system. By default Docker does:
# docker run --privileged=true -i -t oraclelinux:6 /bin/bash
[root@d1ac66be54fb /]# cat /proc/mounts|grep shm
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
Note the noexec flag!! To solve that it is necessary to start the container with the --privileged=true flag and execute as root:
[root@d1ac66be54fb /]# mount -o remount,exec /dev/shm
warning: can't open /etc/fstab: No such file or directory
[root@d1ac66be54fb /]# cat /proc/mounts|grep shm
shm /dev/shm tmpfs rw,nosuid,nodev,relatime,size=65536k 0 0
And that's all; the message "..unable to allocate 4096 bytes.." disappears and the Oracle internal Java Virtual Machine works perfectly.
The remaining task is to modify the Dockerfile and manage-oracle.sh scripts to run as root and to execute the remount operation before starting the Oracle RDBMS.
The new Dockerfile basically changes the last lines to start manage-oracle.sh as root:
USER root
RUN /u01/app/oraInventory/orainstRoot.sh
RUN /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh -silent
RUN rm -rf /home/oracle/responsefile_oracle12102.rsp /home/oracle/database
WORKDIR /home/oracle
RUN mkdir -p /u01/app/oracle/data
RUN chown oracle:oinstall /u01/app/oracle/data
COPY manage-oracle.sh /home/oracle/
EXPOSE 1521
CMD [ "sh" , "-c" ,  "/home/oracle/manage-oracle.sh" ]
and manage-oracle.sh executes all Oracle-related scripts using su - oracle -c calls, for example:
stop_database() {
su - oracle -c "$ORACLE_HOME/bin/sqlplus / as sysdba" << EOF
shutdown abort
exit
EOF
exit
}
Obviously there is a remount operation before starting the RDBMS:
mount -o remount,exec /dev/shm
su - oracle -c "$ORACLE_HOME/bin/lsnrctl start"
Remember to start your docker image using --privileged=true, for example:
docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102
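Since port 1521 is now published on the host, a quick connectivity check from outside the container could look like this (assuming the SYSTEM password 'oracle' set by dbca above and a service name equal to the container hostname):

sqlplus system/oracle@//localhost:1521/ols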
HTH, Marcelo.



Is Oracle 12c REST ready?

Sat, 2014-12-13 16:33
This post is a continuation of my previous post Is Oracle 11g REST Ready?, and the answer is yes.
Again, the availability of the embedded JVM in the Oracle RDBMS allows us to run a complete REST stack and application.
To show how to implement a simple Hello World REST application, this time I decided to use the Jersey REST stack.
With Oracle 12c we have two JDKs available (1.6 and 1.7), and to compile and run Jersey we have to change the default from 1.6 and switch the RDBMS to the 1.7 JDK; follow this guide to do that, but remember that in a CDB/PDB environment switching the JDK means changing the JDK compatibility on all PDBs. Here is another good post on that topic: DB 12c update java to version 7.
Once we have our RDBMS ready with JDK 1.7 we need Jersey compiled and ready to upload; here are my steps:
a.- Check JDK version:
    mochoa@localhost:~$ export JAVA_HOME=/usr/local/jdk1.7
    mochoa@localhost:~$ export PATH=$JAVA_HOME/bin:$PATH
    mochoa@localhost:~$ type java
    java is /usr/local/jdk1.7/bin/java
    mochoa@localhost:~$ java -version
    java version "1.7.0_55"
    Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
    b.- Check Maven:
    mochoa@localhost:~$ mvn -version
    Apache Maven 3.2.3 (33f8c3e1027c3ddde99d3cdebad2656a31e8fdf4; 2014-08-11T17:58:10-03:00)
    Maven home: /usr/local/apache-maven-3.2.3
    Java version: 1.7.0_55, vendor: Oracle Corporation
    Java home: /home/usr/local/jdk1.7.0_55/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "linux", version: "3.13.0-40-generic", arch: "amd64", family: "unix"
    c.- Download and build Jersey using this guide: Building and Testing Jersey. After a successful build of Jersey all components will be located in our local Maven repository, which on Linux is at $HOME/.m2/repository
    d.- Add a new container implementation for the Oracle XMLDB servlet; the sources can be downloaded using this link. This new container implementation is a clone of jersey-servlet-core, but downgraded from Servlet 2.3 to the 2.2 API implemented by XMLDB. Here are the steps:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd containers/
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers$ tar xvfz /tmp/xdb-servlet.tar.gz
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers$ cd xdb-servlet/
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers/xdb-servlet$ mvn -Dmaven.test.skip=true clean install
    [INFO] Scanning for projects...
    [INFO]                                                                        
    [INFO] ------------------------------------------------------------------------
    [INFO] Building jersey-container-servlet-xdb 2.14-SNAPSHOT
    .... lot of stuff here ....
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 18.146 s
    [INFO] Finished at: 2014-12-13T17:07:21-03:00
    [INFO] Final Memory: 29M/351M
    [INFO] ------------------------------------------------------------------------
    e.- Create a new user in the RDBMS; this user will contain the whole Jersey stack:
    SQL> select tablespace_name from dba_tablespaces;
    TABLESPACE_NAME
    ------------------------------
    SYSTEM
    SYSAUX
    TEMP
    USERS
    EXAMPLE
    SQL> create user jersey identified by jersey
         default tablespace users
         temporary tablespace temp
         quota unlimited on users;
    User created.
    SQL> grant connect,resource,create public synonym to jersey;
    Grant succeeded.
    f.- Upload all libraries to the RDBMS; the list of libraries built from the Jersey sources, plus their dependencies, is:
    1. javax/servlet/servlet-api/2.2/servlet-api-2.2.jar (the version installed in the DB)
    2. javax/persistence/persistence-api/1.0/persistence-api-1.0.jar
    3. org/glassfish/hk2/external/javax.inject/2.4.0-b06/javax.inject-2.4.0-b06.jar
    4. org/glassfish/hk2/hk2-utils/2.4.0-b06/hk2-utils-2.4.0-b06.jar
    5. org/osgi/org.osgi.core/4.2.0/org.osgi.core-4.2.0.jar
    6. org/glassfish/hk2/osgi-resource-locator/1.0.1/osgi-resource-locator-1.0.1.jar
    7. org/glassfish/hk2/hk2-api/2.4.0-b06/hk2-api-2.4.0-b06.jar
    8. javax/ws/rs/javax.ws.rs-api/2.0.1/javax.ws.rs-api-2.0.1.jar
    9. org/glassfish/jersey/bundles/repackaged/jersey-guava/2.14-SNAPSHOT/jersey-guava-2.14-SNAPSHOT.jar
    10. javax/annotation/javax.annotation-api/1.2/javax.annotation-api-1.2.jar
    11. org/glassfish/jersey/core/jersey-common/2.14-SNAPSHOT/jersey-common-2.14-SNAPSHOT.jar
    12. org/glassfish/jersey/core/jersey-client/2.14-SNAPSHOT/jersey-client-2.14-SNAPSHOT.jar
    13. javax/validation/validation-api/1.1.0.Final/validation-api-1.1.0.Final.jar
    14. javassist/javassist/3.12.1.GA/javassist-3.12.1.GA.jar
    15. org/glassfish/hk2/external/aopalliance-repackaged/2.4.0-b06/aopalliance-repackaged-2.4.0-b06.jar
    16. org/glassfish/hk2/hk2-locator/2.4.0-b06/hk2-locator-2.4.0-b06.jar
    17. org/glassfish/jersey/core/jersey-server/2.14-SNAPSHOT/jersey-server-2.14-SNAPSHOT.jar
    Library [1] should never be uploaded into the RDBMS because it is part of the XMLDB implementation, so here are the steps to upload libraries [2]-[17]:

    $ cd $HOME/.m2/repository
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[2]
    Classes Loaded: 91
    Resources Loaded: 2
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 91
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[3]
    Classes Loaded: 6
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 6
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[4]
    Classes Loaded: 60
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 60
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[5]
    Some errors, but no resolving problems found.
    loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[6]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 12
    Synonyms Created: 12
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[7]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 153
    Synonyms Created: 153
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[8]
    Classes Loaded: 125
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 125
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[9]
    Classes Loaded: 1594
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 1594
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[10]
       ...Some errors not allowed in PDB, other classes OK....
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[11]
    Classes Loaded: 0
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 490
    Synonyms Created: 490
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[12]
    Classes Loaded: 99
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 99
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[13]
    Classes Loaded: 106
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 106
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[14]
    Classes Loaded: 366
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 366
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[15]
    Classes Loaded: 26
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 26
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[16]
    Classes Loaded: 0
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 92
    Synonyms Created: 92
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[17]
    Classes Loaded: 0
    Resources Loaded: 16
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 652
    Synonyms Created: 652
    Errors: 0
    We can check the above loadjava commands by logging in as jersey to the target database and running the following queries (all must return no rows):
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/persistence/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/inject/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/annotation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/validation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/ws/rs/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'jersey/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'org/%';
    g.- Finally, upload our xdb-servlet container:
    loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl org/glassfish/jersey/containers/jersey-container-servlet-xdb/2.14-SNAPSHOT/jersey-container-servlet-xdb-2.14-SNAPSHOT.jar
    Classes Loaded: 39
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 39
    Errors: 0
    h.- At this point we have everything uploaded into the RDBMS; now we prepare XMLDB to run Java-implemented servlets.
    Enabling XMLDB HTTP access:
    SQL> EXEC DBMS_XDB.SETHTTPPORT(8080);
    PL/SQL procedure successfully completed.
    SQL> alter system register;
    System altered.
    i.- By default XMLDB is configured with digest authentication; to change that we can download the xdbconfig.xml file using ftp, update the authentication section as shown below, and upload it again using ftp (requires the SYS user):
    <authentication>
    <allow-mechanism>basic</allow-mechanism>
    </authentication>
    j.- Grants required for running the Jersey servlet; we are using the JERSEY user here, for other accounts similar grants are required, either directly or by creating a new role (recommended, see the sketch after the grants below):
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY','SYS:java.util.logging.LoggingPermission', 'control', '' );
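    If you prefer the role-based approach mentioned above, a minimal sketch (the role name is illustrative, and the same four permissions as above apply) could be:
    SQL> create role jersey_rest_role;
    SQL> exec dbms_java.grant_permission( 'JERSEY_REST_ROLE', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> -- repeat for the other three permissions listed above
    SQL> grant jersey_rest_role to scott;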
    k.- Upload a simple Hello World application from the examples directory:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd examples/helloworld
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ loadjava -r -v -u jersey/jersey@pdborcl target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class
    arguments: '-u' 'jersey/***@pdborcl' '-r' '-v' 'target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class'
    identical: org/glassfish/jersey/examples/helloworld/HelloWorldResource
    skipping : class org/glassfish/jersey/examples/helloworld/HelloWorldResource
    Classes Loaded: 0
    Resources Loaded: 0
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 1
    Synonyms Created: 0
    Errors: 0
    l.- Register the servlet in the XMLDB adapter (logged in as SYS):
    SQL> DECLARE
        configxml SYS.XMLType;
    begin
     dbms_xdb.deleteServletMapping('JerseyServlet');
     dbms_xdb.deleteServlet('JerseyServlet');
     dbms_xdb.addServlet(name=>'JerseyServlet',language=>'Java',class=>'org.glassfish.jersey.servlet.ServletContainer',dispname=>'Jersey Servlet',schema=>'jersey');
    SELECT INSERTCHILDXML(xdburitype('/xdbconfig.xml').getXML(),'/xdbconfig/sysconfig/protocolconfig/httpconfig/webappconfig/servletconfig/servlet-list/servlet[servlet-name="JerseyServlet"]','init-param',
    XMLType('<init-param><param-name>jersey.config.server.provider.classnames</param-name><param-value>org.glassfish.jersey.examples.helloworld.HelloWorldResource</param-value><description>Hello World Application</description></init-param>'),'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"') INTO configxml
    FROM DUAL;
     dbms_xdb.cfg_update(configxml);
     dbms_xdb.addServletSecRole(SERVNAME => 'JerseyServlet',ROLENAME => 'authenticatedUser',ROLELINK => 'authenticatedUser');
     dbms_xdb.addServletMapping('/jersey/*','JerseyServlet');
     commit;
    end;
    /
    m.- Finally, test the app:
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ curl --basic --user jersey:jersey http://localhost:8080/jersey/helloworld
    Hello World!!
    And that's all, happy 12c REST world.
    Notes on security:
    1. As you can see, when registering the servlet we added ROLENAME => 'authenticatedUser',ROLELINK => 'authenticatedUser'; this implies that an RDBMS user name and password are required to access the servlet. As in the example, we have to provide jersey/jersey, the owner of the Hello World app.
    2. The HTTP protocol sends the user name and password encoded as Base64 when using the basic authentication scheme; if we want to hide this information on the wire when using plain HTTP we have to move to HTTPS.
    3. If you install another Hello World application in a different schema, for example scott, it is necessary to also upload the class ServletContainer, for example using loadjava -u scott/tiger@pdborcl org/glassfish/jersey/servlet/ServletContainer.class, and obviously our new application class, finally registering the servlet with the tag changed to <servlet-schema>scott</servlet-schema>
    4. Servlets which run without authentication are registered using ROLENAME => 'PUBLIC',ROLELINK => 'PUBLIC', but this is NOT recommended; it requires unlocking the ANONYMOUS account and these grants:
    SQL> ALTER USER ANONYMOUS ACCOUNT UNLOCK;
    User altered.
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS','SYS:java.util.logging.LoggingPermission', 'control', '' );


    Upgrading system's library/classes on 12c CDB/PDB environments

    Mon, 2014-11-17 17:41
    Some days ago I found that the ODCI.jar included in 12c doesn't reflect the latest updates to the Oracle ODCI API.
    This API is used when writing new domain indexes such as Scotas OLS, pipelined tables and much other cool stuff.
    ODCI.jar includes several Java classes which are wrappers for Oracle object types such as ODCIArgDesc, among others; the jar included in the 11g/12c RDBMS seems to be outdated, maybe generated with a 10g version of the database. For example it doesn't include types such as ODCICompQueryInfo, which carries information about Composite Domain Indexes (filter by/order by pushed predicates).
    The content of ODCI.jar is a set of classes generated by the JPublisher tool and looks like:
    oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ jar tvf ODCI.jar
         0 Mon Jul 07 09:12:54 ART 2014 META-INF/
        71 Mon Jul 07 09:12:54 ART 2014 META-INF/MANIFEST.MF
      3501 Mon Jul 07 09:12:30 ART 2014 oracle/ODCI/ODCIArgDesc.class
      3339 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescList.class
      1725 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescRef.class
    ....
      2743 Mon Jul 07 09:12:52 ART 2014 oracle/ODCI/ODCIStatsOptions.class
      1770 Mon Jul 07 09:12:54 ART 2014 oracle/ODCI/ODCIStatsOptionsRef.class
    The complete list of classes does not reflect the list of object types that the latest 12c RDBMS has; this list is about 38 types, expanded later to more than 60 classes:
    SQL> select * from dba_types where type_name like 'ODCI%'
    SYS     ODCIARGDESC
    SYS     ODCIARGDESCLIST
    ....
    SYS     ODCIVARCHAR2LIST
    38 rows selected
    So there is a clear difference between the classes included in ODCI.jar and the actual list of object types included in the RDBMS.
    Obviously these classes can be re-generated using JPublisher, but I have to provide an input file with a mapping for the case-sensitive names typically used in Java.
    To quickly create a JPublisher input file I'll execute this anonymous PL/SQL block in JDeveloper, logged in as SYS at the CDB:
    set long 10000 lines 500 pages 50 timing on echo on
    set serveroutput on size 1000000
    begin
     for i in (select * from dba_types where type_name like 'ODCI%' order by type_name) loop
       if (i.typecode = 'COLLECTION') then
          dbms_output.put('SQL sys.'||i.type_name||' AS ');
          FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
             if (substr(j.text,1,4) = 'TYPE') then
                dbms_output.put(substr(j.text,6,length(j.name))||' TRANSLATE ');
             else
                dbms_output.put(upper(substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4))||' AS '||substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4));
             end if;
          end loop;
          dbms_output.put_line('');
       else
          dbms_output.put('SQL sys.'||i.type_name||' AS ');
          FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
             if (substr(j.text,1,4) = 'TYPE') then
                dbms_output.put(substr(j.text,6,length(j.name))||' TRANSLATE ');
             end if;
             if (substr(j.text,1,1) = ' ') then
                dbms_output.put(upper(substr(j.text,3,instr(j.text,' ',3)-3))||' AS '||substr(j.text,3,instr(j.text,' ',3)-3)||', ');
             end if;
          end loop;
          dbms_output.put_line('');
       end if;
     end loop;
    end;
    Finally, editing this file manually to remove the last comma, I get the ODCI.in mapping file for JPublisher.
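    A couple of generated lines look roughly like this (illustrative only, attribute lists abbreviated; the real file has one entry per ODCI type):
    SQL sys.ODCIARGDESC AS ODCIArgDesc TRANSLATE ARGTYPE AS ArgType, TABLENAME AS TableName, TABLESCHEMA AS TableSchema, COLNAME AS ColName, ...
    SQL sys.ODCIARGDESCLIST AS ODCIArgDescList TRANSLATE ODCIARGDESC AS ODCIArgDesc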
    With the above file it is possible to use an Ant task calling the JPublisher utility; the target looks roughly like this (the jpub element stands for whatever custom task or macro wraps JPublisher in the build):
        <target name="generateODCI"
                description="Generate a new ODCI.jar file with ODCI types wrappers using JPublisher">
            <jpub login="${dba.usr}/${dba.pwd}@${db.str}"
                  dir="tmp"
                  package="oracle.ODCI"
                  file="../db/ODCI.in"/>
            <jar destfile="ODCI.jar"
                 basedir="tmp"
                 includes="**/*.class"/>
        </target>
    By executing the above Ant task I'll have a new ODCI.jar with content like:
    oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ jar tvf ODCI.jar
         0 Sun Nov 16 21:07:50 ART 2014 META-INF/
       106 Sun Nov 16 21:07:48 ART 2014 META-INF/MANIFEST.MF
         0 Sat Nov 15 15:17:40 ART 2014 oracle/
         0 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/
    102696 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyData.class
      1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyDataRef.class
     17435 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyType.class
      1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyTypeRef.class
      3347 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDesc.class
      2814 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIArgDescList.class
      2033 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDescRef.class
    .....
      2083 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCITabFuncStatsRef.class
      2657 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIVarchar2List.class
    Well, now the new ODCI.jar is ready for uploading into the CDB; to simplify this task I'll put it directly in the same directory as the original one:
    oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ mv ODCI.jar ODCI.jar.orig
    oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ mv /tmp/ODCI.jar ./ODCI.jar
    NOTE: the next paragraphs are an example showing that this approach fails; see further below for the correct way.
    To upload this new file into the CDB logged as SYS I'll execute:
    SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
    SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');
    To check if it worked OK, I'll execute:
    SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
    DBMS_JAVA.LONGNAME(OBJECT_NAME)
    --------------------------------------------------------------------------------
    oracle/ODCI/ODCIArgDesc
    oracle/ODCI/ODCIArgDescList
    oracle/ODCI/ODCIArgDescRef
    ...
    oracle/ODCI/ODCITabFuncStats
    oracle/ODCI/ODCITabFuncStatsRef
    oracle/ODCI/ODCIVarchar2List
    63 rows selected.
    I assumed at this point that a new jar uploaded into the CDB root means that all PDBs will inherit the new implementation, as they do with a new binary/library file patched in ORACLE_HOME, but this is not how the class loading system works in a multitenant environment. To check that I'll re-execute the above query, but using the PDB$SEED container (the template used for new databases):
    SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
    SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
    ...
    28 rows selected.
    A similar result will be displayed in any other PDB running/mounted on that CDB; more than that, if I run Java code in some PDB this exception will be thrown:
    Exception in thread "Root Thread" java.lang.IncompatibleClassChangeError
     at oracle.jpub.runtime.MutableArray.getOracleArray(MutableArray.java)
     at oracle.jpub.runtime.MutableArray.getObjectArray(MutableArray.java)
     at oracle.jpub.runtime.MutableArray.getObjectArray(MutableArray.java)
     at oracle.ODCI.ODCIColInfoList.getArray(ODCIColInfoList.java)
     at com.scotas.solr.odci.SolrDomainIndex.ODCIIndexCreate(SolrDomainIndex.java:366)
    This is because the code was compiled with the latest API and the container has an older one.
    So I'll try to re-load the new ODCI.jar into PDB$SEED and my PDBs, using a similar approach as in the CDB, for example:
    SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
    SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');
    ERROR at line 1:
    ORA-65040: operation not allowed from within a pluggable database
    This is because PDBs are blocked from altering classes inherited from the CDB.
    As I mentioned earlier, the above approach is incorrect when dealing with multitenant environments.
    To fix that there is a Perl script named catcon.pl; it automatically takes care of loading into ROOT first, then PDB$SEED, then any/all open PDBs specified on the command line.
    In my case I'll execute:
    # $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -b initsoxx_output initsoxx.sql
    Before doing that it is necessary to open all PDBs (read write, or in restricted mode) or to specify which PDBs will be patched. Note that I used the initsoxx.sql script; this script is used by default during RDBMS installation to upload ODCI.jar.
    Now I'll check if all PDBs have consistent ODCI classes.
    SQL> ALTER SESSION SET CONTAINER = PDB$SEED;  
    Session altered.
    SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
    DBMS_JAVA.LONGNAME(OBJECT_NAME)
    --------------------------------------------------------------------------------
    oracle/ODCI/ODCITabFuncStatsRef
    oracle/ODCI/ODCICompQueryInfoRef
    ....
    oracle/ODCI/ODCIVarchar2List
    63 rows selected.
    SQL> ALTER SESSION SET CONTAINER = WIKI_PDB;
    Session altered.
    SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
    DBMS_JAVA.LONGNAME(OBJECT_NAME)
    --------------------------------------------------------------------------------
    oracle/ODCI/ODCIArgDesc
    ....
    63 rows selected.
    Finally all PDBs were patched with the new library.
    More information about developing Java within the RDBMS in multitenant environments is in this presentation, The impact of MultiTenant Architecture in the develop of Java within the RDBMS; for Spanish readers there is a video with audio on YouTube from my talk at OTN Tour 14 ArOUG:








    All you need is Log, All you need is Log, ..., Log is all you need.

    Thu, 2013-08-22 11:44
    Paraphrasing the Beatles' song, I am starting this post by saying that logging is one of the most important features that developers, sysadmins and DBAs need to successfully develop, deploy, test and tune Java applications inside the RDBMS.
    Inside the OJVM this feature is even more important because there is no visual feedback when you are running Java code.
    Starting with 10g I have been using this feature as explained in my post Getting logging entering, exiting and throwing work at Oracle JVM, which according to Google Analytics is one of the most visited pages, but most of the tweaks require uploading new classes and configuring them with certain knowledge.
    Starting with 12c, the logging feature is directly integrated into the OJVM, also for PDBs.
    This means that by simply loading a logging.properties file into a user's schema you can control the logging facilities for Java code running with that effective user (the Oracle JVM performs specific steps to configure logging options).
    For example, if I have a class loaded into SCOTT's schema and this class runs with the effective user SCOTT, to activate the logging level for that class and user you have to execute:
    loadjava -u scott/tiger@mydb -v javavm/lib/logging.properties
    If the logging.properties file includes something like:
    $ cat javavm/lib/logging.properties
    # Limit the message that are printed on the console to INFO and above.
    java.util.logging.ConsoleHandler.level = ALL
    java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
    # Scotas OLS
    com.scotas.level = INFO
    org.apache.level = INFO
    Remember that to use logging inside the OJVM the SCOTT user should have SYS:java.util.logging.LoggingPermission = control; to grant that, connect as SYS and execute:
    $ sqlplus "sys/pwd@mydb as sysdba"
    SQL> exec dbms_java.grant_permission( 'SCOTT', 'SYS:java.util.logging.LoggingPermission', 'control', '');
    SQL> commit;
    Your application can also control logging level properties at run time; the above configuration is static and defined when the OJVM starts executing Java code for a specific DB session.
    The example below is the Scotas OLS web page available for controlling logging properties at run time (the old logging configuration servlet of Apache Solr).
    As shown in the screenshot, the logging level for each hierarchy is defined with a default setting inherited from the logging.properties file, but we can change that value to any other value using the form.
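    Programmatically the same run-time control boils down to plain java.util.logging calls from the stored Java code; a minimal sketch (an illustrative helper, not the Scotas servlet itself) could be:
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LogControl {
        // change a logger hierarchy at run time, e.g. from a call spec invoked in the session
        public static void setLevel(String loggerName, String level) {
            Logger.getLogger(loggerName).setLevel(Level.parse(level));
        }
    }
    As noted above, the effective user needs the java.util.logging.LoggingPermission 'control' permission for this to work.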
    Finally, where do my logs go?
    Independently of whether you are running your Java application in a traditional RDBMS installation or in a new CDB/PDB configuration, log information will go to the .trc file associated with the session, located at:
    $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace
    The .trc file is named using the convention $ORACLE_SID_jnnn_$PID.trc, for example:
    $ cat orclc_j020_11761.trc
    Trace file /u01/app/oracle/diag/rdbms/orclc/orclc/trace/orclc_j020_11761.trc
    Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    ORACLE_HOME = /u01/app/oracle/product/12.1.0.1.0/dbhome_1
    System name: Linux
    Node name: localhost
    Release: 3.2.0-52-generic
    Version: #78-Ubuntu SMP Fri Jul 26 16:21:44 UTC 2013
    Machine: x86_64
    Instance name: orclc
    Redo thread mounted by this instance: 1
    Oracle process number: 7
    Unix process pid: 11761, image: oracle@pocho (J020)

    *** 2013-08-22 11:29:41.805
    *** SESSION ID:(387.581) 2013-08-22 11:29:41.805
    *** CLIENT ID:() 2013-08-22 11:29:41.805
    *** SERVICE NAME:(SYS$USERS) 2013-08-22 11:29:41.805
    *** MODULE NAME:(DBMS_SCHEDULER) 2013-08-22 11:29:41.805
    *** ACTION NAME:(SOLRSERVLETSERV#1) 2013-08-22 11:29:41.805
    *** CONTAINER ID:(3) 2013-08-22 11:29:41.805
    .....
    *** 2013-08-22 11:52:37.581
    INFO: Unset log level on 'root'.
    Aug 22, 2013 11:52:37 AM com.scotas.solr.servlet.LogLevelSelection doPost
    INFO: Set 'com.scotas' to INFO level.
    It is important to note that even if you have multiple PDBs, all log files will go to the trace directory of the CDB, because all PDBs re-use the process infrastructure of the CDB.
    I will post another blog entry about the impact of the process infrastructure of the multitenant environment on OJVM applications.

    An efficient way to do massive inserts with Oracle JDBC

    Wed, 2013-08-21 10:34
      I was exchanging emails with my friend Kuassi Mensah (@kmensah) to see how to use, from JDBC, what is known as array DML.
      To a DBA this is typically known in the form:
    INSERT INTO TEST_SOURCE_BIG_A
            SELECT a.line a_line, a.name a_name, a.owner a_owner, a.text a_text, a.type a_type
               FROM TEST_SOURCE_BIG a;
      If the two tables have the same structure, the RDBMS realizes that and puts together batches of between 140-170 rows each.
      This causes any associated index (such as Scotas OLS) to receive a call to the function:
    FUNCTION ODCIIndexInsert (
       ODCIIndexInfo ia,
       ridlist ODCIRidList,
       newvallist varray_of_column_type,
       ODCIEnv env)
    RETURN NUMBER
      where the ridlist argument receives an array with all the ROWIDs.
      The point is: how can I exploit that functionality from JDBC, or how can I do massive inserts efficiently from Java (very important if your app is inserting tweets or FB comments)?
      The tip is: the setExecuteBatch method of OraclePreparedStatement.
      Broadly speaking, the code should be of the form:
        PreparedStatement ps =
            conn.prepareStatement("insert into test_source_big_a values (?,?,?,?,?)");
        // Change the batch size for this statement to 500
        ((OraclePreparedStatement) ps).setExecuteBatch(500);
        for (int i = 0; i <= 500; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name-" + i);
            ps.setString(3, "owner-" + i);
            ps.setString(4, "this is a long test using sendBatch - statement " + i);
            ps.setString(5, "type-" + i);
            ps.executeUpdate(); // JDBC queues this for later execution
        }
        ((OraclePreparedStatement) ps).sendBatch(); // JDBC sends the queued requests
        conn.commit();
    Calling that piece of code generates 6 batches of rows:
           100 + (4 x 93) + 98
       which, from the point of view of the index implementation, is much more efficient because instead of enqueuing 500 messages in the AQ it only enqueues 6!!
       If you send more rows than the batch parameter (500), they are sent automatically without waiting for sendBatch or commit.
       A parameter Kuassi recommended I consider is the SDU (session data unit), which is defined at the SQL*Net level to avoid SQL*Net packet fragmentation.
      Note that this is not only much more efficient from the point of view of the inserted tables, but also with respect to network usage, since it generates far fewer round-trips between the application side and the RDBMS.
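    For reference, here is a self-contained sketch of the same technique; the connect string, credentials and table are placeholders, and the Oracle update-batching calls (setExecuteBatch/sendBatch) require the Oracle JDBC driver:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import oracle.jdbc.OraclePreparedStatement;

    public class MassiveInsert {
        public static void main(String[] args) throws Exception {
            // Placeholder connect string and credentials
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
            conn.setAutoCommit(false);
            PreparedStatement ps =
                conn.prepareStatement("insert into test_source_big_a values (?,?,?,?,?)");
            ((OraclePreparedStatement) ps).setExecuteBatch(500); // batch value for this statement
            for (int i = 0; i <= 500; i++) {
                ps.setInt(1, i);
                ps.setString(2, "name-" + i);
                ps.setString(3, "owner-" + i);
                ps.setString(4, "this is a long test using sendBatch - statement " + i);
                ps.setString(5, "type-" + i);
                ps.executeUpdate(); // queued by the driver, not sent yet
            }
            ((OraclePreparedStatement) ps).sendBatch(); // flush any pending rows
            conn.commit();
            ps.close();
            conn.close();
        }
    }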

    ElasticSearch Server

    Thu, 2013-05-16 06:59

    Continuing my previous post, second book that I was working for PacktPub as technical reviewer was "Elasticsearch Server".
    A very good book about this Lucene-based product, with many examples and concepts for exploiting the full potential of free-text search using ElasticSearch.
    As with the book described in the previous post (Apache Solr 4 Cookbook), this book was a perfect resource during the development of Scotas's Push Connector because it allowed me to integrate ElasticSearch and exploit its full potential.
    Overview

    • Learn the basics of ElasticSearch like data indexing, analysis, and dynamic mapping
    • Query and filter ElasticSearch for more accurate and precise search results
    • Learn how to monitor and manage ElasticSearch clusters and troubleshoot any problems that arise
    Table of Contents


    • Chapter 1: Getting Started with ElasticSearch Cluster
    • Chapter 2: Searching Your Data
    • Chapter 3: Extending Your Structure and Search
    • Chapter 4: Make Your Search Better
    • Chapter 5: Combining Indexing, Analysis, and Search
    • Chapter 6: Beyond Searching
    • Chapter 7: Administrating Your Cluster
    • Chapter 8: Dealing with Problems

    Authors
    Rafał Kuć
    Rafał Kuć is a born team leader and software developer. Currently working as a Consultant and a Software Engineer at Sematext Inc, where he concentrates on open source technologies such as Apache Lucene and Solr, ElasticSearch, and the Hadoop stack. He has more than 10 years of experience in various software branches, from banking software to e-commerce products. He is mainly focused on Java, but open to every tool and programming language that will make the achievement of his goal easier and faster. Rafał is also one of the founders of the solr.pl site, where he tries to share his knowledge and help people with their problems with Solr and Lucene. He is also a speaker at various conferences around the world such as Lucene Eurocon, Berlin Buzzwords, and ApacheCon. Rafał began his journey with Lucene in 2002 and it wasn't love at first sight. When he came back to Lucene later in 2003, he revised his thoughts about the framework and saw the potential in search technologies. Then Solr came and that was it. From then on, Rafał has concentrated on search technologies and data analysis. Right now Lucene, Solr, and ElasticSearch are his main points of interest. Rafał is also the author of Apache Solr 3.1 Cookbook and the update to it—Apache Solr 4 Cookbook—published by Packt Publishing.
    Marek Rogoziński
    Marek Rogoziński is a software architect and consultant with more than 10 years of experience. His specialization concerns solutions based on open source projects such as Solr and ElasticSearch. He is also the co-founder of the solr.pl site, publishing information and tutorials about the Solr and Lucene library. He currently holds the position of Chief Technology Officer at Smartupz, the vendor of the Discourse™ social collaboration software.
    Conclusion
    Many developers know Lucene but do not know ElasticSearch; for those who want to learn about it, this book is a great way to get into the product and its potential.




    Apache Solr 4 Cookbook

    Thu, 2013-05-16 06:34

    During my summer I had the chance to work for Packtpub as a technical reviewer of two great books.
    The first, "Apache Solr 4 Cookbook", is a very good book for entering the world of free-text search integrated into any development or portal, with real and practical examples using the latest version of Apache Solr.
    Overview

    • Learn how to make Apache Solr search faster, more complete, and comprehensively scalable
    • Solve performance, setup, configuration, analysis, and query problems in no time
    • Get to grips with, and master, the new exciting features of Apache Solr 4


    Table of Contents
    • Chapter 1: Apache Solr Configuration
    • Chapter 2: Indexing Your Data
    • Chapter 3: Analyzing Your Text Data
    • Chapter 4: Querying Solr
    • Chapter 5: Using the Faceting Mechanism
    • Chapter 6: Improving Solr Performance
    • Chapter 7: In the Cloud
    • Chapter 8: Using Additional Solr Functionalities
    • Chapter 9: Dealing with Problems
    • Appendix: Real-life Situations
    Author
    Rafał Kuć is a born team leader and software developer. Currently working as a Consultant and a Software Engineer at Sematext Inc, where he concentrates on open source technologies such as Apache Lucene and Solr, ElasticSearch, and Hadoop stack. He has more than 10 years of experience in various software branches, from banking software to e-commerce products. He is mainly focused on Java, but open to every tool and programming language that will make the achievement of his goal easier and faster. Rafał is also one of the founders of the solr.pl site, where he tries to share his knowledge and help people with their problems with Solr and Lucene. He is also a speaker for various conferences around the world such as Lucene Eurocon, Berlin Buzzwords, and ApacheCon. Rafał began his journey with Lucene in 2002 and it wasn't love at first sight. When he came back to Lucene later in 2003, he revised his thoughts about the framework and saw the potential in search technologies. Then Solr came and that was it. From then on, Rafał has concentrated on search technologies and data analysis. Right now Lucene, Solr, and ElasticSearch are his main points of interest. Rafał is also the author of Apache Solr 3.1 Cookbook and the update to it—Apache Solr 4 Cookbook—published by Packt Publishing.


    Conclusion
    If you are about to start a development project or enter the world of free-text search, this book is a very good investment of time and resources; its practical examples really help you acquire new concepts in a simple, practical way with minimal effort.




    Implementing Oracle parallel shared server process in Java inside the Database

    Mon, 2011-11-28 14:33
    Behind the implementation of the latest LDI open source project and the OLS products there is a functionality not well known by Oracle Java database developers; I call it the Parallel Shared Server process.
    The idea is to have an Oracle shared server process running during the lifetime of the instance, which means a process automatically started during database startup and stopped during database shutdown.
    So what functionality can this process implement? On LDI it is an RMI server, on OLS a lightweight HTTP server, but basically you can implement anything you need, for example getting information from another process and filling some table, gathering statistics, consuming web services, etc.
    Let's see how it works with an example.
    We will create a TEST user and some Java classes running a simple Hello World RMI server.
    SQL> conn / as sysdba
    SQL> create user test identified by test
      2  default tablespace users
      3  temporary tablespace temp
      4  quota unlimited on users;
    SQL> grant connect,resource,create any job to TEST;
    SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.net.SocketPermission', 'localhost:1024-', 'listen,resolve');
    SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.net.SocketPermission', 'localhost:1024-', 'accept, resolve');
    SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.net.SocketPermission', 'localhost:1024-', 'connect, resolve');
    SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.lang.RuntimePermission', 'setContextClassLoader', '' );
    The RMI interface and server implementation run in the TEST user:
    SQL> conn test/test
    SQL> create or replace and compile java source named "mytest.Hello" as
    package mytest;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    public interface Hello extends Remote {
        String sayHello() throws RemoteException;
        int nextCount() throws RemoteException;
    }
    /
    SQL> create or replace and compile java source named "mytest.HelloImpl" as
    package mytest;
    import java.rmi.Naming;
    import java.rmi.RemoteException;
    import java.rmi.RMISecurityManager;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.server.UnicastRemoteObject;
    public class HelloImpl extends UnicastRemoteObject implements Hello {
        static int counter = 0;
       
        public HelloImpl() throws RemoteException {
            super();
        }
        public String sayHello() {
            return "Hello World!";
        }
        public static void main(String[] args) {
            // Create and install a security manager
            if (System.getSecurityManager() == null) {
                System.setSecurityManager(new RMISecurityManager());
            }
            try {
                HelloImpl obj = new HelloImpl();
                LocateRegistry.createRegistry(1099);
                // Bind this object instance to the name "HelloServer"
                Naming.rebind("//localhost/HelloServer", obj);
                System.out.println("HelloServer bound in registry");
            } catch (Exception e) {
                System.out.println("HelloImpl err: " + e.getMessage());
                e.printStackTrace();
            }
        }
        public synchronized int nextCount() {
            return counter++;
        }
    }
    /
    SQL> create or replace procedure HelloServ(srvName IN VARCHAR2) as LANGUAGE JAVA NAME
            'mytest.HelloImpl.main(java.lang.String [])';
    /
    SQL> begin
      -- Start a Cron like process (DBMS_SCHEDULER)
      DBMS_SCHEDULER.CREATE_JOB(
       job_name          =>  'HelloServJob',
       job_type          =>  'PLSQL_BLOCK',
       job_action        =>  'begin
         HelloServ(''HelloServer'');
         exception when others then
            null;
         end;',
       start_date        =>  SYSDATE,
       enabled           => false,
       auto_drop         => false);
      DBMS_SCHEDULER.SET_ATTRIBUTE_NULL (
       name           =>   'HelloServJob',
       attribute      =>   'MAX_FAILURES');
    end;
    /
    commit;
    Now we can register two database instance trigger to automatically start and stop the job.
    SQL> conn / as sysdba
    SQL> CREATE OR REPLACE TRIGGER start_test_srv
      AFTER STARTUP ON DATABASE
    BEGIN
      -- Start a Cron like process (DBMS_SCHEDULER)
      DBMS_SCHEDULER.ENABLE('TEST.HelloServJob');
    END;
    /
    SQL> CREATE OR REPLACE TRIGGER stop_test_srv
      BEFORE SHUTDOWN ON DATABASE
    BEGIN
      -- Start a Cron like process (DBMS_SCHEDULER)
      DBMS_SCHEDULER.STOP_JOB('TEST.HelloServJob',force=>true);
    EXCEPTION WHEN OTHERS THEN
      null;
    END;
    /
    If we perform a shutdown/startup sequence the server will be up and running; we can also start the server manually by executing:
    SQL> conn / as sysdba
    SQL> exec DBMS_SCHEDULER.ENABLE('TEST.HelloServJob');
    SQL> commit;
    After doing that we can see at $ORACLE_BASE/diag/rdbms/orcl/orcl/trace a .trc file associated with the parallel shared server process, which is up and running:
    -bash-4.2$ cat orcl_j000_10411.trc
    Trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_10411.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /u01/app/oracle/product/11_2_0_2_0/dbhome_1
    System name:    Linux
    Node name:      localhost.localdomain
    Release:        2.6.38.7-server-1mnb2
    Version:        #1 SMP Sun May 22 22:59:25 UTC 2011
    Machine:        i686
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 25
    Unix process pid: 10411, image: oracle@localhost.localdomain (J000)

    *** 2011-11-28 18:05:41.091
    *** SESSION ID:(151.35) 2011-11-28 18:05:41.091
    *** CLIENT ID:() 2011-11-28 18:05:41.091
    *** SERVICE NAME:(SYS$USERS) 2011-11-28 18:05:41.091
    *** MODULE NAME:(DBMS_SCHEDULER) 2011-11-28 18:05:41.091
    *** ACTION NAME:(HELLOSERVJOB) 2011-11-28 18:05:41.091

    HelloServer bound in registry
    This process is listening on the default RMI port 1099, as we can see using:
    -bash-4.2$ netstat -anp|grep ora_j0
    (Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    tcp        0      0 :::19189                    :::*                        LISTEN      10411/ora_j000_orcl
    tcp        0      0 :::1099                     :::*                        LISTEN      10411/ora_j000_orcl 
    And that's all; we can connect using an RMI client from another client session, for example:
    SQL> create or replace and compile java source named "mytest.HelloClient" as
    package mytest;
    import java.rmi.Naming;
    import java.rmi.RemoteException;
    public class HelloClient {
        Hello obj = null;
        public HelloClient() {
            try {
                obj = (Hello)Naming.lookup("//localhost/HelloServer");
            } catch (Exception e) {
                System.out.println("HelloApplet exception: " + e.getMessage());
                e.printStackTrace();
            }
        }
        public String sayHello() throws RemoteException {
            return obj.sayHello();
        }
        public int nextCount() throws RemoteException {
            return obj.nextCount();
        }
        public static void main(String[] args) throws RemoteException {
            HelloClient helloClient = new HelloClient();
            System.out.println(helloClient.sayHello());
            System.out.println(helloClient.nextCount());
        }
    }
    /
    SQL> create or replace procedure HelloClient(srvName IN VARCHAR2) as LANGUAGE JAVA NAME
    'mytest.HelloClient.main(java.lang.String [])';
    /
    SQL> set define ?
    SQL> set serverout on
    SQL> exec dbms_java.set_output(32000);
    SQL> exec HelloClient('HelloServer');
    0
    SQL> exec HelloClient('HelloServer');
    1
    Note that the server is stateful, which means it preserves state across calls; if we exit from the above SQL*Plus session and connect again we will see:

    SQL> set define ?
    SQL> set serverout on
    SQL> exec dbms_java.set_output(32000);
    SQL> exec HelloClient('HelloServer');
    2
    SQL> exec HelloClient('HelloServer');
    3

    Concluding this post, I would like to remark that this parallel shared server process runs in RDBMS space and is not like starting an RMI server in the middle tier. The big difference is that all SQL access goes directly to the RDBMS structures in the SGA because it uses the internal JDBC driver.

    New release of Lucene Domain Index based on 3.4.0 code

    Fri, 2011-11-25 13:44
    This new release of Lucene Domain Index (LDI) has been in the new SVN for a long time, but due to a lot of work on the commercial version (Scotas) it never went public as a binary installation :(
    Several things happened during this time:

    • New web site (thanks a lot to Peter Wehner for the conversion from the Google docs)
    • New hosting at SF.net (now is separate SVN from the previous one CVS at DBPrism)

    The change log of this version is:


    • Use latest merge policy implementation TieredMergePolicy
    • Use total RAM reported by getJavaPoolSize() when setting MaxBufferedDocs
    • Better error reporting when an Analyzer is not found.
    • Replaced execute immediate with open-fetch-close functionality to avoid a core dump on 10g when double-checking for deleted rowids
    • Included a back-ported version of JUnit4 for the JDK 1.4 runtime used by 10g releases
    • Added a parallel updater process; when working in OnLine mode this process performs write operations on the LDI structure on behalf of the AQ process
    • Deletes no longer require a write-exclusive lock on the index storage; they are now enqueued just like inserts and updates
    • Updated source to Lucene 3.4.0 code, removed some deprecated API

    Download latest binary distribution at 3.4.0 directory of SF.net download area (tested with 10g/11gR2).
    The addition of a new parallel shared server process is the major change; it speeds up DML operations a lot. I'll write a new post on how this parallel shared server technique works.
    Please report any issue during the installation or bugs at the Support Area of the project.

    Long time no post

    Mon, 2011-09-26 08:41

    I found that my last post was on Sep. 14 2010, that's too bad.
    The reason for that is the startup called Scotas; I have worked on it since roughly December 2010, engineering the next level of the Oracle and Lucene project integration by adding the Solr stack.
    It's a natural evolution: Solr provides a lot of common functionality on top of Lucene that is required for an enterprise-level solution.
    But the integration required solving a set of implementation challenges; one of them is the implementation of a long-lived shared server process running Java which implements a new dedicated HTTP listener working as an Oracle slave process. I will explain this topic in another technical post.
    Other additions included in the set of products are the push technology, spread across the integration with Solr running externally, ElasticSearch, Cassandra and HBase.
    Behind this technology we extended the idea of the Oracle/Solr integration, the Oracle ODCI API, which enables NRT (Near Real Time) synchronization of the enterprise data with a NoSQL layer, for example.
    Near Real Time means that once you commit the changes on the RDBMS layer they are propagated automatically to the other side with practically zero delay, and for two-way replicators such as Solr PC and ElasticSearch deletions happen in real time, avoiding false positive hits on deleted rows.
    Near Real Time synchronization is declarative, requires no programming effort and allows multiple columns of one table, or multiple columns of many tables connected by a foreign key.
    Well, next week is OOW 2011. I'll be there and I hope you will be too; anybody coming to the event who is interested in this technology can meet me there, just drop me an email to mochoa-at-scotas.com

    New Oracle Lucene Domain Index release based on Lucene 3.0.2

    Tue, 2010-09-14 16:39
    Just a few words to announce a new release of Oracle Lucene Domain Index; this zip is valid for the 10g and 11g database versions (10g uses classes back-ported from 1.5 to 1.4).
    This release is compiled using the Lucene 3.0.2 version and incorporates a set of new features; here is the list:


    • Added a long awaited functionality, a parallel/shared/slave search process used during a start-fetch-close and CountHits function
    • Added lfreqterms ancillary operator returning the freq terms array of rows visited
    • Added lsimilarity ancillary operator returning a computed Levenshtein distance of the row visited
    • Added a ldidyoumean pipeline table function using DidYouMean.indexDictionary storage
    • Added test using SQLUnit

    The biggest addition is the Parallel-Shared-Slave search process; this architectural change was on my to-do list for a long time and I finally added it in this release :)
    The idea behind this is to have a new Oracle process started by the DBMS_SCHEDULER sub-system during the database startup process and stopped immediately before shutdown.
    This process is responsible for implementing the ODCI methods start-fetch-close/count-hit on behalf of the client process (the process associated with a specific user session), which connects to the shared-slave process using RMI.
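    To make the idea more concrete, a hypothetical remote contract between the client session and the shared-slave searcher could look like the sketch below; the interface and method names are purely illustrative and are not the actual Lucene Domain Index classes:
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical contract between the client session (ODCI callbacks) and the
    // shared-slave search process; names are illustrative, not the real LDI API.
    public interface SearchService extends Remote {
        // ODCIIndexStart: parse the query and open (or reuse) a cached searcher
        long startFetch(String indexName, String queryString) throws RemoteException;
        // ODCIIndexFetch: return the next batch of matching ROWIDs
        String[] fetchRowids(long cursorId, int batchSize) throws RemoteException;
        // ODCIIndexClose: release the cursor, keeping the cached query for reuse
        void close(long cursorId) throws RemoteException;
        // CountHits: total hits for the query without fetching ROWIDs
        int countHits(String indexName, String queryString) throws RemoteException;
    }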
    With this new architecture we have two principal benefits:

    • Reduce memory consumption
    • Increase Lucene Cache Hits

    Memory consumption is lower because the internal OJVM implementation is attached to a client session, so the Java space used by Lucene structures is isolated and independent from other concurrent sessions; now all the Lucene memory structures used during the index scan process are created in one shared process and not replicated.
    Also, if one session submits a Lucene search, the search is cached for subsequent queries; any subsequent query coming from the same client session, or any other session, that is associated with the same index and uses the same query string is a cache hit.
    I'll explain this new architecture in more detail in another post, also showing how many parallel processes can work together when using parallel indexing and searching.
    On the other hand next week I'll be at the Oracle OpenWorld 2010 in SFO presenting the session:

    Schedule: Tuesday: 09:30AM
    Session ID: S315660
    Title: Database Applications Lifecycle Management
    Event: JavaOne and Oracle Develop
    Stream(s): ORACLE DEVELOP
    Track(s): Database Development
    Abstract: Complex applications, such as Java running inside the database, require an application lifecycle management to develop and delivery good code. This session will cover some best practices, tools, and experience managing and delivering code for running inside the database, including tools for debugging, automatic test, packaging, deployment, and release management. Some of the tools presented will include Apache Maven, JUnit, log4j, Oracle JDeveloper, and others integrated into the Oracle Java Virtual Machine (JVM) development environment.

    See you there or at any of the planned networking events :)

    Near-shore development in Tandil-Argentina

    Tue, 2010-06-29 09:30
    This post is a bit off topic from Oracle, but not entirely.
    I remember reading the term near-shore development for the first time during OOW08, in an Information Week newsletter available for free at the OTN Lounge.
    Over the last two years I have seen development using this model increase in my city, Tandil, 400 km from Buenos Aires in Argentina.
    In particular, I am working as an external consultant with the company Temperies, which was founded and is growing in Tandil, unlike other companies which are part of the Technology Park and came from Buenos Aires.
    But why does this market keep growing in Tandil despite well-known economic changes?
    First, why Tandil?

    • It is a very nice small city (130,000 inhabitants) surrounded by hills, where the quality of life makes a difference in terms of the productivity of the development team; this is one of the biggest differences against other Argentine cities such as Buenos Aires, Cordoba or Rosario. Any worker can get to his workplace in less than five minutes walking.
    • It has a recognized university in Computer Science.
    • It is cheap to live here, so salaries are not so high.

    Second, why Argentina?
      - It is a country with a big difference in terms of the US$/Euro exchange rate compared with other countries like Chile, for example, and believe me, when a near-shore project is evaluated this point is very important because with the same money you can do the project several times over :)
      - Most of the computer science graduate and undergraduate students have a very good knowledge of English.
      - The time zone of Argentina is midway between Europe and the US West Coast, so projects can be managed perfectly well around team synchronization: if we work with San Francisco, we work during our afternoon; if we work with Germany, we work during our morning, and that's all.
    Finally, I want to remark that there are plenty of offerings in terms of software factories in Tandil, from small companies with 10 to 20 employees to big factories with more than one hundred employees, to fit every need.
    As I said at the beginning of this post, I am working with Temperies, a mid-range software factory which, in addition to the list of benefits given previously, has a strong commitment to Agile practices and Oracle, and wide experience with success stories in near-shoring projects around the world; and obviously the CIOs are good friends of mine :)
    Well, if anybody wants to go deeper in this area just drop me an email to marcelo.ochoa gmail.com, or see you at Oracle Open World in September.

    Dealing with JDK1.5 libraries on Oracle 10g

    Tue, 2010-06-08 16:04
    Modern libraries are compiled with JDK 1.5, and the question is how to deal with these libraries on an Oracle 10g OJVM.
    Some examples are the Lucene 3.x branch or Hadoop. The solution that I tested uses the Java Retrotranslator and some complementary libraries.
    I have tested this solution in the Lucene Domain Index 3.x branch with success.
    As you can see in the CVS there is a build.xml file which performs all the Retrotranslator steps. Here is a step-by-step explanation of the process:

    1. Load all the required libraries provided by the Retrotranslator project, which implement features not available in the JDK 1.4/1.3 runtime; this is done in the target load-retrotranslator-sys-code. This target loads many libraries into the SYS schema because they are immutable, or have a low probability of change (they change only if we upgrade the Retrotranslator version). All libraries are then compiled to assembler using the NCOMP utility, target ncomp-runtime-retrotranslator-sys-code.
    2. Then we can convert libraries compiled with JDK 1.5 (in this build.xml, the Lucene and Lucene Domain Index implementations) to a JDK 1.4 target runtime. This is done in the targets backport-code-lucene and backport-code-odi; the first target converts all Lucene libraries excluding JUnit and test code (these libraries require the JUnit and Retrotranslator jars as dependencies), and the second target converts the Lucene Domain Index jar, which depends on the Lucene core and Oracle's libs. The back-port operation generates a file named lucene-odi-all-${version}.jar with the Lucene and Lucene Domain Index code ready to run on a JDK 1.4 runtime.
    3. Once we have the code back-ported to a JDK 1.4 runtime we can upload it and NCOMP it into Oracle 10g; this is done in the targets load-lucene-odi-backported-code and ncomp-lucene-all.
    And that's all!! The code works fine on my Oracle 10.2 database on Linux :); finally users of 11g and 10g databases can deploy the Lucene Domain Index implementation using one distribution file.

    Auto complete functionality with latest Lucene Domain Index

    Sat, 2010-05-08 10:44
    A few days ago I uploaded two new releases of Oracle Lucene Domain Index, one based on the Lucene 2.9.2 core (10g, 11g) and another based on the 3.0.1 release (10g/11g).
    The question is: why does the 3.0.1 release have only one installation file?
    This is because the code base of the Lucene 3.x branch is only compatible with JDK 1.5, so to get the Lucene 3.x release working on 10g databases, which are based on JDK 1.4, I included a retro-translator; this library takes code compiled in 1.5 format and converts it to 1.4 format. I'll explain this process in more detail in another post.
    The important point is that I want to keep supporting Lucene Domain Index for Oracle 10g, because the installed base of this release is big even with the end of official support next July.
    On the other hand, this new release includes another great contribution from Pedro Pinheiro, an auto-complete pipeline table function. This reinforces the goal of Lucene Domain Index: with a few new classes and some PL/SQL wrappers you can extend LDI to your needs.
    Here is a simple example:
    I am creating and populating a simple table with an English-Spanish dictionary lookup:
    create table dicc (
       term varchar(256),
       def  varchar2(4000))
    /
    -- Populate dictionary with 10K terms and definitions
    @@dicc-values
    Then create a Lucene Domain Index for the auto-complete functionality:
    create index dicc_lidx on dicc(term) indextype is lucene.luceneindex
    parameters('ExtraCols:def;LogLevel:INFO;FormatCols:TERM(ANALYZED_WITH_POSITIONS_OFFSETS);PerFieldAnalyzer:TERM(org.apache.lucene.analysis.WhitespaceAnalyzer),DEF(org.apache.lucene.analysis.StopAnalyzer)');
    Note that the TERM column is analyzed storing term positions and offsets.
    With this index created we can query using the auto-complete pipeline table function as follows:
    SQL> select * from table(lautocomplete('DICC_LIDX','TERM','th',15)) t;

    TERM       DOCFREQ
    there            3
    theory           2
    thaw             2
    then             2
    therefore        2
    thence           1
    their            1
    thanks           1
    theft            1
    theatrical       1
    the              1
    theme            1
    that             1
    thermal          1
    thank            1
    15 rows selected.
    Elapsed: 00:00:00.01
    The first argument of this function is your index name, the second is the column used for auto-complete, the third is the string used for the lookup and the last is how many terms are returned. By default rows are returned ordered by docFreq descending. Here is another example:
    SQL> select * from table(lautocomplete('DICC_LIDX','TERM','spor',10)) t;
    TERM       DOCFREQ
    sport            3
    sportsman        1
    sporadic         1
    Elapsed: 00:00:00.02
    For the example table, which includes 10,102 rows, the execution time of the above examples is around 21 ms; not bad for a notebook.
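    The same pipeline table function can of course be consumed from Java through JDBC; below is a minimal sketch where the connect string and credentials are placeholders and the index/column names are the ones created above:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class AutoCompleteDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connect string and credentials
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
            PreparedStatement ps = conn.prepareStatement(
                    "select * from table(lautocomplete('DICC_LIDX','TERM',?,15))");
            ps.setString(1, "th"); // prefix typed so far by the user
            ResultSet rs = ps.executeQuery();
            while (rs.next())
                System.out.println(rs.getString(1) + " " + rs.getInt(2)); // term, docFreq
            rs.close();
            ps.close();
            conn.close();
        }
    }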
    Another new feature of this release is parallel indexing on RAM, which is enabled by default. Indexing on RAM means that when you are working in OnLine mode a batch of new rows to be added to the index is processed in parallel (ParallelDegree parameter); more information is in the Lucene Domain Index on-line documentation. If you have a server with a multi-core processor, or a RAC installation with sufficient RAM, this feature speeds up your indexing time by 40% by eliminating the BLOB access during a partial index creation.
    Well, the next post will be about how to deal with libraries compiled using JDK 1.5 on Oracle 10g databases. Stay tuned...

    New release of Lucene Domain Index based on Lucene 2.9.2

    Mon, 2010-03-29 09:12
    Here we are: the latest release of Lucene Domain Index is ready for download at the SF.net download area.
    As usual this is a new production release based on the latest production release of the Apache Lucene project, but we always include some new functionality :)
    This release includes an interesting contribution from Pedro Pinheiro, Lucene Did You Mean; to use this functionality a dictionary can be created by calling:
    BEGIN
         DidYouMean.indexDictionary('SOURCE_BIG_LIDX','TEXT');
    END;
    This dictionary is orthogonal to the Domain Index updates and should be maintained externally by calling the above procedure again if there are a lot of changes on the column(s) used as index terms.
    Then you can easily query Did You Mean suggestions by using a function:
    SQL> select DidYouMean.suggest('SOURCE_BIG_LIDX','souce vody') from dual;
              source body
    More information about this package is in the Lucene Domain Index documentation, available for download as PDF or on-line through Google Docs.
    Enjoy this new release!!!
    Last minute comment: we are working with the Lucene 3.0.1 release, which works perfectly on 11g databases but is only available for the JDK 1.5 API; as everybody knows, 10g installations are widely used around the world, so we are working on a Lucene Domain Index 3.0.1 distribution back-ported to JDK 1.4 using a retro-translator. Once we have a fully tested release it will be available for download, stay tuned.

    New release of Lucene Domain Index based on Lucene 2.9.1

    Tue, 2010-02-16 06:39
    We have released a new Lucene Domain Index (LDI) based on the 2.9.1 core.
    Since Lucene 2.9.1 has been out for several months, why was this new release of LDI delayed?
    The answer is that we added parallel processing support.
    This new feature is enabled by a new LDI parameter, ParallelDegree; by setting this parameter to a value greater than 1, LDI creates multiple Lucene directory storages to process insertions in parallel.
    Let's see a practical example:
    create index source_big_lidx on test_source_big(text)
    indextype is lucene.luceneindex parameters('BatchCount:250;ParallelDegree:2;SyncMode:OnLine;LogLevel:INFO;AutoTuneMemory:true;PerFieldAnalyzer:line(org.apache.lucene.analysis.KeywordAnalyzer),TEXT(org.apache.lucene.analysis.SimpleAnalyzer);FormatCols:line(0000);ExtraCols:line "line"');

    The above example creates an LDI with ParallelDegree equal to 2 and BatchCount equal to 250. Parallel degree is only used when SyncMode is OnLine; future LDI releases will include parallel operations when SyncMode is Deferred.
    Once this DDL operation is executed, LDI creates three OJVMDirectory Lucene stores, two for parallel index operations plus the master store; then batches of 250 rows are enqueued for indexing in parallel mode.
    A parallel insert implies a parallel operation (document creation and insertion in a secondary store) and a serialized merge into the master store.
    Machines with multi-core chips or RAC installations will speed up LDI index creation/rebuild; obviously, when IO concurrency is the bottleneck there is no performance improvement with ParallelDegree>1. Look at this screenshot from my notebook:
    As you can see, two background processes (AQ processes named ora_j*_test) are running and consuming most of the CPU; these are LDI operations (insert|merge). You can also see two DB Writer processes trying to write in parallel the information that LDI is generating.
    A complete list of changes in this new release is in the ChangeLog.txt file.
    Downloads for 11g and 10g in binary format are in the SF.net project download section.
    Source code, obviously, is available through CVS access.
    On-line documentation is available as a PDF or as a Google Document.

    Revisiting JDBC Connection Pooling and Spring Data Source implementation

    Mon, 2009-12-14 18:11

    Today, I have revisited the Oracle Implementation of OCI Connection Pooling.
    Using my previous post


    Using native Oracle Data Source and connection caching/pooling from Spring
    I have updated the example files.
    The idea behind this tip is the same: do the connection pooling at the JDBC driver level, which is faster and safer.
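    As a quick reminder of what driver-side pooling looks like, here is a minimal sketch using the thin driver's implicit connection cache (the pre-UCP API shipped with the 10g/11g drivers); the URL, credentials and cache limits are placeholders:
    import java.sql.Connection;
    import java.util.Properties;
    import oracle.jdbc.pool.OracleDataSource;

    public class PooledDataSourceFactory {
        public static OracleDataSource create() throws Exception {
            OracleDataSource ds = new OracleDataSource();
            ds.setURL("jdbc:oracle:thin:@//localhost:1521/orcl"); // placeholder URL
            ds.setUser("scott");       // placeholder credentials
            ds.setPassword("tiger");
            ds.setConnectionCachingEnabled(true); // implicit connection cache
            Properties cache = new Properties();
            cache.setProperty("MinLimit", "1");
            cache.setProperty("MaxLimit", "10");
            ds.setConnectionCacheProperties(cache);
            return ds;
        }

        public static void main(String[] args) throws Exception {
            Connection conn = create().getConnection(); // borrowed from the cache
            System.out.println("Connected: " + !conn.isClosed());
            conn.close(); // returns the connection to the cache
        }
    }
    A data source like this can also be exposed as a Spring bean, so the application keeps using a plain javax.sql.DataSource while the pooling stays in the driver.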
    If you are using an old database version, for example 10g, you can try installing the Instant Client at the middle tier, which is available for multiple platforms, including Mac OS X.
    HTH, Marcelo.

    Experiences on Java programming within the databases, tips & tricks, tools, open source libraries and more

    Tue, 2009-10-06 12:56
    This is the topic of my talk during the unconference this year (Tuesday, Oct 13, 10am, Overlook II). But what is an unconference?
    An unconference is a conference where the content of the sessions is driven and created by the participants, generally day-by-day during the course of the event, rather than by a single organizer, or small group of organizers, in advance (Source: Wikipedia). Unconferences came from the realization that "the sum of the expertise of the people in the audience is greater than the sum of expertise of the people on stage" (Source: Dave Winer).
    During this talk I would like to share my experience on Java programming within the database since 1999.
    Especially looking at three scenarios of work:
    • Traditional Java Stored Procedure programming
    • HTTP/HTTPS services through the XMLDB Servlet connector
    • Domain Index and pipe-line table function
    These scenarios are covered by different open source projects used as practical examples: DBPrism CMS, XMLDB Restlet Connector and Lucene Domain Index.
    Each project includes several open source libraries and many tools for deployment and installation.
    I hope I can see you all there next week... Marcelo

    New release of Lucene Domain Index based on Lucene 2.9.0

    Wed, 2009-09-30 06:47
    A new binary distribution of Lucene Domain Index (2.9.0.1.0) for Oracle 10g/11g has been released.
    Lucene Domain Index is an integration of the Lucene project running inside the Oracle JVM and wired into the SQL layer by adding a new index type.
    This new version uses the latest Lucene 2.9.0 core libraries and introduces some of its API changes.
    Here is the complete list of changes:
    • Tested with Oracle 11gR2, 11gR1 and 10.2 databases.
    • DefaultUserDataStore does SAX parsing to get text nodes and attributes from an XMLType value.
    • A SimpleLRUCache is used to load rowids and their associated Lucene doc ids; this reduces memory consumption when querying very big tables. A new parameter has been added, CachedRowIdSize (10000 by default), to control the size of the LRU cache.
    • The Lucene Domain Index core was updated to use TopFieldCollector and to avoid computation time when lscore() is not used.
    • New parameters have been added; NormalizeScore controls whether the maximum score is tracked when querying. These parameters are a consequence of the new Lucene Collector API and boost query performance.
    • A table alias L$MT is defined for the master table associated with the index, to be used in complex queries combining columns from the master table and columns from dependent tables.
    Full documentation is at Google Doc.
    Download binary version for Oracle 10g and 11g.
    One of the biggest changes introduced into the Lucene core libraries is in the TopCollector API; it introduces optimizations for when you don't need to compute the score, track the maximum score, or preserve the Lucene document ID order.
    These changes are reflected in Lucene Domain Index through the usage of the lscore() ancillary operator. For example:
    create table emails (
    emailFrom VARCHAR2(256),
    emailTo VARCHAR2(256),
    subject VARCHAR2(4000),
    emailDate DATE,
    bodyText CLOB)
    /
    and an index created as:
    create index emailbodyText on emails(bodyText) indextype is lucene.LuceneIndex
    parameters('Analyzer:org.apache.lucene.analysis.StopAnalyzer;ExtraCols:emailDate "emailDate",subject "subject",emailFrom "emailFrom",emailTo "emailTo"');
    -- required to Sort by subject
    alter index emailbodyText parameters('FormatCols:subject(NOT_ANALYZED),emailFrom(NOT_ANALYZED),emailTo(NOT_ANALYZED)');
    -- do not track max score
    alter index emailbodyText parameters('NormalizeScore:false');
    These queries show when the score is computed and when it is not.
    SELECT /*+ DOMAIN_INDEX_SORT */ lscore(1) sc,subject
    FROM emails where emailfrom like '%@gmail.com' and
    lcontains(bodytext,'subject:lucene injection',1)>0
    order by lscore(1) DESC;
    • The Lucene score is computed but the maximum score is not tracked; the result is returned in descending relevance order, the default Lucene ordering, with no extra effort.
    SELECT /*+ DOMAIN_INDEX_SORT */ subject
    FROM emails
    where lcontains(bodytext,'subject:lucene injection','subject:ASC',1)>0;
    • The Lucene score is not computed and obviously the maximum score is not tracked.
    Note that if we query using the optimizer hint DOMAIN_INDEX_SORT and lcontains(..,'subject:ASC',..), the score value is not relevant; the result is returned ordered by the column subject ascending.
    A similar result can be obtained using a SQL ORDER BY, but to do that the RDBMS first collects all the rows that match the lcontains() operator and then does an order by; the difference in this simple test is an optimizer cost of 3 versus 2.
    The NormalizeScore parameter is true by default for backward compatibility; applications which assume a score in the range 0..1 will run without any change, but if you want faster response time by avoiding the max score computation you can change it using the alter index parameters DDL command.
    To see what the NormalizeScore parameter really means, look at this example using the Wikipedia English dump:
    select /*+ FIRST_ROWS DOMAIN_INDEX_SORT */ lscore(1),
    extractValue(object_value,'/page/title'),
    extractValue(object_value,'/page/revision/timestamp') "revisionDate"
    from pages where lcontains(object_value,'rownum:[1 TO 2] AND sport','revisionDate:ASC',1)>0;
    with NormalizeScore:false returns:
    1.56775963306427001953125 SCCA 25-FEB-02 03.43.11.000000000 PM +00:00
    1.79172527790069580078125 Sports utility vehicle 04-APR-02 10.31.41.000000000 PM +00:00
    with NormalizeScore:true returns:
    0.875 SCCA 25-FEB-02 03.43.11.000000000 PM +00:00
    1 Sports utility vehicle 04-APR-02 10.31.41.000000000 PM +00:00
    Both results are logically correct because we are asking for the result ordered by revisionDate:ASC, but the computed score is different.
    Well, I will be at Oracle Open World 09; if anybody wants to know more about this kind of internal implementation I will talk at the Oracle Unconference, and obviously we can meet at the OTN Lounge. See you there, Marcelo.
