Feed aggregator

how to improve the SQL performance of an 8 node Exadata Data Warehouse RAC

Tom Kyte - Thu, 2016-10-13 00:46
Hi Tom! Recently I worked with an 8-node Exadata Data Warehouse to tune a slow SQL statement, and I found that statistics had not been gathered on the table and the execution plan is not the best; it executes with a TABLE ACCESS STORAGE FULL, the table...
Categories: DBA Blogs

change running code's execution plan

Tom Kyte - Thu, 2016-10-13 00:46
Hi Tom, Suppose a package is executing in the prod environment; currently proc A is running and after that proc B will be executed in the next 45 mins. I have just spotted some problems in proc B's execution plan (which was generated earlier). I cant chang...
Categories: DBA Blogs

Oracle

Tom Kyte - Thu, 2016-10-13 00:46
Hi Guys, I am using Linux to run the query. I have one query like select * from Emp; I want to spool the results into one file (this file has only the exported data, without the "number of rows selected" message) and whatever the log, i.e. number of lines ...
Categories: DBA Blogs

How to replicate a nested table

Tom Kyte - Thu, 2016-10-13 00:46
I would like to know if there is an easier way to replicate a nested table. For a normal table, we can just do CREATE TABLE a_bkp as SELECT * FROM a; Is there a similarly easy way to create a replica of an existing nested table?
Categories: DBA Blogs

Comments

Tom Kyte - Thu, 2016-10-13 00:46
Can someone please help me with a query to extract the 'comment' section of all the tables? I used this query to get all the table names: Select table_name, owner, tablespace_name from dba_tables; I used the below query to gather the comm...
Categories: DBA Blogs

INDEX MONITORING USAGE

Tom Kyte - Thu, 2016-10-13 00:46
Hi All, It seems that only indexes that are listed in the explain plan are monitored by the index monitoring feature. But indexes that are used for constraint checking might not be listed in the explain plan, and in this case the index monitoring featur...
Categories: DBA Blogs

Customers talk to each other—and everyone wins

Linda Fishman Hoyle - Wed, 2016-10-12 20:40

A Guest Post by Oracle Senior Vice President Chris Leone, Applications Development

Most customer panels are conversations between the moderator and the customer. Not in Cara Capretta’s HCM session at OpenWorld 2016. Two minutes into the conversation, the HR and IT leaders from Kaiser Permanente, SGS, Siemens Corporation, MoneyGram, and IPSOS were laughing and joking with each other.

“We’re in this Together”

It turns out that the panelists knew each other from reference calls. Caroline Villemin, HCM Systems Manager of IPSOS, said her reference call with SGS was “more than just getting feedback on the solution; we really discovered the customer community and everything around it.”

Etienne Delobel, HRIS Director of SGS, described the interaction with a professional colleague this way: “When Caroline and the CFO of IPSOS came for an on-site visit, they spent half a day with us looking into our HCM platform. I told them that if they go with HCM Cloud, we'll work together, and we'll be together." He added, “There are all these great features available in the cloud. The thing is, I do not have enough resources on my team to play with them, to have fun with them. So now I'm looking at how IPSOS is implementing and using things like self-service. They’re prototyping for us.”

Customers Connect Online

In the on-premises world, customers interacted mostly with vendors and partners. Now that everyone in the SaaS world is on the same release, customers are looking more to each other for advice. You see this in online communities such as Oracle’s Customer Connect, where customers join discussion forums on topics covering CX, HCM, ERP, EPM, and more.

The HCM Cloud community alone has 20,000 active community members. In this Profit Magazine interview with Oracle SVP Chris Leone, he points out that activity in the cloud community has grown because people are sharing information across organizations. They’re saying, “Hey, take advantage of this new succession planning feature, or this new career development tool. They’re sharing reports.”

Oracle Listens

In addition, customers use the Idea Lab to submit ideas, collaborate on development, and vote for their favorite ideas. According to Leone, 80 percent of the enhancements in Release 12 were customer-driven. HCM customers have submitted 7,200 ideas so far.

Are you part of this online community? We encourage you to sign up today.

How to install Mirantis OpenStack 9.0 using VirtualBox – part 3

Yann Neuhaus - Wed, 2016-10-12 14:20

In the first two posts of this series, I installed the Fuel environment and deployed OpenStack on the Fuel slave nodes, all of it from the Fuel Master node.

In this post, I will show you all the steps to follow in order to create an instance in OpenStack. Everything is going to be done via the command-line interface and not via Horizon, the OpenStack Dashboard (I will cover that in another post).

 

Let’s begin!

First of all let’s connect to the Fuel Master node and list all the nodes :

[root@fuel ~]# fuel2 node list
+----+------------+--------+-------------+-----------------+-----------+-------------------+---------+---------------+--------+
| id | name | status | os_platform | roles | ip | mac | cluster | platform_name | online |
+----+------------+--------+-------------+-----------------+-----------+-------------------+---------+---------------+--------+
| 1 | Storage | ready | ubuntu | [u'cinder'] | 10.20.0.5 | 08:00:27:80:04:e8 | 1 | None | True |
| 2 | Compute | ready | ubuntu | [u'compute'] | 10.20.0.4 | 08:00:27:cc:85:69 | 1 | None | True |
| 3 | Controller | ready | ubuntu | [u'controller'] | 10.20.0.3 | 08:00:27:35:b0:77 | 1 | None | True |
+----+------------+--------+-------------+-----------------+-----------+-------------------+---------+---------------+--------+

 

Now, I connect to the controller node

[root@fuel ~]# ssh 10.20.0.3
Warning: Permanently added '10.20.0.3' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation:  https://help.ubuntu.com/
Last login: Tue Oct 11 09:55:54 2016 from 10.20.0.2
root@node-3:~#

 

Let’s put an alias for each of the Fuel slave nodes  :

[root@fuel ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.20.0.2       fuel.domain.tld fuel
10.20.0.3       fuel.domain.tld controller
10.20.0.4       fuel.domain.tld compute
10.20.0.5       fuel.domain.tld storage

 

Now I can connect with the aliases:

[root@fuel ~]# ssh controller
The authenticity of host 'controller (10.20.0.3)' can't be established.
ECDSA key fingerprint is 01:b5:15:22:03:d0:f9:bb:86:3a:06:a7:8c:19:bd:22.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'controller,10.20.0.3' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 11 10:12:34 2016 from 10.20.0.2
root@node-3:~# exit
logout

 

Repeat this step for the two remaining nodes (compute and storage).
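The same check can also be scripted from the Fuel Master node; this is just a convenience sketch using the aliases defined above (host keys are accepted automatically, which is acceptable in this lab setup):

[root@fuel ~]# for node in compute storage; do ssh -o StrictHostKeyChecking=no $node hostname; done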

I connect to the controller node:

[root@fuel ~]# ssh controller
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation:  https://help.ubuntu.com/
Last login: Tue Oct 11 11:24:16 2016 from 10.20.0.2
root@node-3:~#

 

I try to run an OpenStack client command (listing the instances, for example):

root@node-3:~# nova list
 ERROR (CommandError): You must provide a username or user ID via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
 root@node-3:~#

This is expected: before you can use the OpenStack command-line clients, you need to obtain a token from Keystone.

To do that, you have to specify where the controller node can reach Keystone and how to get the required information from the OpenStack APIs. Because this is an authentication process, you also need to provide a username and a password.
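For illustration, this is roughly what a single call looks like when the credentials and the endpoint are passed explicitly on the command line (the values match the openrc file shown further down):

root@node-3:~# nova --os-username admin --os-password admin --os-tenant-name admin --os-auth-url http://192.168.0.2:5000/ list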

Doing this every time you use the OpenStack command-line clients quickly becomes inconvenient. Fortunately, OpenStack provides a way to avoid it: create a file with all the environment variables that need to be exported to get this token from Keystone.

Mirantis creates this file for us:

root@node-3:~# ls
openrc

Let’s see what this file contains :

root@node-3:~# cat openrc
#!/bin/sh
export OS_NO_CACHE='true'
export OS_TENANT_NAME='admin'
export OS_PROJECT_NAME='admin'
export OS_USERNAME='admin'
export OS_PASSWORD='admin'
export OS_AUTH_URL='http://192.168.0.2:5000/'
export OS_DEFAULT_DOMAIN='Default'
export OS_AUTH_STRATEGY='keystone'
export OS_REGION_NAME='RegionOne'
export CINDER_ENDPOINT_TYPE='internalURL'
export GLANCE_ENDPOINT_TYPE='internalURL'
export KEYSTONE_ENDPOINT_TYPE='internalURL'
export NOVA_ENDPOINT_TYPE='internalURL'
export NEUTRON_ENDPOINT_TYPE='internalURL'
export OS_ENDPOINT_TYPE='internalURL'
export MURANO_REPO_URL='http://storage.apps.openstack.org/'
export MURANO_PACKAGES_SERVICE='glance'

 

I source the file to export the environment variables:

root@node-3:~# source openrc

 

I check that the variables were exported. It is via OS_AUTH_URL that the controller node can reach Keystone:

root@node-3:~# echo $OS_USERNAME
admin
root@node-3:~# echo $OS_PASSWORD
admin
root@node-3:~# echo $OS_AUTH_URL
http://192.168.0.2:5000/

 

Now I can check whether any instances are running:

root@node-3:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

 

 

Let’s create the first instance

In order to create this instance, I need:

  • a flavor
  • an OS
  • a network
  • a keypair
  • a name

 

I list the available flavors

root@node-3:~# nova flavor-list
 +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
 +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 | 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
 | 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
 | 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
 | 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
 | 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
 | d942587a-c48b-48ca-9c96-cad3c358eb6e | m1.micro  | 64        | 0    | 0         |      | 1     | 1.0         | True      |
 +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+

 

Then I list the available operating system images. There is only one image, created by default by Fuel: a minimal Linux called CirrOS.

root@node-3:~# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| a3708fe7-60f7-49c9-91ed-a6eee1ab8ba4 | TestVM | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

 

I also list the networks (they were created by Fuel during the deployment of OpenStack):

root@node-3:~# neutron net-list
+--------------------------------------+--------------------+-------------------------------------------------------+
| id                                   | name               | subnets                                               |
+--------------------------------------+--------------------+-------------------------------------------------------+
| b22e82c9-df6b-4580-a77e-cde8e93f30d8 | admin_floating_net | 7acc6b15-1c00-4447-b4f7-0fcced7a594b 172.16.0.0/24    |
| 09b1e122-cb63-44d5-af0b-244d3aa06331 | admin_internal_net | aea6fc29-dfb6-4586-9b09-70ce5c992315 192.168.111.0/24 |
+--------------------------------------+--------------------+-------------------------------------------------------+

 

I create a keypair that will be injected into the future instance so that I can log in without password authentication. This step is not strictly necessary here, because the CirrOS image provides a default password, but it is helpful if you use other cloud images (Ubuntu, CentOS, etc.):
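If no public key is available yet on the controller node, one can be generated first (just a sketch; the keypair-add command below would then point --pub-key at ~/.ssh/id_rsa.pub instead of the authorized_keys file):

root@node-3:~# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa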

root@node-3:~# nova keypair-add --pub-key ~/.ssh/authorized_keys mykey
root@node-3:~# nova keypair-list
+-------+------+-------------------------------------------------+
| Name  | Type | Fingerprint                                     |
+-------+------+-------------------------------------------------+
| mykey | ssh  | ee:56:e6:c0:7b:e2:d5:2b:61:23:d7:76:49:b3:d8:d5 |
+-------+------+-------------------------------------------------+

 

Now that I have all the information I need, let's create the instance, which I will name InstanceTest01:

root@node-3:~# nova boot \
> --flavor m1.micro \
> --image TestVM \
> --key-name mykey \
> --nic net-id=09b1e122-cb63-44d5-af0b-244d3aa06331 \
> InstanceTest01
+--------------------------------------+-------------------------------------------------+
| Property | Value |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | instancetest01 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-dyo0086w |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | bUQJDtwM3vjr |
| config_drive | |
| created | 2016-10-11T13:08:34Z |
| description | - |
| flavor | m1.micro (d942587a-c48b-48ca-9c96-cad3c358eb6e) |
| hostId | |
| host_status | |
| id | b84b49f1-1b01-4aa9-bd9c-c8691fca9298 |
| image | TestVM (a3708fe7-60f7-49c9-91ed-a6eee1ab8ba4) |
| key_name | mykey |
| locked | False |
| metadata | {} |
| name | InstanceTest01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | abfec6fc54c14da28f9971e04c344ec8 |
| updated | 2016-10-11T13:08:36Z |
| user_id | d0e1e11f84064f4c8aa02381f0d42ed2 |
+--------------------------------------+-------------------------------------------------+

Here is the instance running:

root@node-3:~# nova list
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+
| b84b49f1-1b01-4aa9-bd9c-c8691fca9298 | InstanceTest01 | ACTIVE | - | Running | admin_internal_net=192.168.111.3 |
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+

So the instance is up and running.

 

We connect to the compute node to check if the instance is really running on it:

Connection to controller closed.
[root@fuel ~]# ssh compute
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 11 18:07:23 2016 from 10.20.0.2
root@node-2:~# virsh list
Id Name State
----------------------------------------------------
2 instance-00000001 running

root@node-2:~#

Yes, the instance is running.

I need to add an SSH rule to access the instance, and I will also add an ICMP one. These rules are managed by security groups.

Let’s try to ping the instance just created, from now all the operations are done on the controller node  :

 root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ping 192.168.111.3
 PING 192.168.111.3 (192.168.111.3) 56(84) bytes of data.
 ^C
 --- 192.168.111.3 ping statistics ---
 3 packets transmitted, 0 received, 100% packet loss, time 2004ms

 

I cannot ping my instance, so let's see which rules are currently in the default security group created by OpenStack:

root@node-3:~# nova secgroup-list
 +--------------------------------------+---------+------------------------+
 | Id                                   | Name    | Description            |
 +--------------------------------------+---------+------------------------+
 | 954e4d85-ea38-4a4b-bbe6-e355946fdfb0 | default | Default security group |
 +--------------------------------------+---------+------------------------+
root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
|             |           |         |           | default      |
+-------------+-----------+---------+-----------+--------------+

There are no ICMP or SSH rules, only the two default group-to-group rules. So I add the ICMP rule to the default security group:

root@node-3:~# nova secgroup-add-rule 954e4d85-ea38-4a4b-bbe6-e355946fdfb0 icmp -1 -1 0.0.0.0/0
 +-------------+-----------+---------+-----------+--------------+
 | IP Protocol | From Port | To Port | IP Range  | Source Group |
 +-------------+-----------+---------+-----------+--------------+
 | icmp        | -1        | -1      | 0.0.0.0/0 |              |
 +-------------+-----------+---------+-----------+--------------+

 

I check that the rule was added:

root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
 +-------------+-----------+---------+-----------+--------------+
 | IP Protocol | From Port | To Port | IP Range  | Source Group |
 +-------------+-----------+---------+-----------+--------------+
 |             |           |         |           | default      |
 |             |           |         |           | default      |
 | icmp        | -1        | -1      | 0.0.0.0/0 |              |
 +-------------+-----------+---------+-----------+--------------+

 

Let’s test if I can ping the instance, we need to use the network namespace qrouter which is basically the router that the controller node will use to reach the instance on the compute node

root@node-3:~# ip netns list
qdhcp-09b1e122-cb63-44d5-af0b-244d3aa06331
qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934
haproxy
vrouter

 


 

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ping 192.168.111.3
 PING 192.168.111.3 (192.168.111.3) 56(84) bytes of data.
 64 bytes from 192.168.111.3: icmp_seq=1 ttl=64 time=2.29 ms
 64 bytes from 192.168.111.3: icmp_seq=2 ttl=64 time=0.789 ms
 ^C
 --- 192.168.111.3 ping statistics ---
 2 packets transmitted, 2 received, 0% packet loss, time 1001ms

 

I do the same with the SSH rule.

First, I test whether I can access my instance:

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ssh cirros@192.168.111.3 -v
 OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: /etc/ssh/ssh_config line 19: Applying options for *
 debug1: Connecting to 192.168.111.3 [192.168.111.3] port 22.
 ^C

 

No, I cannot, so I add the rule:

 
root@node-3:~# nova secgroup-add-rule  954e4d85-ea38-4a4b-bbe6-e355946fdfb0 tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

 

I check that the rule was added:

root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

I connect to my instance; the default username is cirros:

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ssh cirros@192.168.111.3 -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 192.168.111.3 [192.168.111.3] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
debug1: Remote protocol version 2.0, remote software version dropbear_2015.67
debug1: no match: dropbear_2015.67
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 1b:f1:1c:34:13:83:cd:b1:37:a9:e4:32:37:65:91:c4
The authenticity of host '192.168.111.3 (192.168.111.3)' can't be established.
ECDSA key fingerprint is 1b:f1:1c:34:13:83:cd:b1:37:a9:e4:32:37:65:91:c4.
Are you sure you want to continue connecting (yes/no)? yes

 

 

Type yes, and the password is cubswin:)

Warning: Permanently added '192.168.111.3' (ECDSA) to the list of known hosts.
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /root/.ssh/id_rsa
debug1: Trying private key: /root/.ssh/id_dsa
debug1: Trying private key: /root/.ssh/id_ecdsa
debug1: Trying private key: /root/.ssh/id_ed25519
debug1: Next authentication method: password
cirros@192.168.111.3's password: 
debug1: Authentication succeeded (password).
Authenticated to 192.168.111.3 ([192.168.111.3]:22).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
$ 
$ # "I am connected to my instance"
$

So now I am connected to my instance.

 

Let’s check if I can ping google

$ ping www.google.com
PING www.google.com (172.217.18.100): 56 data bytes
64 bytes from 172.217.18.100: seq=0 ttl=59 time=29.485 ms
64 bytes from 172.217.18.100: seq=1 ttl=59 time=9.089 ms
64 bytes from 172.217.18.100: seq=2 ttl=59 time=27.027 ms
64 bytes from 172.217.18.100: seq=3 ttl=59 time=9.992 ms
64 bytes from 172.217.18.100: seq=4 ttl=59 time=26.734 ms

That wraps up this series. Mirantis makes OpenStack very simple to install, but it requires good skills in Puppet and Astute if you want to customize the installation (for example, to create a network node role or to customize the MySQL database installation on the controller node).

In other blogs, I will show you how to add nodes to the Fuel environment to make the cloud more powerful.

 

This article, How to install Mirantis OpenStack 9.0 using VirtualBox – part 3, first appeared on Blog dbi services.

Court Orders Rimini Street to Stop Unlawful Conduct and Awards Oracle $27.7 Million in Prejudgment Interest

Oracle Press Releases - Wed, 2016-10-12 14:00
Press Release
Court Orders Rimini Street to Stop Unlawful Conduct and Awards Oracle $27.7 Million in Prejudgment Interest

Redwood Shores, Calif.—Oct 12, 2016

Yesterday, the United States District Court for the District of Nevada issued two significant rulings in Oracle’s litigation against Rimini Street and its CEO Seth Ravin.

The first ruling is a permanent injunction barring Rimini Street from continuing to infringe Oracle’s copyrights and other unlawful acts. The Court previously determined that a permanent injunction was warranted, noting Rimini Street’s “callous disregard for Oracle’s copyrights and computer systems when it engaged in the infringing conduct” and that “Rimini’s business model was built entirely on its infringement of Oracle’s copyrighted software and its improper access and downloading of data from Oracle’s website and computer systems.”

The Court’s four-page permanent injunction prohibits certain copying, distribution, and use of Oracle’s copyrighted software and documentation by Rimini Street and also imposes limitations on Rimini Street’s access to Oracle’s websites. The order states, for example, that Rimini Street may not use a customer’s software environment “to develop or test software updates or modifications for the benefit of any other licensee.” The order also prohibits Rimini Street’s preparation of certain derivative works from Oracle’s copyrighted software. The order applies not only to Rimini Street, but also “its subsidiaries, affiliates, employees, directors, officers, principals, and agents.”

“This permanent injunction imposes important restrictions on Rimini Street,” said Oracle’s General Counsel Dorian Daley, “and Oracle is grateful that the Court has taken steps to prevent continuing unlawful acts by Rimini Street and its executives.”

Although Rimini Street has stated that there was “no expected impact” from any injunction, Rimini Street also told the Court that it “could suffer significant harm to its current business practices if the proposed injunction were entered,” which it now has been. These contradictory positions raise the issue of whether Rimini is misleading the Court or making misrepresentations to customers.

In its second ruling, the Court awarded Oracle approximately $27.7 million in prejudgment interest, with an additional amount of prejudgment interest to be awarded based on the date of the final judgment. This is in addition to the $50 million jury verdict and the $46.2 million in attorneys’ fees and costs awarded to Oracle last month.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Presenting at KScope can change your life

Dietmar Aust - Wed, 2016-10-12 13:16
Back in 2008 I was invited by Scott Spendolini to speak about my experiences with Oracle APEX at the German telecom shops at the KScope conference in New Orleans.

It was a truly great experience. 

At that time, the system I had built was one of the larger known deployments of Oracle APEX in the community, serving 6,000 users in the 800 telecom shops in Germany with 400,000 hits per day on the system.

Not only could I showcase what I had built with Oracle APEX 2.2 but I could even get more connected with the Oracle APEX community. It was awesome.

I had the chance to meet many people from the Oracle forums in person, like Scott Spendolini, Patrick Wolf, Dimitri Gielis, Roel Hartman, Karen Cannell, Anton Nielsen, John Scott but also many people from the APEX team itself like Mike Hichwa, Joel Kallman, Carl Backstrom, David Peake, Scott Spadafore and many others.

At this conference you will certainly meet most members of the APEX team itself. It is a great opportunity to get in touch with the people in person ... this just makes a HUGE difference. So whenever you need support later on ... your name will be known and people will understand that you contribute as well. It makes a difference!

From that time on in 2008 I got hooked and got more and more involved in the community. Each project became an opportunity to research yet another interesting topic for the next conference.

And I presented each year (I missed Long Beach ... what a bummer :( ) until now ... and I will submit again this week ... because the deadline is coming up (14.10.2016)!!!

This conference sets itself apart not only through its extremely high-class content but also through the social activities; there is so much going on. They always select really nice locations, everybody is relaxed and has so much fun. The APEX presentations are typically well attended ... even more so than those in the other tracks.

I can only encourage you to submit your presentation since the deadline is coming up soon: 14.10.2016 !!!

In order to submit a really compelling presentation, please read these posts for guidance:
And remember ... we need presenters on all levels: beginners, intermediates and experts.

Hope to see you in San Antonio at KScope 2017 !!!

Cheers, 
~Dietmar. 


Oracle Documents Cloud Service Named in the Constellation Research ShortList for Enterprise File Sharing Suites

WebCenter Team - Wed, 2016-10-12 10:17

The Constellation ShortList presents vendors in different categories of the market relevant to early adopters. In addition, products included meet the threshold criteria for this category as determined by Constellation Research.

This Constellation ShortList of vendors for a market category is compiled through conversations with early adopter clients, independent analysts, and briefings with vendors and partners.

About the Enterprise File Sharing Suites Constellation ShortList

Enterprise file sharing and synchronization solutions allow individuals and teams to share and synchronize documents, photos, videos and other files across multiple devices in a secure manner. This category refers to people collaborating on and sharing business content in a distributed approach, allowing for real-time editing and file distribution as needed. The most advanced vendors enable developers to create business, content-centric, line-of-business applications that incorporate files as part of that process.

Source: Constellation Research, Inc., “Constellation Research ShortList for Enterprise File Sharing Suites,” Alan Lepofsky, October 12, 2016.

Oracle Documents Cloud Service was included in the Constellation Research ShortList for Enterprise File Sharing Suites.

In today’s world, your employees need instant access to the content that drives your business. Whether they are at their desk, on the go, or working remotely, the ability to easily and securely access and collaborate on business content is one of the keys to being a nimble enterprise. Oracle Documents Cloud Service, an enterprise content collaboration solution with rich social features, gives you the file sync and share capabilities your employees need while providing the control and security your IT organization requires. All of the content can be accessed and collaborated on in context of business workflows, business applications and office productivity tools.

Oracle Documents Cloud Service lets you easily store your business content in the cloud, securely access it from anywhere, and share it with your colleagues and partners in real time. With a fast, intuitive web interface and easy-to-use desktop and mobile applications, employees can view and collaborate on files even when offline, keeping your organization running efficiently and your employees staying productive from any location.

We were pleased that Oracle Documents Cloud Service was included in the Constellation Research ShortList for Enterprise File Sharing Suites. You can read the full report here.

Streaming data from Oracle using Oracle GoldenGate and Kafka Connect

Rittman Mead Consulting - Wed, 2016-10-12 10:00

This article was also posted on the Confluent blog, head over there for more great Kafka-related content!

Kafka Connect is part of the Confluent Platform, providing a set of connectors and a standard interface with which to ingest data to Kafka, and store or process it at the other end. Initially launched with a JDBC source and HDFS sink, the list of connectors has grown to include a dozen certified connectors, and twice as many again 'community' connectors. These cover technologies such as MongoDB, InfluxDB, Kudu, MySQL - and of course, as with any streaming technology, Twitter, the de-facto source for any streaming how-to. Two connectors of note that were recently released are for Oracle GoldenGate as a source, and Elasticsearch as a sink. In this article I'm going to walk through how to set these up, and demonstrate how the flexibility and power of the Kafka Connect platform can enable rapid changes and evolutions to the data pipeline.

The above diagram shows an overview of what we're building. Change Data Capture (CDC) on the database streams every single change made to the data over to Kafka, from where it is streamed into Elasticsearch. Once in Elasticsearch it can be viewed in tools such as Kibana, for search and analytics:

Oracle GoldenGate (OGG) is a realtime data replication tool, falling under the broad umbrella of Change Data Capture (CDC) software, albeit at the high end in terms of functionality. It supports multiple RDBMS platforms, including - obviously - Oracle, as well as DB2, MySQL, and SQL Server. You can find the full certification list here. It uses log-based technology to stream all changes to a database from source, to target - which may be another database of the same type, or a different one. It is commonly used for data integration, as well as replication of data for availability purposes.

In the context of Kafka, Oracle GoldenGate provides a way of streaming all changes made to a table, or set of tables, and making them available to other processes in our data pipeline. These processes could include microservices relying on an up-to-date feed of data from a particular table, as well as persisting a replica copy of the data from the source system into a common datastore for analysis alongside data from other systems.

Elasticsearch is an open-source distributed document store, used heavily for both search, and analytics. It comes with some great tools including Kibana for data discovery and analysis, as well as a Graph tool. Whilst Elasticsearch is capable of being a primary data store in its own right, it is also commonly used as a secondary store in order to take advantage of its rapid search and analytics capabilities. It is the latter use-case that we're interested in here - using Elasticsearch to store a copy of data produced in Oracle.

Confluent's Elasticsearch Connector is an open source connector plug-in for Kafka Connect that sends data from Kafka to Elasticsearch. It is highly efficient, utilising Elasticsearch's bulk API. It also supports all Elasticsearch's data types which it automatically infers, and evolves the Elasticsearch mappings from the schema stored in Kafka records.

Oracle GoldenGate can be used with Kafka to directly stream every single change made to your database. Everything that happens in the database gets recorded in the transaction log (OGG or not), and OGG takes that and sends it to Kafka. In this blog we're using Oracle as the source database, but don't forget that Oracle GoldenGate supports many sources. To use Oracle GoldenGate with Kafka, we use the "Oracle GoldenGate for Big Data" version (which has different binaries). Oracle GoldenGate has a significant advantage over the JDBC Source Connector for Kafka Connect in that it is a 'push' rather than a periodic 'pull' from the source, thus it:

  1. Has much lower latency
  2. Requires less resource on the source database, since OGG mines the transaction log instead of directly querying the database for changes made based on a timestamp or key.
  3. Scales better, since entire schemas or whole databases can be replicated with minimal configuration changes. The JDBC connector requires each table, or SQL statement, to be specified.

Note that Oracle Golden Gate for Big Data also has its own native Kafka Handler, which can produce data in various formats directly to Kafka (rather than integrating with the Kafka Connect framework).
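For comparison, here is a rough sketch of what a Kafka Connect JDBC source configuration for the same schema might look like; note how each table has to be listed explicitly and how changes are detected by polling on a timestamp and/or incrementing key. All property values here are illustrative assumptions, not taken from the environment built in this article:

name=jdbc-oracle-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:oracle:thin:soe/soe@//localhost:1521/orcl
table.whitelist=LOGON
mode=timestamp+incrementing
timestamp.column.name=LOGON_DATE
incrementing.column.name=LOGON_ID
topic.prefix=jdbc-
poll.interval.ms=5000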

Environment

I'm using the Oracle BigDataLite VM 4.5 as the base machine for this. It includes Oracle 12c, Oracle GoldenGate for Big Data, as well as a CDH installation which provides HDFS and Hive for us to also integrate with later on.

On to the VM you need to also install:

  • Confluent Plaform 3.0
  • Oracle GoldenGate Kafka Connect connector
  • Elasticsearch Kafka Connect connector
  • Elasticsearch 2.4

To generate the schema and continuous workload, I used Swingbench 2.5.

For a step-by-step guide on how to set up these additional components, see this gist.

Starting Confluent Platform

There are three processes that need starting up, and each retains control of the session, so you'll want to use screen/tmux here, or wrap the commands in nohup [.. command ..] & so that they don't die when you close the window.
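For example, the Kafka broker start-up shown below could be wrapped like this so that it keeps running after the terminal closes (the log file path is just an example):

nohup sudo /usr/bin/kafka-server-start /etc/kafka/server.properties > ~/kafka-server.log 2>&1 &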

On BigDataLite the Zookeeper service is already installed, and should have started at server boot:

[oracle@bigdatalite ~]$ sudo service zookeeper-server status
zookeeper-server is running

If it isn't running, then start it with sudo service zookeeper-server start.

Next start up Kafka:

# On BigDataLite I had to remove this folder for Kafka to start
sudo rm -r /var/lib/kafka/.oracle_jre_usage
sudo /usr/bin/kafka-server-start /etc/kafka/server.properties

and finally the Schema Registry:

sudo /usr/bin/schema-registry-start /etc/schema-registry/schema-registry.properties

Note that on BigDataLite the Oracle TNS Listener is using port 8081 - the default for the Schema Registry - so I amended /etc/schema-registry/schema-registry.properties to change

listeners=http://0.0.0.0:8081

to

listeners=http://0.0.0.0:18081
Configuring Oracle GoldenGate to send transactions to Kafka Connect

Oracle GoldenGate (OGG) works on the concept of an Extract process which reads the source-specific transaction log and writes an OGG trail file in a generic OGG format. From this a Replicat process reads the trail file and delivers the transactions to the target.

In this example we'll be running the Extract against the Oracle database; specifically, the SOE schema that Swingbench generated for us, and which we'll be able to generate live transactions against using Swingbench later on.

The Replicat will be sending the transactions from the trail file over to Kafka Connect.

I'm assuming here that you've already successfully defined and set running an extract against the Swingbench schema (SOE), with a trail file being delivered to /u01/ogg-bd/dirdat. For a step-by-step guide on how to do this all from scratch, see here.

You can find information about the OGG-Kafka Connect adapter in the README here.

To use it, first configure the replicat and supporting files as shown.

  1. Replicat parameters

    Create /u01/ogg-bd/dirprm/rconf.prm with the following contents:

    REPLICAT rconf
    TARGETDB LIBFILE libggjava.so SET property=dirprm/conf.props
    REPORTCOUNT EVERY 1 MINUTES, RATE
    GROUPTRANSOPS 1000
    MAP *.*.*, TARGET *.*.*;
    
  2. Handler configuration

    Edit the existing /u01/ogg-bd/dirprm/conf.props and amend gg.classpath as shown below. The classpath shown works for BigDataLite - on your own environment you need to make the necessary jar files available per the dependencies listed in the README.

    gg.handlerlist=confluent
    
    
    #The handler properties
    gg.handler.confluent.type=oracle.goldengate.kafkaconnect.KafkaConnectHandler
    gg.handler.confluent.kafkaProducerConfigFile=confluent.properties
    gg.handler.confluent.mode=tx
    gg.handler.confluent.sourceRecordGeneratorClass=oracle.goldengate.kafkaconnect.DefaultSourceRecordGenerator
    
    
    #The formatter properties
    gg.handler.confluent.format=oracle.goldengate.kafkaconnect.formatter.KafkaConnectFormatter
    gg.handler.confluent.format.insertOpKey=I
    gg.handler.confluent.format.updateOpKey=U
    gg.handler.confluent.format.deleteOpKey=D
    gg.handler.confluent.format.treatAllColumnsAsStrings=false
    gg.handler.confluent.format.iso8601Format=false
    gg.handler.confluent.format.pkUpdateHandling=abend
    
    
    goldengate.userexit.timestamp=utc
    goldengate.userexit.writers=javawriter
    javawriter.stats.display=TRUE
    javawriter.stats.full=TRUE
    
    
    gg.log=log4j
    gg.log.level=INFO
    
    
    gg.report.time=30sec
    
    
    #Set the classpath here
    gg.classpath=dirprm/:/u01/ogg-bd/ggjava/resources/lib*:/usr/share/java/kafka-connect-hdfs/*:/usr/share/java/kafka/*
    
    
    javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=.:ggjava/ggjava.jar:./dirprm
    

    Note the gg.log.level setting - this can be very useful to switch to DEBUG if you're investigating problems with the handler.

  3. Kafka Connect settings

    Edit the existing /u01/ogg-bd/dirprm/confluent.properties and amend the schema.registry.url URL to reflect the port change made above. All other values can be left as defaults.

    bootstrap.servers=localhost:9092
    
    
    value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
    key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
    schema.registry.url=http://localhost:18081
    
    
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    

Now we can add the replicat. If not already, launch ggsci from the ogg-bd folder:

cd /u01/ogg-bd/
rlwrap ./ggsci

and define the replicat, and start it

ADD REPLICAT RCONF, EXTTRAIL ./dirdat/rt
START RCONF

Check its status:

GGSCI (bigdatalite.localdomain) 13> INFO RCONF

REPLICAT   RCONF     Last Started 2016-09-02 15:39   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           25415
Log Read Checkpoint  File ./dirdat/rt000000000
                     First Record  RBA 0

Note that on BigDataLite 4.5 VM there are two existing replicats configured, RKAFKA and RMOV. You can ignore these, or delete them if you want to keep things simple and clear.
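If you do decide to remove them, it can be done from ggsci along these lines (a sketch; depending on how they were configured you may be prompted for confirmation or need to log in to the database first):

STOP REPLICAT RKAFKA
DELETE REPLICAT RKAFKA
STOP REPLICAT RMOV
DELETE REPLICAT RMOV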

Testing the Replication

We'll run Swingbench in a moment to generate some proper throughput, but let's start with a single transaction to check things out.

Connect to Oracle and insert a row, not forgetting to commit the transaction (he says, from frustrating experience ;) )

[oracle@bigdatalite ogg]$ sqlplus soe/soe@orcl

SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 2 15:48:18 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Fri Sep 02 2016 12:48:22 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> insert into soe.logon values (42,42,sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from soe.logon where logon_id=42;

LOGON_ID CUSTOMER_ID LOGON_DAT
---------- ----------- ---------
42       50865 12-AUG-11
42          42 02-SEP-16

Now if you list the topics defined within Kafka, you should see a new one has been created, for the SOE.LOGON table:

[oracle@bigdatalite dirrpt]$ kafka-topics --zookeeper localhost:2181 --list
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ORCL.SOE.LOGON
_schemas

and you can view the record:

[oracle@bigdatalite dirrpt]$ kafka-console-consumer --zookeeper localhost:2181 --topic ORCL.SOE.LOGON --from-beginning

{"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"table"},{"type":"string","optional":false,"field":"op_type"},{"type":"string","optional":false,"field":"op_ts"},{"type":"string","optional":false,"field":"current_ts"},{"type":"string","optional":false,"field":"pos"},{"type":"double","optional":true,"field":"LOGON_ID"},{"type":"double","optional":true,"field":"CUSTOMER_ID"},{"type":"string","optional":true,"field":"LOGON_DATE"}],"optional":false,"name":"ORCL.SOE.LOGON"},"payload":{"table":"ORCL.SOE.LOGON","op_type":"I","op_ts":"2016-09-02 14:56:26.000411","current_ts":"2016-09-02 15:56:34.111000","pos":"00000000000000002010","LOGON_ID":42.0,"CUSTOMER_ID":42.0,"LOGON_DATE":"2016-09-02:15:56:25"}}

Hit Ctrl-C to cancel the consumer -- otherwise it'll sit there and wait for additional messages to be sent to the topic. Useful for monitoring when we've got lots of records flowing through, but not so useful now.

The message is JSON, so a useful tool to install is jq:

sudo yum install -y jq

You can then pipe the output of kafka-console-consumer through jq to pretty-print it:

[oracle@bigdatalite dirrpt]$ kafka-console-consumer --zookeeper localhost:2181 --topic ORCL.SOE.LOGON --from-beginning|jq '.'

{
  "payload": {
    "LOGON_DATE": "2016-09-02:15:56:25",
    "CUSTOMER_ID": 42,
    "LOGON_ID": 42,
    "pos": "00000000000000002010",
    "current_ts": "2016-09-02 15:56:34.111000",
    "op_ts": "2016-09-02 14:56:26.000411",
    "op_type": "I",
    "table": "ORCL.SOE.LOGON"
  },
[...]

or even show just sections of the message using jq's syntax (explore it here):

[oracle@bigdatalite dirrpt]$ kafka-console-consumer --zookeeper localhost:2181 --topic ORCL.SOE.LOGON --from-beginning|jq '.payload.op_ts'

"2016-09-02 14:56:26.000411"

So we've got successful replication of Oracle transactions into Kafka, via Oracle GoldenGate. Now let's bring Elasticsearch into the mix.

Configuring Elasticsearch

We're going to use Elasticsearch as a destination for storing the data coming through Kafka from Oracle. Each Oracle table will map to a separate Elasticsearch index. In Elasticsearch an 'index' is roughly akin to an RDBMS table, a 'document' to a row, a 'field' to a column, and a 'mapping' to a schema.

Elasticsearch itself needs no configuration out of the box if you just want to get up and running with it; you simply execute it:

/opt/elasticsearch-2.4.0/bin/elasticsearch

Note that this wouldn't suffice for a Production deployment, in which you'd want to allocate heap space, check open file limits, configure data paths, and so on.
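As a rough illustration of the kind of settings involved (the values here are examples only, not recommendations):

# Elasticsearch 2.x takes its heap size from an environment variable
export ES_HEAP_SIZE=4g
# Raise the open file limit for this session before starting Elasticsearch
ulimit -n 65536
/opt/elasticsearch-2.4.0/bin/elasticsearch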

With Elasticsearch running, you can then load Kopf, which is a web-based admin plugin. You'll find it at http://<server>:9200/_plugin/kopf

From Kopf you can see which nodes there are in the Elasticsearch cluster (just the one at the moment, with a random name inspired by Marvel), along with details of the indices as they're created - in the above screenshot there are none yet, because we've not loaded any data.
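To get a feel for the index and document terminology before any real data arrives, you could index, fetch, and then remove a throwaway document via the REST API (the index, type, and field names here are purely illustrative and unrelated to the data we'll load):

$ curl -X "PUT" "http://localhost:9200/testindex/testdoc/1" -d '{"greeting": "hello"}'
$ curl -s -X "GET" "http://localhost:9200/testindex/testdoc/1?pretty=true"
$ curl -X "DELETE" "http://localhost:9200/testindex"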

Setting up the Elasticsearch Kafka Connect handler

Create a configuration file for the Elasticsearch Kafka Connect handler. I've put it in with the Elasticsearch configuration itself at /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties; you can use other paths if you want.

The defaults mostly suffice to start with, but we do need to update the topics value:

name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
connection.url=http://localhost:9200
type.name=kafka-connect
# Custom config
topics=ORCL.SOE.LOGON

Because Elasticsearch indices cannot be uppercase, we need to provide a mapping from the Kafka topic to the Elasticsearch index, so add a configuration to the file:

topic.index.map=ORCL.SOE.LOGON:soe.logon

If you don't do this you'll get an InvalidIndexNameException. You also need to add

topic.key.ignore = ORCL.SOE.LOGON

Note that the global key.ignore is currently ignored if you are also overriding another topic parameter. If you don't set this flag for the topic, you'll get org.apache.kafka.connect.errors.DataException: STRUCT is not supported as the document id.

Now we can run our connector. I'm setting the CLASSPATH necessary to pick up the connector itself, as well as its dependencies. I also set JMX_PORT so that the metrics are exposed on JMX, to help with debugging and monitoring.

export CLASSPATH=/opt/kafka-connect-elasticsearch/*
export JMX_PORT=4243
/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties

You'll not get much from the console after the initial flurry of activity, except:

[pool-2-thread-1] INFO org.apache.kafka.connect.runtime.WorkerSinkTask - WorkerSinkTask{id=elasticsearch-sink-0} Committing offsets

But if you head over to Elasticsearch you should now have some data. In Kopf you'll see that there are now 'documents' in the index:

In addition the header bar of Kopf has gone a yellow/gold colour, because your Elasticsearch cluster is now in "YELLOW" state - we'll come back to this and the cause (unassigned shards) shortly.

Interactions with Elasticsearch are primarily through a REST API, which you can use to query the number of records in an index:

[oracle@bigdatalite ~]$ curl -s -X "GET" "http://localhost:9200/soe.logon/_count?pretty=true"
{
"count" : 4,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
}
}

and you can pair it up with jq as above to select just one of the fields:

$ curl -s -X "GET" "http://localhost:9200/soe.logon/_count?pretty=true"|jq '.count'
4

To see the data itself:

$ curl -s -X "GET" "http://localhost:9200/soe.logon/_search?pretty=true"
{
  "took" : 25,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 4,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "soe.logon",
      "_type" : "kafka-connect",
      "_id" : "ORCL.SOE.LOGON+0+3",
      "_score" : 1.0,
      "_source" : {
    "table" : "ORCL.SOE.LOGON",
    "op_type" : "I",
    "op_ts" : "2016-09-05 14:46:16.000436",
    "current_ts" : "2016-09-05 15:46:21.860000",
    "pos" : "00000000000000002748",
    "LOGON_ID" : 42.0,
    "CUSTOMER_ID" : 42.0,
    "LOGON_DATE" : "2016-09-05:15:46:11"
      }
    },

This is looking good! But ... there's a wrinkle. Let's fire up Kibana, an analytical tool for data in Elasticsearch, and see why.

/opt/kibana-4.6.0-linux-x86_64/bin/kibana

Go to http://<server>:5601/ and the first thing you'll see (assuming this is the first time you've run Kibana) is this:

Elasticseach, Index Mappings, and Dynamic Templates

Kibana is a pretty free-form analysis tool, and you don't have to write SQL, define dimensions, and so on -- but what you do have to do is tell it where to find the data. So let's specify our index name, which in this example is soe.logon:

Note that the Time-field name remains blank. If you untick Index contains time-based events and then click Create you'll see the index fields and their types:

Columns that are timestamps are coming across as strings - which is an issue here, because Time is one of the dimensions by which we'll pretty much always want to analyse data, and if it's not present Kibana (or any other user of the Elasticsearch data) can't do its clever time-based filtering and aggregation, such as this example taken from another (time-based) Elasticsearch index:

As a side note, the schema coming through from OGG Kafka Connect connector is listing these timestamp fields as strings, as we can see with a bit of fancy jq processing to show the schema entry for one of the fields (op_ts):

$ kafka-console-consumer --zookeeper localhost:2181 --topic ORCL.SOE.LOGON --from-beginning --max-messages 1|jq '.schema.fields[] | select (.field | contains("op_ts"))'
{
    "field": "op_ts",
    "optional": false,
    "type": "string"
}

This string-based schema is actually coming through from the OGG replicat itself - whilst the Kafka Connect handler interprets and assumes the datatypes of columns such as numbers, it doesn't for timestamps.

So - how do we fix these data types in Elasticsearch so that we can make good use of the data? Enter Dynamic Templates. These enable you to specify the mapping (similar to a schema) of a field before an index containing it is first created, and you can wildcard field names too so that, for example, anything with a _ts suffix is treated as a timestamp data type.

To configure the dynamic template we'll use the REST API again, and whilst curl is fine for simple and repeated command line work, we'll switch to the web-based Elasticsearch REST API client, Sense. Assuming that you installed it following the process above you can access it at http://<server>:5601/app/sense.

Click Get to work to close the intro banner, and in the main editor paste the following JSON (gist here)

DELETE /_template/kafkaconnect/
PUT /_template/kafkaconnect/
{
  "template": "soe*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "dates": {
            "match": "*_ts",
            "mapping": {
              "type": "date",
              "format": "YYYY-MM-dd HH:mm:ss.SSSSSS"
            }
          }
        },
        {
          "non_analysed_string_template": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}

What this is doing:

  1. The DELETE is just there so that you can re-run these statements, since Elasticsearch won't update an existing template.
  2. Any index beginning with soe will be matched against this template.
  3. This is based on a single-node Elasticsearch instance, so setting the number of replicas to zero, and shards to one. In a multi-node Production cluster you'd want to set these differently. If you leave replicas as the default (1) then your Elasticsearch cluster will remain in "YELLOW" health status as there'll forever be unassigned shards.
  4. The dates template matches any field with _ts suffix and sets it to a Date type. The inbound data must match the format shown. For details of the date format specifics, see the JodaTime documentation.
  5. The non_analysed_string_template template matches any string field and creates two instances of it; one analyzed and one not. Analyzed is where it gets tokenized which is useful for full-text searching etc, and non-analyzed is necessary for aggregations against the full field value. For example, "New York" would otherwise aggregate as 'New' and a separate instance 'York'.

Put the cursor over each statement and click the green play arrow that appears to the right of the column.

For the DELETE statement you'll get an error the first time it's run (because the index template isn't there to delete), and the PUT should succeed with

{
    "acknowledged": true
}
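
If you'd rather script this than use Sense, the same calls can be made with curl - a minimal sketch, assuming you save just the JSON object above (without the DELETE/PUT lines) to a file called kafkaconnect-template.json:

$ curl -X "DELETE" "http://localhost:9200/_template/kafkaconnect/"
$ curl -X "PUT" "http://localhost:9200/_template/kafkaconnect/" -d @kafkaconnect-template.json

A GET against the same endpoint (add ?pretty for readable output) will show you what's currently stored.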

Now we'll delete the index itself so that it can be recreated and pick up the dynamic mappings. Here I'm using curl but you can run this in Sense too if you want.

$ curl -X "DELETE" "http://localhost:9200/soe.logon"
{"acknowledged":true}

Watch out here, because Elasticsearch will delete an index before you can say 'oh sh....' -- there is no "Are you sure you want to drop this index?" type interaction. You can even wildcard the above REST request for real destruction and mayhem - action.destructive_requires_name can be set to limit this risk.
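
If you want that safety net, set action.destructive_requires_name: true in elasticsearch.yml and restart, or - assuming your Elasticsearch version accepts it as a dynamic setting - apply it via the cluster settings API:

$ curl -X "PUT" "http://localhost:9200/_cluster/settings" -d '{"persistent": {"action.destructive_requires_name": true}}'

With that in place, wildcard and _all deletes are rejected and you have to name the index explicitly.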

So, to recap - we've successfully run Kafka Connect to load data from a Kafka topic into an Elasticsearch index. We've taken that index and seen that the field mappings aren't great for timestamp fields, so have defined a dynamic template in Elasticsearch so that new indices created will set any column ending _ts to a timestamp. Finally, we deleted the existing index so that we can use the new template from now on.

Let's test out the new index mapping. Since we deleted the index that had our data in (albeit test data) we can take advantage of the awesomeness that is Kafka by simply replaying the topic from the start. To do this change the name value in the Elasticsearch connector configuration (elasticsearch-kafka-connect.properties), e.g. add a number to its suffix:

name=elasticsearch-sink-01

If you're running Kafka Connect in standalone mode then you could also just delete the offsets file to achieve the same.
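
As a sketch of that approach - assuming the stock connect-standalone.properties, where offset.storage.file.filename points at /tmp/connect.offsets (check yours first) - stop the connector and then:

$ grep offset.storage.file.filename /etc/kafka/connect-standalone.properties
$ rm -f /tmp/connect.offsets

Bear in mind this resets the stored offsets for every connector run by that standalone worker config, not just the Elasticsearch sink.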

Whilst you're in the configuration file you also need to add another entry, telling the connector to ignore the schema that is passed from Kafka and instead dynamically infer the types (as well as honour the dynamic mappings that we specified):

topic.schema.ignore=ORCL.SOE.LOGON

Now restart the connector (make sure you did delete the Elasticsearch index per above, otherwise you'll see no difference in the mappings)

export CLASSPATH=/opt/kafka-connect-elasticsearch/*
export JMX_PORT=4243
/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties

And go check out Elasticsearch, first the mapping:

$ curl -X "GET" "http://localhost:9200/soe.logon/_mapping?pretty"

    {
      "soe.logon" : {
        "mappings" : {
            [...]
              "LOGON_ID" : {
                "type" : "double"
              },
              "current_ts" : {
                "type" : "date",
                "format" : "YYYY-MM-dd HH:mm:ss.SSSSSS"
              },
              "op_ts" : {
                "type" : "date",
                "format" : "YYYY-MM-dd HH:mm:ss.SSSSSS"
            [...]

Note that the two timestamp columns are now date type. If you still see them as strings, make sure you've set the topic.schema.ignore configuration as shown above in the Kafka Connect properties for the Elasticsearch connector.

Looking at the row count, we can see that all the records from the topic have been successfully replayed from Kafka and loaded into Elasticsearch. This ability to replay data on demand whilst developing and testing the ingest into a subsequent pipeline is a massive benefit of using Kafka!

$ curl -s -X "GET" "http://localhost:9200/soe.logon/_count?pretty=true"|jq '.count'
16

Over in Kibana head to the Index Patterns setting page (http://<server>:5601/app/kibana#/settings/indices), or from the Settings -> Indices menu buttons at the top. If you already have the index defined here then delete it - we want Kibana to pick up the new shiny version we've created because it includes the timestamp columns. Now configure a new index pattern:

Note that the Time-field name field is now populated. I've selected op_ts. Click on Create and then go to the Discover page (from the option at the top of the page). You may well see "No results found" - if so use the button in the top-right of the page to change the time window to broaden it to include the time at which you inserted record(s) to the SOE.LOGON table in the testing above.

To explore the data further you can click on the add button that you get when hovering over each of the fields on the left of the page, which will add them as columns to the main table, replacing the default _source (which shows all fields):

In this example you can see that quite a few test records were inserted (op_type = I), with nothing changing between them other than the LOGON_DATE.

Connector errors after adding dynamic templates

Note that if you get an error like this when running the connector:

[pool-2-thread-1] ERROR org.apache.kafka.connect.runtime.WorkerSinkTask - Task elasticsearch-sink-02-0 threw an uncaught and unrecoverable exception
org.apache.kafka.connect.errors.ConnectException: Cannot create mapping:{"kafka-connect":{"properties":{"[...]
at io.confluent.connect.elasticsearch.Mapping.createMapping(Mapping.java:65)

then check the Elasticsearch log/stdout, where you'll find more details. The kind of thing that can cause this is an index that wasn't deleted before re-running with the new template, or a date format in the template that doesn't match the data.
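
A couple of quick checks can narrow it down - for example, using the template and index names from this article:

$ curl -s "http://localhost:9200/_template/kafkaconnect?pretty"
$ curl -s "http://localhost:9200/soe.logon/_mapping?pretty"

If the index still has the old string-based mapping, delete it (as shown earlier) and restart the connector so that it gets recreated under the new template.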

Running a Full Swingbench Test Configuration

If you've made it this far, congratulations! Now we're going to set up the necessary configuration to run Swingbench. This will generate a stream of changes to multiple tables, enabling us to get a feel for how the pipeline behaves in 'real world' conditions.

To start with, let's get a list of all the tables involved:

$ rlwrap sqlplus soe/soe@orcl

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 6 11:45:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Fri Sep 02 2016 15:49:03 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
WAREHOUSES
PRODUCT_INFORMATION
PRODUCT_DESCRIPTIONS
ORDER_ITEMS
ORDERS
ORDERENTRY_METADATA
LOGON
INVENTORIES
CUSTOMERS
CARD_DETAILS
ADDRESSES

11 rows selected.

SQL>

The OGG replication is already defined with a wildcard, to pick up all tables in the SOE schema:

[oracle@bigdatalite config]$ cat /u01/ogg/dirprm/ext1.prm
EXTRACT EXT1
USERID SYSTEM, PASSWORD welcome1
EXTTRAIL ./dirdat/lt
SOURCECATALOG ORCL
TABLE SOE.*;

[oracle@bigdatalite config]$ cat /u01/ogg-bd/dirprm/rconf.prm
REPLICAT rconf
TARGETDB LIBFILE libggjava.so SET property=dirprm/conf.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 1000
MAP *.*.*, TARGET *.*.*;

The OGG Kafka Connect handler will automatically create a topic for every table that it receives from OGG. So all we need to do now is add each table to the Elasticsearch Sink configuration. For this, I created a second version of the configuration file, at /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect-full.properties

name=elasticsearch-sink-full-00
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
connection.url=http://localhost:9200
type.name=kafka-connect
topics=ORCL.SOE.WAREHOUSES ,ORCL.SOE.PRODUCT_INFORMATION ,ORCL.SOE.PRODUCT_DESCRIPTIONS ,ORCL.SOE.ORDER_ITEMS ,ORCL.SOE.ORDERS ,ORCL.SOE.ORDERENTRY_METADATA ,ORCL.SOE.LOGON ,ORCL.SOE.INVENTORIES ,ORCL.SOE.CUSTOMERS ,ORCL.SOE.CARD_DETAILS ,ORCL.SOE.ADDRESSES
topic.schema.ignore=ORCL.SOE.WAREHOUSES ,ORCL.SOE.PRODUCT_INFORMATION ,ORCL.SOE.PRODUCT_DESCRIPTIONS ,ORCL.SOE.ORDER_ITEMS ,ORCL.SOE.ORDERS ,ORCL.SOE.ORDERENTRY_METADATA ,ORCL.SOE.LOGON ,ORCL.SOE.INVENTORIES ,ORCL.SOE.CUSTOMERS ,ORCL.SOE.CARD_DETAILS ,ORCL.SOE.ADDRESSES
topic.key.ignore=ORCL.SOE.WAREHOUSES ,ORCL.SOE.PRODUCT_INFORMATION ,ORCL.SOE.PRODUCT_DESCRIPTIONS ,ORCL.SOE.ORDER_ITEMS ,ORCL.SOE.ORDERS ,ORCL.SOE.ORDERENTRY_METADATA ,ORCL.SOE.LOGON ,ORCL.SOE.INVENTORIES ,ORCL.SOE.CUSTOMERS ,ORCL.SOE.CARD_DETAILS ,ORCL.SOE.ADDRESSES
topic.index.map=ORCL.SOE.WAREHOUSES:soe.warehouses ,ORCL.SOE.PRODUCT_INFORMATION:soe.product_information ,ORCL.SOE.PRODUCT_DESCRIPTIONS:soe.product_descriptions ,ORCL.SOE.ORDER_ITEMS:soe.order_items ,ORCL.SOE.ORDERS:soe.orders ,ORCL.SOE.ORDERENTRY_METADATA:soe.orderentry_metadata ,ORCL.SOE.LOGON:soe.logon ,ORCL.SOE.INVENTORIES:soe.inventories ,ORCL.SOE.CUSTOMERS:soe.customers ,ORCL.SOE.CARD_DETAILS:soe.card_details ,ORCL.SOE.ADDRESSES:soe.addresses
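
Before starting the connector it's worth sanity-checking that the topics listed above actually exist on the broker - a quick check, assuming the same ZooKeeper address used earlier:

$ kafka-topics --zookeeper localhost:2181 --list

Since the OGG handler only creates a topic once it receives a change for that table, don't worry if the list is shorter than the configuration to begin with.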

Having created the configuration, run the connector. If the previous connector from the earlier testing is running then stop it first, otherwise you'll get a port clash (and be double-processing the ORCL.SOE.LOGON topic).

/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect-full.properties
Running Swingbench

I'm using charbench which is a commandline interface for Swingbench:

$ /opt/swingbench/bin/charbench -cs localhost:1521/orcl -u soe -p soe -v trans,users
Author  :        Dominic Giles
Version :        2.5.0.971

Results will be written to results.xml.
Hit Return to Terminate Run...

Time            NCR     UCD     BP      OP      PO      BO      SQ      WQ      WA      Users
06:59:14        0       0       0       0       0       0       0       0       0       [0/1]
06:59:15        0       0       0       0       0       0       0       0       0       [0/1]
06:59:16        0       0       0       0       0       0       0       0       0       [0/1]
06:59:17        0       0       0       0       0       0       0       0       0       [0/1]
06:59:18        0       0       0       0       0       0       0       0       0       [0/1]
06:59:19        0       0       0       0       0       0       0       0       0       [0/1]
06:59:20        0       0       0       0       0       0       0       0       0       [0/1]
06:59:21        0       0       0       0       0       0       0       0       0       [0/1]
06:59:22        0       0       0       0       0       0       0       0       0       [1/1]
06:59:23        2       0       2       0       0       0       0       0       0       [1/1]
06:59:24        3       0       4       5       0       0       0       0       0       [1/1]

Each of the columns with abbreviated headings represents a different transaction type, and as soon as you see numbers above zero in them it indicates that you should be getting data in the Oracle tables, and thus through into Kafka and Elasticsearch.

Auditing the Pipeline

Let's see how many records are on the ORDERS table:

$ rlwrap sqlplus soe/soe@orcl

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 6 12:21:13 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Sep 06 2016 12:21:09 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from orders;

COUNT(*)
----------
143001

But, this includes the records that were pre-seeded by Swingbench before we set up the OGG extract. How do we know how many have been read by GoldenGate since, and should therefore be downstream on Kafka, and Elasticsearch? Enter logdump. This is a GoldenGate tool that gives a commandline interface to analysing the OGG trail file itself. You can read more about it here, here, and here.

First, determine the trail file name:

$ ls -l /u01/ogg/dirdat/*
-rw-r-----. 1 oracle oinstall 64068 Sep  9 10:16 /u01/ogg/dirdat/lt000000015

And then launch logdump (optionally, but preferably, with rlwrap to give command history and search):

$ cd /u01/ogg/
$ rlwrap ./logdump

From the Logdump > prompt, open the trail file:

Logdump 1 >OPEN /u01/ogg/dirdat/lt000000015
Current LogTrail is /u01/ogg/dirdat/lt000000015

and then filter to only show records relating to the table we're interested in:

Logdump 2 >FILTER FILENAME ORCL.SOE.ORDERS

and then give a summary of the records present:

Logdump 3 >COUNT
LogTrail /u01/ogg/dirdat/lt000000015 has 46 records
Total Data Bytes              14056
  Avg Bytes/Record              305
Insert                           22
Update                           23
Metadata Records                  1
After Images                     45
Filtering matched           46 records
          suppressed       208 records

Here we can see that there are a total of 45 insert/update records that have been captured.

Let's check the replicat's trail file also matches:

$ ls -l /u01/ogg-bd/dirdat/*
-rw-r-----. 1 oracle oinstall 64416 Sep  9 10:16 /u01/ogg-bd/dirdat/rt000000000

$ cd /u01/ogg-bd
$ rlwrap ./logdump

Oracle GoldenGate Log File Dump Utility
Version 12.2.0.1.160419 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160430.1401

Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.



Logdump 1 >OPEN /u01/ogg-bd/dirdat/rt000000000
Current LogTrail is /u01/ogg-bd/dirdat/rt000000000
Logdump 2 >FILTER FILENAME ORCL.SOE.ORDERS
Logdump 3 >COUNT
LogTrail /u01/ogg-bd/dirdat/rt000000000 has 46 records
Total Data Bytes              14056
  Avg Bytes/Record              305
Insert                           22
Update                           23
Metadata Records                  1
After Images                     45
Filtering matched           46 records
          suppressed       213 records

Average of 3 Transactions
Bytes/Trans .....       5421
Records/Trans ...         15
Files/Trans .....          4

Looks good - a total of 45 records again.

So from OGG, the data flows via the OGG Kafka Connect handler into a Kafka topic, one per table. We can count how many messages there are on the corresponding Kafka topic by running a console consumer, redirecting the messages to file (and using & to return control to the console):

$ kafka-console-consumer --zookeeper localhost:2181 --topic ORCL.SOE.ORDERS --from-beginning > /tmp/kafka_orcl.soe.orders &

and then issue a wc to count the number of lines currently in the resulting file:

$  wc -l /tmp/kafka_orcl.soe.orders
45 /tmp/kafka_orcl.soe.orders

Since the console consumer process is still running in the background (type fg to bring it back to the foreground if you want to cancel it), you can re-issue the wc as required to see the current count of messages on the topic.

Finally, to see the number of documents on the corresponding Elasticsearch index:

$ curl -s -X "GET" "http://localhost:9200/soe.orders/_count?pretty=true"|jq '.count'
45

Here we've proved that the records written by Oracle are making it all the way through our pipeline.

Monitoring the Pipeline

Kafka and Kafka Connect expose metrics through JMX. There's a variety of tools for capturing, persisting, and visualising this, such as detailed here. For now, we'll just use JConsole to inspect the metrics and get an idea of what's available.

You'll need a GUI for JConsole, so either a desktop session on the server itself, X11 forwarding, or you can run JConsole from a local machine (it's bundled with any JDK) and connect to the remote JMX port. In this example I simply connected to the VM's desktop and ran JConsole locally there. You launch it from the shell prompt:

$ jconsole

From here I connected to the 'Remote Process' on localhost:4242 to access the Kafka server process (because it's running as root the jconsole process (running as a non-root user) can't connect to it as a 'Local Process'). The port 4242 is what I specified as an environment variable as part of the kafka process launch.
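
For reference, that JMX port is set the same way as for the Kafka Connect worker earlier - a sketch, assuming you're starting the broker by hand from the Confluent packages:

$ export JMX_PORT=4242
$ /usr/bin/kafka-server-start /etc/kafka/server.properties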

On the MBeans tab there is a list of MBeans under which the bespoke application metrics (as opposed to JVM ones such as heap memory usage) are found. For example, the rate at which data is being received and sent by the cluster:
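
If you want to sample the same kind of MBeans without a GUI, Kafka also ships a simple command-line JMX reader - a sketch only, since MBean names vary between Kafka versions:

$ kafka-run-class kafka.tools.JmxTool \
    --jmx-url service:jmx:rmi:///jndi/rmi://localhost:4242/jmxrmi \
    --object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec'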

By default, when you see an attribute for an MBean it is point-in-time - double-click on it to make it a chart that then tracks subsequent changes to the value.

By connecting to localhost:4243 (press Ctrl-N for a new connection in the same JConsole instance) you can inspect the metrics from the Kafka Connect Elasticsearch sink.

You can also access JMX metrics for the OGG Kafka handler by connecting to the local process (assuming you're running JConsole locally). To find the PID for the RCONF replicat, run:

$ pgrep -f RCONF

Then select that PID from the JConsole connection list - note that the process name may show as blank.
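
Since JConsole accepts a PID as an argument, you can combine the two - a small shortcut, assuming you're running JConsole on the VM's desktop as the same OS user as the replicat:

$ jconsole $(pgrep -f RCONF)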

The producer stats show metrics such as the rate at which the topic is being written to:

Conclusion

In this article we've seen how to stream transactions from an RDBMS such as Oracle into Kafka and out to a target such as Elasticsearch, utilising the Kafka Connect platform and its standardised connector framework. We also saw how to validate and audit the pipeline at various touchpoints, and took a quick look at the JMX metrics that Kafka provides.

This article was also posted on the Confluent blog, head over there for more great Kafka-related content!

Categories: BI & Warehousing

OTN Appreciation Day: - the day after - BPEL, SOASuite and SCA (in that order...)

Darwin IT - Wed, 2016-10-12 07:21
Unfortunately I noticed this nice initiative only yesterday: OTN Appreciation Day. I did not have a chance to cook something up, but I do like to add some mustard after the meal, as we say in Dutch.

In the titles of the 'OTN Appreciation Day'-blogs I miss BPEL. In 2004, when Oracle acquired Collaxa, I worked at Oracle Consulting in the Netherlands. I worked with Oracle Workflow and Interconnect. Oracle wasn't really into SOA yet. But with BPEL PM they acquired the tool that stood at the base of SOA Suite, together with Webservices Manager and what we now know as the Mediator. And it changed my professional life, really. It's extremely powerful, especially with the added JCA Adapters, the XSLT mapper, and since 11g the SCA architecture, which enables you to assemble composite applications with Adapters, Business Rules, Human Workflow, Mediator, Spring Components and of course ... BPEL 2.0.

Sorry, Tim, for being late.

Is it possible to find out the problem SQL in a procedure which was executed days ago

Tom Kyte - Wed, 2016-10-12 06:26
Hi Team, Our system was written by plenty a lot of procedures, and one of them did not performance well. This procedure includes tens of SQL statments, suppose that there's only 1 or 2 SQL statements in the procedure caused this problem. I mean, th...
Categories: DBA Blogs

dbca

Tom Kyte - Wed, 2016-10-12 06:26
CLSN00107: CRS-5017: The resource action "ora.dbfor.db start" encountered the following error: ORA-12546: TNS: permission denied. For more details, refer to "(:CLSN00107:)" in "/exec/products/oracle/grid/log/servfor/agent/ohasd/oraagent_...
Categories: DBA Blogs

Why was my job not running?

Tom Kyte - Wed, 2016-10-12 06:26
Dear all, I have tried to create a job (wizard) in Oracle SQL Developer 4.0 as follows: ----------SUMMARY------------------ Job Name - TEST Enabled - true Description - Job Class - null Type of Job - PL/SQL Block When to Execute Job - IM...
Categories: DBA Blogs

Documentum story – dm_LogPurge and dfc.date_format

Yann Neuhaus - Wed, 2016-10-12 05:05

What is the relation between dfc.date_format and dm_LogPurge? This is the question we had to answer when we hit an issue with the dm_LogPurge job.
As usual, once a repository has been created we configure several Documentum jobs for the housekeeping.
One of them is the dm_LogPurge. It is configured to run once a day with a cutoff_days of 90 days.
All ran fine until we made another change.
At the request of an application team we had to change the dfc.date_format to dfc.date_format=dd/MMM/yyyy HH:mm:ss to allow the D2 clients to show months as letters rather than digits.
This change fulfilled the application requirement, but from that day on the dm_LogPurge job started to remove too many log files (not to say ALL). :(

So let's explain how we proceeded to find out the reason for the issue and, more importantly, the solution to avoid it.
We were informed not by seeing that too many files had been removed, but by checking the repository log file. BTW, this file is checked automatically using Nagios with our own dbi scripts. So in the repository log file we had errors like:

2016-04-11T20:30:41.453453      16395[16395]    01xxxxxx80028223        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213d2 failed "
2016-04-11T20:30:41.453504      16395[16395]    01xxxxxx80028223        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content of StateOfDocbase sysobject. "
2016-04-11T20:26:10.157989      14679[14679]    01xxxxxx80028220        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213c7 failed "
2016-04-11T20:26:10.158059      14679[14679]    01xxxxxx80028220        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content

 

Based on the timestamp, I saw that the issue could be related to the dm_LogPurge job. So I checked the job log file as well as the folders which are cleaned out. In the folder, all old log files had been removed:

[dmadmin@content_server_01 log]$ date
Wed Apr 13 06:28:35 UTC 2016
[dmadmin@content_server_01 log]$ pwd
$DOCUMENTUM/dba/log
[dmadmin@content_server_01 log]$ ls -ltr REPO1*
lrwxrwxrwx. 1 dmadmin dmadmin      34 Oct 22 09:14 REPO1 -> $DOCUMENTUM/dba/log/<hex docbaseID>/
-rw-rw-rw-. 1 dmadmin dmadmin 8540926 Apr 13 06:28 REPO1.log

 

To have more information, I set the trace level of the dm_LogPurge job to 10 and analyzed the trace file.
In the trace file we had:

[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,sessionconfig,r_date_format ") ==> "31/1212/1995 24:00:00 "
[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,08xxxxxx80000362,method_arguments[ 1] ") ==> "-cutoff_days 90 "

 

So why did we have 31/1212/1995?

Using the API, I confirmed an issue related to the date format:

API> get,c,sessionconfig,r_date_format
...
31/1212/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
14/Apr/2016 08:36:52

(1 row affected)

 

Date format? As all our changes are documented, I easily found that we had changed the dfc.date_format for the D2 application.
By cross-checking with another installation, used by another application where we did not change the dfc.date_format, I could confirm that the issue was related to this dfc parameter change.

Without dfc.date_format in dfc.properties:

API> get,c,sessionconfig,r_date_format
...
12/31/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
4/14/2016 08:56:13

(1 row affected)

 

Just to be sure that I did not miss something, I also checked that not all log files were removed after starting the job manually. They were still there.
Now the solution would be to roll back the dfc.date_format change, but this would only help the platform and not the application team. As the initial dfc.date_format change was validated by EMC, we had to find a solution for both teams.

After investigating we found the final solution:
Add dfc.date_format=dd/MMM/yyyy HH:mm:ss in the dfc.properties file of the ServerApps (i.e. directly in the JMS!)

With this solution the dm_LogPurge job no longer removes too many files, and the Application Team can still use months written as letters in its D2 applications.

This article, Documentum story – dm_LogPurge and dfc.date_format, appeared first on the dbi services blog.

Database Crash : ORA-27300: OS system dependent operation:semctl failed with status: 22 (RedHat 7)

Online Apps DBA - Wed, 2016-10-12 04:38

Database Installation and Operation Fails if RemoveIPC=yes Is Configured for systemd: if RemoveIPC=yes is configured for systemd, interprocess communication (IPC) is terminated for a non-system user's processes when that user logs out. This setting, which is intended for laptops, can cause software problems on server systems. For example, if the user is a database software […]
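
For context, the usual workaround is to disable this behaviour for systemd-logind - a hedged sketch only, so verify against the relevant My Oracle Support / Red Hat documentation for your version before applying it:

# Set RemoveIPC=no in /etc/systemd/logind.conf (add the line if it's only present commented out)
sudo sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=no/' /etc/systemd/logind.conf
# Reload systemd and restart the logind service so the change takes effect
sudo systemctl daemon-reload
sudo systemctl restart systemd-logind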

The post Database Crash : ORA-27300: OS system dependent operation:semctl failed with status: 22 (RedHat 7) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

SQL Developer: Live and Let Live my db-connection

Darwin IT - Wed, 2016-10-12 03:59
At my current customer I have SQL Developer open the whole day, and regularly I come back to it to query my throttle-table to see if my requests have been picked up. But regularly my database connection has been broken because of being idle, probably because of a nasty firewall between my remote development desktop and the database.

Googling on it I found an article by That Jeff Smith on busy connections. That blog is really one to follow. Feed it to your Feedly if you're a recent SQL Developer user.

But in my search I found the 'keepAlive-4' extension for SQL Developer 4. Download the KeepAlive zip, go to Help->Check for Updates in SQLDeveloper and install it via the 'Install From Local File' option. Then look for the Keep Alive icon in the toolbar:
Click on it and enter a check frequency of at least 60 seconds. I try 180 seconds.

You can disable it by clicking it again. Another click and it asks for a new interval again.

Balancing EBS JRE Requirements With Other Java Apps

Steven Chan - Wed, 2016-10-12 02:05

Customers frequently ask me, "What's the best way of handling JRE requirements when I have other Java applications in addition to Oracle E-Business Suite?"

The short answer: you should always try to use the latest JRE release, but before you do that, you need to review your enterprise JRE requirements carefully to ensure that all of your applications are compatible with it.

Why you should always deploy the latest JRE updates

JRE updates include fixes for stability, performance, and security.  The most-important fixes are for security, of course.  Therefore, we strongly recommend that customers stay current with the latest JRE release.  This applies to all Java customers, not just E-Business Suite customers.

New JRE releases are always certified with EBS on day zero

The E-Business Suite is always certified with all new JRE releases on JRE 1.6, 1.7, and 1.8.  We test all EBS releases with beta versions of JRE updates before the latter are released, so there should be no EBS compatibility issues.  Customers can even turn on Auto-Update (which ensures that a new JRE update is automatically applied to a desktop whenever it’s available) with no concerns about EBS compatibility.

Which JRE codeline is recommended for Oracle E-Business Suite?

Oracle E-Business Suite does not have any dependencies on a specific JRE codeline.  All three JRE releases – 1.6, 1.7, 1.8 – work the same way with EBS. 

You should pick whichever JRE codeline works best with the rest of your third-party applications.

Check the compatibility of all third-party Java applications in use

Of course, Oracle cannot make any assurances about compatibility with other third-party products.  I have heard from some customers who have Java-based applications whose certifications lag behind the latest JRE release.

Organizations need to maintain a comprehensive matrix that shows the latest certified JRE releases for all of their Java applications. This matrix should include the E-Business Suite.

Take the "lowest-common denominator" approach

Customers with multiple Java-based applications generally are forced to take a “lowest-common denominator” approach.  Even if Oracle E-Business Suite and the majority of your Java-based applications are compatible with the latest JRE release, any lagging application's incompatibility with the latest JRE release will force you to remain on the earliest version that is common to all of them.

For example:

  • An organization runs Oracle E-Business Suite and four third-party Java-based applications
  • Oracle E-Business Suite is certified with the latest JRE release
  • Three of the third-party applications are certified with the latest JRE release
  • One of the third-party applications is certified only on the previous JRE release
  • The organization is forced to deploy the previous JRE release to their desktops. 

Contact all of your third-party vendors

Organizations whose JRE deployments are held back by a particular application's incompatibility should contact the vendor for that application and ask them to test with Java Early Access downloads or to participate in the Oracle CAP Program.


Categories: APPS Blogs
