
Feed aggregator

storage cloud appliance in the cloud

Pat Shuff - Mon, 2016-05-02 09:46
Last week we focused on getting infrastructure as a service up and running. I wanted to move up the stack and talk about platform as a service but unfortunately, I got distracted with yet another infrastructure problem. We were able to install the storage cloud appliance software in a virtual machine, but how do you install it in a compute cloud instance? This brings up two issues. First, how do you run a Linux 7 - 3.10 kernel in the Oracle Compute Cloud Service? Second, how do you connect to and manage this service, both from an admin perspective and as a client from another compute instance in the cloud service?

Let's tackle the first problem. How do you spin up a Linux 7 - 3.10 kernel in the Oracle Compute Cloud Service? If we look at the compute instance creation screen we can see which images we can boot from.

There is no Linux 7 - 3.10 kernel, so we need to download and import an image that we can boot from. Fortunately, Oracle has published a good tutorial on importing a bootable image. If we follow these steps, we first need to download a CentOS 7 bootable image from cloud.centos.org. The cloud image that we use is CentOS-7-x86_64-OracleCloud.raw.tar.gz. We first download this to a local directory, then upload it to the compute cloud image area. This is done by going to the compute console and clicking on the "Images" tab at the top of the screen.
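If you prefer the command line for the download step, something along these lines should work (the directory layout on cloud.centos.org changes over time, so verify the URL in a browser first):

 # download the Oracle Cloud flavour of the CentOS 7 image to the local directory
 wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-OracleCloud.raw.tar.gz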

We then upload the tar.gz file that is a bootable image. This allows us to create a new storage instance that we can boot from. The upload takes a few minutes and once it is complete we need to associate it with a bootable instance. This is done by clicking on the "Associate Image" button where we basically enter a name to use for the operating system as well as a description.

Note that the OS size is 9 GB, which is really small. We don't have a compute instance at this point; we either need to create a bootable storage element or a compute instance based on this image. We will go through the storage creation first since this is the easiest way of getting started. We first have to change from the Image tab to the Storage tab. We click on Create Storage Volume and go through selection of the image, storage name, and size. We went with the storage size of the image rather than resizing the storage we are creating.

At this point we should be able to create a compute instance based on this boot disk. We can clone the disk, boot from it, or mount it on another instance. We will go through and boot from this disk once it is created. We do this by going to the Instance tab and clicking on Create Instance. It does take 5-10 minutes to create the storage volume, and we need to wait until it is complete before creating a compute instance. An example of a creation looks like this:

We select the default network, the CentOS7 storage that we previously created, the 2016 ssh keys that we uploaded, and review and launch the instance.

After about 15 minutes, we have a compute instance based on our CentOS 7 image. Up to this point, all we have done is create a bootable Linux 7 - 3.10 kernel. Once we have the kernel available we can focus on connecting and installing the cloud storage appliance software. This follows the making backup better blog post. There are a couple of things that are different. First, we connect as the user centos rather than oracle or opc. This is a function of the image that we downloaded and not a function of the compute cloud. Second, we need to create a second user that allows us to log in. When we use the centos user and install the oscsa_install.sh script, we can't log in with our ssh keys for some reason. If we create a new user, then whatever stops us from logging in as the centos user does not stop us from logging in as oracle, for example. The third thing that we need to focus on is creating a tunnel from our local desktop to the cloud instance. This is done with ssh or putty. What we are looking for is routing the management port for the storage appliance. It is easier to create a tunnel than to change the management port and open up the port through the cloud firewall.

From this we execute the commands we described in the making backup better blog. We won't go through the screen shots on this since we have done this already. One thing is missing from the screenshots: you need to disable SELinux by editing /etc/sysconfig/selinux and rebooting. Make sure that you add a second user before rebooting, otherwise you will get locked out and the ssh keys won't work once this change is made.
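A minimal sketch of that change (assuming the stock CentOS 7 image, where /etc/sysconfig/selinux is the usual symlink to /etc/selinux/config):

 # run as root: switch SELinux from enforcing to disabled, then reboot
 sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
 reboot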

The additional steps that we need to do are: create a user, copy the authorized_keys file from an existing user into the new user's .ssh directory, change the ownership, assign a password to the new user, and add the user to /etc/sudoers.

# run as root (or via sudo)
useradd oracle                                # create the new user
mkdir ~oracle/.ssh                            # create its .ssh directory
cp ~centos/.ssh/authorized_keys ~oracle/.ssh  # reuse the existing ssh public keys
chown -R oracle ~oracle                       # give the new user ownership of its home directory
passwd oracle                                 # assign a password to the new user
vi /etc/sudoers                               # add the user to sudoers, e.g. "oracle ALL=(ALL) ALL"
The second major step is to create an ssh tunnel to allow you to connect from your localhost into the compute cloud service. When you create the oscsa instance it starts up a management console on port 32769. To tunnel this port we use putty to connect.
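If you are on a Mac or Linux desktop, the equivalent ssh tunnel looks something like this (the key file and hostname are placeholders for your own values):

 # forward local port 32769 to the appliance management console on the cloud instance
 ssh -i ~/.ssh/2016_rsa -L 32769:localhost:32769 oracle@<compute-instance-public-ip>
 # then point a browser at localhost:32769 to reach the console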

At this point we should be able to spin up other compute instances and mount this file system internally using the command

 mount -t nfs -o vers=4,port=32770 e53479.compute-metcsgse00028.oraclecloud.internal:/ /local_mount_point
We might want to use the internal IP address rather than the external DNS name. In our example this would be the private IP address 10.196.89.62. We should be able to mount this file system and clone other instances to leverage the object storage in the cloud.
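For example, using the private IP address the same mount would look like this:

 mount -t nfs -o vers=4,port=32770 10.196.89.62:/ /local_mount_point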

In summary, we did two things in this blog. First, we uploaded a new operating system that was not part of the list of operating systems presented by default. We selected a CentOS instance that conforms to the requirements of the cloud storage appliance. Second, we configured the cloud storage appliance software on a newly created Linux 7 - 3.10 kernel and created a putty tunnel so that we can manage the directories that we create to share. This gives us the ability to share the object storage as an nfs mount internal to all of our compute servers. It allows for things like spinning up web servers or other static servers all sharing the same home directory or static pages. We can use these same processes and procedures to pull data from the Marketplace and configure more complex installations like JD Edwards, PeopleSoft, or E-Business Suite. We can import a pre-defined image, spin up a compute instance based on that image, and provision higher level functionality onto infrastructure as a service. Up next, platform as a service explained.

Video : Flashback Table

Tim Hall - Mon, 2016-05-02 06:51

Today’s video gives a quick run through of flashback table.
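If you just want a syntax reminder before watching, a minimal example looks something like this (EMP is a hypothetical table, and row movement must be enabled before the table can be flashed back):

 ALTER TABLE emp ENABLE ROW MOVEMENT;

 FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);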

If you prefer to read articles, rather than watch videos, you might be interested in these.

The clip in today’s video comes courtesy of Deiby Gómez, an Oracle ACE Director, OCM 11g & 12c and consultant at Pythian.

Cheers

Tim…


#GoldenGate #Cloud Service (#GGCS) … what to expect?

DBASolved - Mon, 2016-05-02 06:30

As I sit here working on performing some GoldenGate migrations to AWS for a client, I’ve been thinking about the glimpse of GoldenGate Cloud Service (GGCS) that was provided to me earlier this week. That glimpse has helped me understand what GGCS is and how it is going to work within the Oracle Cloud space. Ever since this service was announced back at Oracle Open World 2015, I’ve been wanting to get my hands on this cloud product from Oracle to better understand it. Hopefully, what I’m about to share with you will provide some insight into what to expect.

First, you will need a cloud account. If you do not have a cloud account, visit http://cloud.oracle.com and sign up for one. This will typically be the same account you use to log in to My Oracle Support (MOS).

Once you have an account and are in the cloud interface, subscribe to some services. You will need a Database Cloud Service or a Compute Cloud Service. These services will be the end points for GGCS to point to. As part of setting up the compute node, you will need to set up SSH access with a public/private key. Once you create the GGCS instance, the same public/private key should be used to keep everything simple.

Once GGCS is made available for trial (currently it is only available through the sales team), many of us will have the opportunity to play with it. The following screen captures and comments were taken from the interface I had access to while discussing GGCS with Oracle Product Management.

Like any of the other cloud services from Oracle, once you have access to GGCS it will appear in your dashboard as available cloud services. In the figure below, GGCS is listed at the top of the services that I had access to. You will notice over on the right, there is a link called “service console”.

When you click on the service console link, you are taken to the console that is specific to GGCS. On the left hand side of the console, you will see three primary areas. The “overview” area is the important one; it provides you with all the information needed about your GGCS environment. You will see the host and port number, what version of GGCS you are running and the status of your environment.

With the environment up and running, you will want to create a new GGCS instance. This instance is created under your cloud service console. On this screen you are given information that tells you how many instances you have running, with the number of OCPUs, memory and storage for the configuration, along with the public IP address. Notice the button to the right, just below Public IPs; this is the button that allows you to create a new GGCS instance. In the figure below, the instance has already been created.

Drilling down into the instance, you are taken to a page that illustrates your application nodes for GGCS. Notice that the GGCS instance actually created a compute node VM to run GoldenGate from.

With everything configured from the Oracle Cloud interface, you can now access the cloud server using the details provided (I do not have current screen shots of this). Once you access the cloud server, you will find that Oracle GoldenGate has been configured for you, along with a TNS entry that points to a "target" location. These items are standard template items for you to build your GoldenGate environment from. The interesting thing about this configuration is that Oracle is providing a single virtual machine (compute node) that will handle all the apply processes to a database (compute node).

With the GGCS service running, you are then ready to build out your GoldenGate environment.

Like many other GoldenGate architectures, you build out the source side of the architecture like anything else. You install the GoldenGate software, build an extract, trail files and a data pump. The data pump process is then pointed to the GoldenGate Cloud Service (GGCS) instance instead of the target instance. The local trail files will be shipped to the GGCS machine. Once on the GGCS instance, the replicat would need to be configured. Part of the configuration of the replicat at this point is updating the TNSNames.ora file to point to the correct “target” compute node/database instance. The below picture illustrates this concept.
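To make that flow concrete, a hypothetical data pump parameter file on the source might look something like the following (the group, host, trail and schema names are placeholders, not values taken from the GGCS screens):

-- pump reads the local trail and ships it to the GGCS compute node
EXTRACT pggcs
RMTHOST <ggcs-public-ip>, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE scott.*;

The replicat on the GGCS instance then reads the remote trail and applies it to whichever target database is named in the updated tnsnames.ora entry.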

You will notice that GGCS is set up to be an intermediary point in the cloud. This allows you to be flexible with your GoldenGate architecture in the cloud. From a single GGCS service you can run multiple replicats that can point to multiple different cloud compute nodes, turning your GGCS instance into a hub that can send data to multiple cloud resources.

In talking with the Oracle Product team about GGCS, the only downside to GGCS right now is that it cannot be used for bi-directional setups or for pulling data from the cloud. In essence, this is a uni-directional setup that can help you move from on-premises to cloud with minimal configuration setup needed.

Well, this is my take on GGCS as of right now. Once GGCS trials are available, I’ll try to update this post or add more posts on this topic. Until then, I hope you have gained a bit of information on this topic and are looking forward to using GGCS.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: Cloud, Golden Gate
Categories: DBA Blogs

FBDA -- 7 : Maintaining Partitioned Source Table

Hemant K Chitale - Mon, 2016-05-02 01:46
Taking up the TEST_FBDA_PARTITIONED table,  let's look at a couple of Partition Maintenance operations.

SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE NUM_ROWS
---------------- ------------------------- ----------
P_100 101 100
P_200 201 100
P_300 301 100
P_400 401 100
P_MAX MAXVALUE 301

SQL>


Let's try a TRUNCATE PARTITION

SQL> alter table test_fbda_partitioned truncate partition p_100;

Table truncated.

SQL>


So, that's supported.

Let's try a SPLIT PARTITION

SQL> alter table test_fbda_partitioned       
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /
alter table test_fbda_partitioned
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table


SQL>


So, a SPLIT PARTITION fails.  We need to DISASSOCIATE the Flashback Archive.

SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 DISASSOCIATED


SQL>
SQL> alter table test_fbda_partitioned
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /

Table altered.

SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL>
SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 ENABLED


SQL>


While a Table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be performed, but under strict control to ensure that there is no data divergence.

.
.
.
Categories: DBA Blogs

Partition Storage -- 7 : Revisiting HWM - 2 (again)

Hemant K Chitale - Mon, 2016-05-02 01:19
Revisiting the previous test case, but with a larger AVG_ROW_LEN

SQL> create table part_table_large
(id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/
2 3 4 5 6 7 8 9

Table created.

SQL>
SQL> insert into part_table_large values
(51,rpad('String',75,'X'), rpad('Another',60,'Y'))
2 3
SQL> /

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_large
values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22349

SQL>
SQL>
SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22626

SQL>
SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_LARGE'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/
2 3 4 5 6 7 8
EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024
3 1024
4 1024
5 1024
6 1024
7 1024
8 1024
9 1024
10 1024
11 512
12 1024
13 1024
14 1024
15 1024
16 1024
17 1024
18 1024
19 1024
20 1024
21 1024
22 1024

23 rows selected.

SQL>


Aha! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 to 2,484), with a larger row size the HWM has moved from 22,349 to 22,626.

So, space consumption is a function of both the AVG_ROW_LEN and the manner in which the rows are inserted / relocated.

SQL> l
1 select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
2 from user_tab_partitions
3 where table_name = 'PART_TABLE_LARGE'
4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS BLOCKS
--------------- ----------
22558.6553 22626

SQL>

Also, see how the "Expected Blocks" count seems more accurate than earlier.
.
.
.



Categories: DBA Blogs

LittleCodingKata: Hardware Excuse Generator with gRPC

Paul Gallagher - Mon, 2016-05-02 01:10
gRPC is a very interesting lightweight middleware framework for language-neutral, cross-platform RPC.

When I heard about Natalie Silvanovich's Hardware Excuse Generator on Embedded.fm, I immediately recognised a better "Hello World" for testing out gRPC.

Introducing "The Explainer": my programming exercise to learn basic cross-language request/reply with gRPC.

I haven't completed the whole matrix of client-server possibilities yet, but here's a sampling...

Start up a server (e.g. the Ruby version)
$ ./explainer.rb 
ShiFu is waiting to explain all of your problems...

And then ask it questions. Pick a language!

# Ruby client
$ ./explain.rb "Your phone is crashing because of REASON"
Your phone is crashing because of the
PCB not being manufactured to specification

# C# client
$ mono bin/Release/explain.exe "Your phone is crashing because of REASON"
Your phone is probably crashing because of stray harmonics

# C++ client
$ ./explain "Your phone is crashing because of REASON"
Your phone is crashing because of impedance in the coil

# node.js client
$ node ./explain.js "Your phone is crashing because of REASON"
Your phone is crashing because of a lack of shielding against
alpha radiation (cosmic rays) in antenna

# Python client
$ python explain.py "Your phone is crashing because of REASON"
Your phone is crashing because of residual capacitance
caused by the USB connector

All my notes and code are available in the LittleCodingKata GitHub repository.

Getting a C++11 compiler for Node 4, 5 and 6 on Oracle Linux 6

Christopher Jones - Sun, 2016-05-01 22:36

A newer compiler is needed on Oracle Linux 6 when you want to use add-ons like node-oracledb with Node 4 or later. This is because add-ons for those versions need to be built with a C++11 compatible compiler. The default compiler on OL 6 doesn't have this support. OL 7 does have such a compiler, so these instructions are not needed for that version.

For OL 6 the easiest way to get a new compiler is from the Software Collection Library (SCL). You enable the software collection yum channel, run a yum install command, and then the compiler is immediately available to use. Detailed SCL installation instructions are in the manual.

The steps below show how to install node-oracledb on Oracle Linux 6 for Node.js 4 or later.

Enabling the Software Collection Library

If you are using yum.oracle.com (formerly known as public-yum.oracle.com) then edit /etc/yum.repos.d/public-yum-ol6.repo and enable the ol6_software_collections channel:

  [ol6_software_collections]
  name=Software Collection Library release 1.2 packages for Oracle Linux 6 (x86_64)
  baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/SoftwareCollections12/x86_64/
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
  gpgcheck=1
  enabled=1

If necessary, you can get the latest channel list from http://yum.oracle.com/public-yum-ol6.repo and merge any updates from it into your existing /etc/yum.repos.d/public-yum-ol6.repo file.

Alternatively, if your machine has a ULN support subscription, you can subscribe to the Oracle Software Collections 1.2 for Oracle Linux 6 channel in the Manage Subscription page on linux.oracle.com.

Installing the Updated Compiler

Once the channel is enabled, install the updated compiler with:

  yum install scl-utils devtoolset-3

This will install a number of packages that comprise the complete, updated tool set.

Installing node-oracledb

Installing node-oracledb on Node 4 (or later) is the same as in the install instructions, but using the new compiler. The Oracle Linux manual chapter Using the Software Collection Version of a Command shows various ways to enable the dev toolset.

In summary, to install node-oracledb on Node 4 or later using Oracle Linux 6, first install an Oracle client such as Instant Client. If you have anything except the Instant Client RPM packages, tell the installer where the libraries and header files are located, for example:

  export OCI_LIB_DIR=$HOME/instantclient
  export OCI_INC_DIR=$HOME/instantclient/sdk/include

If you are behind a firewall, set your proxy:

  export http_proxy=http://my.proxy.example.com:80/

In my development environments I often find some cleanup helps:

  which npm && rm -rf $(npm root)/oracledb $(npm root)/nan $HOME/.node-gyp $HOME/.npm \
        && npm cache clean

Now node-oracledb can be installed using the newer compiler:

  scl enable devtoolset-3 -- npm install oracledb
Using Node

Now you can use Node:

  $ node version.js 
  node.js version: v4.4.3
  node-oracledb version: 10900
  node-oracledb text format: 1.9.0
  oracle client library version: 1201000200
  oracle client library text format: 12.1.0.2.0
  oracle database version: 1201000200
  oracle database text format: 12.1.0.2.0

  $ cat /etc/oracle-release 
  oracle linux server release 6.7

Modern CX 2016

Oracle AppsLab - Sun, 2016-05-01 19:31

Last week, we were back in Las Vegas again for the Oracle Modern Customer Experience Conference!  Instead of talking to customers and partners, we had the honor of chatting with UNLV Lee graduate students (@lbsunlv) and getting their feedback on how we envision the future of work, customer experience, marketing and data security.

We started off with Noel (@noelportugal) showing the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, followed by a breakout session for the graduates to experience Glance and Virtual Reality at their leisure.

The event was a hit! The two-hour session flew by quickly. The same group of graduates who came in for the demos at the start of our session left only at the very last minute, when we had to close down.

Experiencing these demos led to some exciting discussions the following day between the UNLV Lee Business School panelists and Rebecca Wettemann (@rebeccawettemann) from Nucleus Research (@NucleusResearch) on the future of work:

  • How will sales, marketing, customer service, and commerce change for the next generation?
  • What does the next generation expect from their employers?
  • Are current employers truly modern and using the latest technology solutions?

Fantastic panel with UNLV students @NucleusResearch @theappslab great UX = more prod work. pic.twitter.com/xg8RoF3cr5

— Erin Killian Evers (@keversca) April 28, 2016

Great thoughts from @unlv milleanials on #smartoffice opportunities in the workforce with @theappslab #SalesX16 pic.twitter.com/5T4CBfJ47q

— Gozel Aamoth (@gozelaamoth) April 28, 2016

While all of this was going on, a few of the developers and I were at the Samsung Developers Conference in SF discussing how we could build a more connected future. More on that in the coming posts!

DISTRIBUTED mode deprecated

Anthony Shorten - Sun, 2016-05-01 17:50

Based upon feedback from partners and customers, the DISTRIBUTED mode used in the batch architecture has been deprecated in Oracle Utilities Application Framework V4.3.x and above. The DISTRIBUTED mode was originally introduced to the batch cluster architecture back in Oracle Utilities Application Framework V2.x and was popular but suffered from a number of restrictions. Given that the flexibility of the batch architecture was expanded in newer releases, it was decided to deprecate the DISTRIBUTED mode to encourage more effective use of the architecture.

It is recommended that customers using this mode migrate to CLUSTERED mode using a few techniques:

  • For customers on non-production environments, it is recommended to use CLUSTERED mode using the single server (ss) template used by the Batch Edit facility. This is a simple cluster that uses CLUSTERED mode without the advanced configurations in a clustered environment. It is restricted to single host servers so it is not typically recommended for production or clustered environments that use more than one host server.
  • For customers on production environments, it is recommended to use CLUSTERED mode with the unicast (wka) template used by the Batch Edit facility. This will allow flexible configuration without the use of multi-cast which can be an issue on some implementations using CLUSTERED mode. The advantage of Batch Edit is that it has a simple interface to allow you to define this configuration without too much fuss. 

The advantage of Batch Edit when building your new batch configurations is that it is simple to use and it generates an optimized set of configuration files that can be used directly by the batch architecture. To use the new architecture, the DISTRIBUTED tags would have to be removed from the job command lines or configuration files.

Customers should read the Batch Best Practices (Doc Id: 836362.1) and the Server Administration Guide shipped with your product for advice on Batch Edit as well as the templates mentioned in this article.

Why I am a Dostoevskyan Humanist

Greg Pavlik - Sun, 2016-05-01 16:11
An explanation in 5 parts, by reference to the works of those who were not.*
'Lo! I show you the Last Man.

"What is love? What is creation? What is longing? What is a star?" -- so asks the Last Man, and blinks.

The earth has become small, and on it hops the Last Man, who makes everything small. His species is ineradicable as the flea; the Last Man lives longest.

"We have discovered happiness" -- say the Last Men, and they blink.

They have left the regions where it is hard to live; for they need warmth. One still loves one's neighbor and rubs against him; for one needs warmth.

Turning ill and being distrustful, they consider sinful: they walk warily. He is a fool who still stumbles over stones or men!

A little poison now and then: that makes for pleasant dreams. And much poison at the end for a pleasant death.

One still works, for work is a pastime. But one is careful lest the pastime should hurt one.

One no longer becomes poor or rich; both are too burdensome. Who still wants to rule? Who still wants to obey? Both are too burdensome.

No shepherd, and one herd! Everyone wants the same; everyone is the same: he who feels differently goes voluntarily into the madhouse.

"Formerly all the world was insane," -- say the subtlest of them, and they blink.

They are clever and know all that has happened: so there is no end to their derision. People still quarrel, but are soon reconciled -- otherwise it upsets their stomachs.

They have their little pleasures for the day, and their little pleasures for the night, but they have a regard for health.

"We have discovered happiness," -- say the Last Men, and they blink.'Friedrich Nietzsche: Thus Spoke Zarathustra


The Body of the Dead Christ in the Tomb, Hans Holbein

Now, did He really break the seal
And rise again?
We dare not say….
Meanwhile, a silence on the cross
As dead as we shall ever be,
Speaks of some total gain or loss,
And you and I are free
Auden, Friday’s Child
“Wherever an altar is found, there is civilization.”
Joseph de Maistre
“All actual life is encounter.”
Martin Buber, I and Thou
* model for composition stolen gratuitously from an online challenge.

Accessing Fusion Data from BI Reports using Java

Angelo Santagata - Sat, 2016-04-30 03:57
Introduction

In a recent article by Richard Williams on A-Team Chronicles, Richard explained how you can execute a BI Publisher report from a SOAP Service and retrieve the report, as XML, as part of the response of the SOAP call. This article serves as a follow-on, providing a tutorial-style walkthrough of how to implement the above procedure in Java.

This article assumes you have already followed the steps in Richard's blog article and created your report in BI Publisher, exposed it as a SOAP Service and tested this using SOAPUI, or another SOAP testing tool.

Following Richard's guidance we know that the correct SOAP call could look like this:

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
   <soap:Header/>
   <soap:Body>
      <pub:runReport>
         <pub:reportRequest>
            <pub:reportAbsolutePath>/~angelo.santagata@oracle.com/Bi report.xdo</pub:reportAbsolutePath>
            <pub:reportRawData xsi:nil="true" >true</pub:reportRawData>
            <pub:sizeOfDataChunkDownload>-1</pub:sizeOfDataChunkDownload>
            <pub:flattenXML>true</pub:flattenXML>
            <pub:byPassCache>true</pub:byPassCache>
         </pub:reportRequest>
         <pub:appParams/>
      </pub:runReport>
   </soap:Body>
</soap:Envelope>

Tip: One easy way to determine the report's location is to run the report and then examine the URL in the browser.

 

Implementing the SOAP call using JDeveloper 11g

We now need to implement the Java SOAP client to call our SOAP Service. For this blog we will use JDeveloper 11g, the IDE recommended for extending Oracle Fusion, however you are free to use your IDE of choice, e.g. NetBeans, Eclipse, VI, Notepad etc; the steps will obviously be different.

Creating the project

Within JDeveloper 11g start by creating a new Application and within this application create two generic projects. Call one project “BISOAPServiceProxy” and the other “FusionReportsIntegration”. The "BISOAPServiceProxy" project will contain a SOAP Proxy we are going to generate from JDeveloper 11g and the "FusionReportsIntegration" project will contain our custom client code. It is good practice to create separate projects so that the SOAP Proxies resides in its own separate project, this allows us to regenerate the proxy from scratch without affecting any other code.

Generating the SOAP Proxy

For this example we will be using the SOAP Proxy wizard as part of JDeveloper. This functionality generates a static proxy for us, which in turn makes it easier to generate the required SOAP call later.

  1. With the BISOAPServiceProxy project selected, start the JDeveloper SOAP Proxy wizard:
    File -> New -> Business Tier -> Web Services -> Web Service Proxy
  2. Click Next.
  3. Skipping the first welcome screen, in step 2 enter JAX-WS Style as the type of SOAP Proxy you wish to generate. In step 3 enter the WSDL of your Fusion Applications BI Publisher web service. It’s best to check that this URL returns a WSDL document in your web browser before entering it here. The WSDL location will normally be something like: http://<your fusion Applications Server>/xmlpserver/services/ExternalReportWSSService?wsdl
    It is recommended that you leave the "copy WSDL into project" check-box selected.
  4. Give a package name; unless you need to, it's recommended to leave the Root Package for generated types blank.
  5. Now hit Finish.
Fixing the project dependencies

We now need to make sure that the “FusionReportsIntegration” is able to see classes generated by the  “BISOAPServiceProxy” proxy. To resolve this in JDeveloper we simply need to setup a dependency between the two projects.

  1. With the FusionReportsIntegration project selected, right-click on the project and select “Project Properties”.
  2. In the properties panel select Dependencies.
  3. Select the little pencil icon and in the resulting dialog select “Build Output”. This selection tells JDeveloper that “this project depends on the successful build output” of the other project.
  4. Save the dialog.
  5. Close [OK] the Project Properties dialog.
  6. Now is a good time to hit compile and make sure the SOAP proxy compiles without any errors; given we haven't written any code yet, it should compile just fine.
Writing the code to execute the SOAP call

With the SOAP Proxy generated, the project dependency setup, we’re now ready to write the code which will call the BI Server using the generated SOAP Proxy

  1. With the FusionReportsIntegration project selected, right mouse click -> New -> Java -> Java Class
  2. Enter a name, and a Java package name, for your class.
  3. Ensure that “Main Method” is selected. This is so we can execute the code from the command line; you will want to change this depending on where you execute your code from, e.g. a library, a servlet etc.
  4. Within the main method you will need to enter the following code snippet. Once this code snippet is pasted you will need to correct and resolve imports for your project.

    1.	ExternalReportWSSService_Service externalReportWSSService_Service;
    2.	// Initialise the SOAP Proxy generated by JDeveloper based on the following WSDL xmlpserver/services/ExternalReportWSSService?wsdl
    3.	externalReportWSSService_Service = new ExternalReportWSSService_Service();
    4.	// Set security Policies to reflect your fusion applications
    5.	SecurityPoliciesFeature securityFeatures = new SecurityPoliciesFeature(new String[]
    6.	{ "oracle/wss_username_token_over_ssl_client_policy" });
    7.	// Initialise the SOAP Endpoint
    8.	ExternalReportWSSService externalReportWSSService = externalReportWSSService_Service.getExternalReportWSSService(securityFeatures);
    9.	// Create a new binding, this example hardcodes the username/password, 
    10.	// the recommended approach is to store the username/password in a CSF keystore
    11.	WSBindingProvider wsbp = (WSBindingProvider)externalReportWSSService;
    12.	Map<String, Object> requestContext = wsbp.getRequestContext();
    13.	//Map to appropriate Fusion user ID, no need to provide password with SAML authentication
    14.	requestContext.put(WSBindingProvider.USERNAME_PROPERTY, "username");
    15.	requestContext.put(WSBindingProvider.PASSWORD_PROPERTY, "password");
    16.	requestContext.put(WSBindingProvider.ENDPOINT_ADDRESS_PROPERTY, "https://yourERPServer:443/xmlpserver/services/ExternalReportWSSService");
    
    17.	// Create a new ReportRequest object using the generated ObjectFactory
    18.	ObjectFactory of = new ObjectFactory();
    19.	ReportRequest reportRequest = of.createReportRequest();
    20.	// reportAbsolutePath contains the path+name of your report
    21.	reportRequest.setReportAbsolutePath("/~angelo.santagata@oracle.com/Bi report.xdo");
    22.	// We want raw data
    23.	reportRequest.setReportRawData("");
    24.	// Get all the data
    25.	reportRequest.setSizeOfDataChunkDownload(-1); 
    26.	// Flatten the XML response
    27.	reportRequest.setFlattenXML(true);
    28.	// ByPass the cache to ensure we get the latest data
    29.	reportRequest.setByPassCache(true);
    30.	// Run the report
    31.	ReportResponse reportResponse = externalReportWSSService.runReport(reportRequest, "");
    32.	// Display the output, note the response is an array of bytes, you can convert this to a String
    33.	// or you can use a DocumentBuilder to put the values into a XLM Document object for further processing
    34.	System.out.println("Content Type="+reportResponse.getReportContentType());
    35.	System.out.println("Data ");
    36.	System.out.println("-------------------------------");
    37.	String data=new String (reportResponse.getReportBytes());
    38.	System.out.println(data);
    39.	System.out.println("-------------------------------");
Going through the code

  • Lines 1-3: This is the instantiation of a new class containing the WebService Proxy object. This was generated for us earlier.
  • Line 5: Initialise a new instance of a security policy object, with the correct security policy, for your Oracle Fusion server. The most common security policy is “oracle/wss_username_token_over_ssl_client_policy", however your server may be set up differently.
  • Line 8: Calls the factory method to initialise a SOAP endpoint with the correct security features set.
  • Lines 9-16: These lines set up the SOAP binding so that it knows which endpoint to execute (i.e. the hostname+URI of your web service, which is not necessarily the endpoint where the SOAP Proxy was generated), the username and the password. In this example we are hard coding the details because we are going to be running this example on the command line. If this code is to be executed on a JEE server, e.g. WebLogic, then we recommend this data is stored in the Credential Store as CSF keys.
  • Lines 17-19: Here we create a reportRequest object and populate it with the appropriate parameters for the SOAP call. Although not mandatory, it's recommended that you use the ObjectFactory generated by the SOAP proxy wizard in JDeveloper.
  • Line 21: This sets the reportAbsolutePath parameter, including the path to the report.
  • Line 23: This line ensures we get the raw data without decoration, layouts etc.
  • Line 25: By default BI Publisher publishes data on a range basis, e.g. 50 rows at a time; for this use case we want all the rows, and setting this to -1 will ensure this.
  • Line 27: Tells the web service to flatten out the XML which is produced.
  • Line 29: This is an optional flag which instructs the BI Server to bypass the cache and go direct to the database.
  • Lines 30-31: These lines execute the SOAP call, passing the reportRequest object we previously populated as a parameter. The return value is a reportResponse object.
  • Lines 34-39: These lines print out the results from the BI Server. Of notable interest is that the XML document is returned as a byte array. In this sample we simply print out the results to the output; however you would normally pass the resulting XML into Java routines to generate an XML Document object for further processing.

 

 

Because we are running this code from the command line as a Java client, we need to import the Fusion Apps certificate into the Java keystore. If you run the code from within JDeveloper then the Java keystore used is <JDeveloperHome>\wlserver_10.3\server\lib\DemoTrust.jks

Importing certificates

 

  1. Download the Fusion Applications SSL certificate: using a browser like Internet Explorer, navigate to the SOAP WSDL URL.
  2. Click on the security icon, which will bring you to the certificate details.
  3. View the certificate.
  4. Export the certificate as a CER file.
  5. From the command line we now need to import the certificate into our keystore file using the following command:
     keytool -import -alias fusionKey -file fusioncert.cer -keystore DemoIdentity.jks


Now ready to run the code!

With the runReport.java file selected press the “Run” button, if all goes well then the code will execute and you should see the XML result of the BI Report displayed on the console.

 

Speakers: Put your Twitter Handle on the Windows taskbar!

The Oracle Instructor - Sat, 2016-04-30 03:56

If you speak often at conferences, sharing your screen to demo things, this could be helpful:

Twitter Handle on the Windows taskbar

Throughout your presentation, the audience will be able to see your Twitter Handle, reminding them to include it with tweets about the event. I used to include it in the slides, but this is better, because it also works with live demonstrations where no slides are being shown. Which is incidentally my favorite way to do presentations :-)

Now how can you do it? Quite easy, you open the Windows Control Panel and click on Region and Language. Then click on Additional settings:

Region and Language 1

Then you insert your Twitter Handle (or any other text you like to see on the taskbar) as the AM and PM symbols. Make sure to select Time formats with trailing tt:

Region and Language 2

That’s it. If you want the font size as large as on the first picture above, that can be done here:

twitterhandel_taskbar4

I did that with Windows 7 Professional 64 bit. Hope you find it useful:-)
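If you prefer to script the change rather than click through the dialogs, the same settings live in the registry under HKCU\Control Panel\International (this is an assumption based on the standard Windows locale values, not something shown in the screenshots; you may need to log off and on again before the taskbar picks it up):

  reg add "HKCU\Control Panel\International" /v s1159 /t REG_SZ /d "@MyHandle" /f
  reg add "HKCU\Control Panel\International" /v s2359 /t REG_SZ /d "@MyHandle" /f
  reg add "HKCU\Control Panel\International" /v sTimeFormat /t REG_SZ /d "hh:mm:ss tt" /f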


Tagged: speaker tip
Categories: DBA Blogs

Loading Data into Oracle Cloud ERP R10 using the new LoadAndImportData operation

Angelo Santagata - Sat, 2016-04-30 03:54

 

Introduction

As part of Oracle ERP cloud release 10 a new SOAP function has been made available to our customers which greatly simplifies the loading of ERP data using the batch oriented SOAP Services.

This article aims to give the reader details of this new SOAP Service and how it helps in loading data files into Oracle ERP Cloud.

Assuming the input file has already been produced, loading the data into the Oracle ERP cloud service is traditionally a multi-step process.

The typical "happy" path is :

  1. Load the file into the Oracle Fusion ERP UCM service
  2. Execute the first ESS job, which transfers the file from UCM to the Oracle ERP interface tables
  3. Using a polling technique, check to see when the ESS job has finished transferring the file into the interface tables
  4. Execute a second ESS job, which transfers the file from the Oracle ERP interface tables to the Oracle ERP data object tables
  5. Use a polling technique to check to see when the file has been processed
  6. Finally, execute a call to the downloadESSJobExecutionDetails() operation to download a log file so you can check for success, or any errors, which need dealing with

Whilst this approach appears attractive, as it allows the developer a great deal of control of the process, in truth this internal processing should be something that the SaaS application [Oracle ERP Cloud] manages itself, providing feedback to the developer when things finish.

New SOAP method in R10

As of Oracle ERP Cloud Release 10 there is a new API called "loadAndImportData", which is held within the ErpIntegrationService ( https://(FinancialDomain,Financial Common)/publicFinancialCommonErpIntegration/ErpIntegrationService?WSDL). This service has been specifically created to simplify the loading of data into the Oracle ERP Cloud service by allowing you to submit a file which is then automatically taken through the various stages of processing within Oracle ERP Cloud, without the user needing to execute each step of the process manually.

The operation takes the following parameters :

  • document (Document Information SDO): List of elements, each containing the details of the file to be uploaded. The details include the file content, file name, content type, file title, author, security group, and account.
  • jobList (Process Details SDO): List of elements, each containing the details of the Enterprise Scheduling Service job to be submitted to import and process the uploaded file. The details include the job definition name, job package name, and list of parameters.
  • interfaceDetails (string): The interface whose data is to be loaded.
  • notificationCode (string): A two-digit number that represents the manner and timing in which a notification is sent.
  • callbackURL (string): The callback URL of the service implemented by customers to receive the Enterprise Scheduling Service job status on completion of the job.

 

Diving into the Details

A sample soap payload, which imports journal records, looks like the following :

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/" xmlns:erp="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:loadAndImportData>
         <typ:document>
            <erp:Content>  UEsDBBQAAAAIAMG2b0hvJGqkiAAAAKsBAAAPAAAAR2xJbnRlcmZhY2UuY3N2tY+xDoJADIZ3E9+hD9BIexojIwQWB0wU49yQqgMcyYnv7wELiYEwQIe2/9+mzZelD2Q0xMeA9gExxlKKLRRyJ/bzVIdXrepmoO+3ZLgfoU9EhGzYtA11ajTCcNePc5UKIuhKLE3xPnnz4r+8FM7111kpW+c/eOp8miXbzXJQB2aeAQU91rpUP1BLAQIUABQAAAAIAMG2b0hvJGqkiAAAAKsBAAAPAAAAAAAAAAAAIAAAAAAAAABHbEludGVyZmFjZS5jc3ZQSwUGAAAAAAEAAQA9AAAAtQAAAAAA</erp:Content>
            <erp:FileName>LoadGLData1.zip</erp:FileName>
            <erp:ContentType>zip</erp:ContentType>
            <erp:DocumentTitle>ImportJournalEntry</erp:DocumentTitle>
            <erp:DocumentAuthor></erp:DocumentAuthor>
            <erp:DocumentSecurityGroup>FAFusionImportExport</erp:DocumentSecurityGroup>
            <erp:DocumentAccount>fin$/journal$/import$</erp:DocumentAccount>
         </typ:document>
         <typ:jobList>
      
           <erp:JobName>oracle/apps/ess/financials/generalLedger/programs/common,JournalImportLauncher</erp:JobName>
           <erp:ParameterList>1061,Balance Transfer,1,123,N,N,N</erp:ParameterList>
         </typ:jobList>
         <typ:interfaceDetails>15</typ:interfaceDetails>
         <typ:notificationCode>50</typ:notificationCode>
         <typ:callbackURL>http://somecallbackserver.domain.com/mycallback</typ:callbackURL>
      </typ:loadAndImportData>
   </soapenv:Body>
</soapenv:Envelope>

Now let's dive into each element and explain what it represents and, more importantly, where you derive the data from:

  • Document : This element contains the details of the document to be uploaded
    • content : This is the document itself, base64 encoded and in-lined in the SOAP payload. There are many tools on the internet to base64-encode a document, and in Java there is a helper, java.util.Base64, which does this for you (see the sketch after this list).
    • contentType : This value should be set to "zip", this means your files must be zipped before base64encoding them and in lining them above
    • documentTitle : A title for the document, this is so you can find it in UCM later if you need to.
    • documentSecurityGroup : Needs to be set to a security group that secures the document, for our example we've used FAFusionImportExport
    • documentAccount : This needs to be set to the correct account depending on the data which is being loaded. For our journal import we need to set the account to fin$/journal$/import$, as shown in the payload above. This is the same account used when you "manually" upload files into Oracle ERP for loading. If you don't know what UCM account your data should be loaded into, you can find it by going into the File Based Data Import for Financials Cloud documentation and searching for your data object. In our case the object is "Journal Import" and the documentation states that the UCM account is fin/journal/import. For our SOAP Service we need to prefix each "/" with a "$" and add a trailing "$".
  • jobList : This element contains data describing the job which needs to be executed for this batch upload
    • jobName : This is the "package name" of the ESS job which loads the data into Oracle ERP Cloud. You can find this in FusionAppsOER or in the documentation. The format for the field is "packageName,jobName"
    • parameterList : This is the list of parameters which the job requires to execute. The parameters depend on the ESS Job being executed; in our case the ESS Job is for journals and the parameters are Data Access Set ID, Source (Balance Transfer), LedgerID, GroupID (aka BatchID) etc.
    • [caption id="attachment_37609" align="alignnone" width="300"]journalimport Example from FusionOER[/caption]
  • interfaceDetails :  This is set to 15 for journals  (no longer needed in R11)
  • notificationCode : This is set to 50 (no longer needed in R11)
    • callBackURL : The magic of this service is that it executes all of the ESS jobs in the background and then executes a callback to your service when it's finished. This callback contains the "last" ESS job ID executed, so you can then query the status of the jobs using the downloadESSJobExecutionDetails method.
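As a quick sketch of producing the Content value above (assuming Java 8's java.util.Base64 and a hypothetical file name; zip your data file first, then encode it):

  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.util.Base64;

  public class EncodeUploadFile {
      public static void main(String[] args) throws Exception {
          // read the zipped data file and base64-encode it for the <erp:Content> element
          byte[] zipBytes = Files.readAllBytes(Paths.get("LoadGLData1.zip"));
          String encoded = Base64.getEncoder().encodeToString(zipBytes);
          System.out.println(encoded);
      }
  }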

 

Handling the callback
  • As mentioned earlier, the loadAndImportData operation does all the heavy lifting, and orchestration, within Oracle Fusion ERP SaaS; the only thing the developer needs to implement (optionally, but very desirable) is a web service endpoint which handles the callback generated by the ESS framework. This service needs to implement the ESS onJobCompletion operation, which will deliver three pieces of data: the requestId of the ESS job which completed, the state of the process, and a status message. For more information on handling the ESS callbacks please see this documentation link, and additionally, if you are using BPEL to execute the SOAP Service, then this documentation link may be of interest (Section 11.7.7 : Receive the Job Completion Status).

 

Conclusion

The new loadAndImportData operation will most certainly make importing data into Oracle ERP a much simpler process; its biggest advantage is that developers will be able to trigger the import with a single SOAP call, without the need to worry about orchestration. There are, however, scenarios where you would probably still use the traditional, step-by-step method, for example when you want to notify external providers that each step has been executed at the macro level, or when the import file size is very large (>100 MB). In the latter case you might want to upload the file into Oracle UCM using UCM's native IdcWebService, which supports MTOM, and then execute the ESS jobs in order as we have traditionally done.

 

 

Links for 2016-04-29 [del.icio.us]

Fuad Arshad - Sat, 2016-04-30 01:00

Leaving Behind the Limits of Binary Thinking for Full Inclusiveness

Pythian Group - Fri, 2016-04-29 14:39


When Pythian became the first tech company in Canada to release gender-based metrics last November, we wanted to make a bold statement with the launch of the Pythia Program. And apparently it worked. We’ve already increased the number of female applicants by more than 10% over just one quarter. Our internal Pythia Index has also risen 3% from 56% to 59%. And just this week, Pythian’s CEO Paul Vallée received the WCT Diversity Champion award in recognition of his leadership and efforts to promote diversity in the workplace, and a more inclusive tech industry that promotes men and women from all backgrounds.

Despite a clear case for gender parity, and research confirming the financial return for companies, full inclusion is still ‘controversial’ to implement. A lot of this has to do with the unconscious associations we still have with male and female roles which are placed in opposition. This kind of binary thinking is rampant, especially in our social constructions of what constitutes masculinity and femininity.

When the Pythia Program was in its early stages, we actually noticed a lot of binary, either/or thinking was shaping our assumptions. Off/On. 0/1. We can do this OR that. We can empower women technical professionals OR talk to employees about unconscious bias. We can take a stand on gender diversity OR maintain good relationships with male colleagues. Wait a minute…why can’t we do both?

If we had continued to believe our choices were that limited, it would have seriously eroded any impetus to act on our values of gender equity and inclusiveness. It was time to reframe our thinking, and that’s when we stopped compromising. A bolder stance emerged when we did away with limited, binary thinking that was trapping us in false dichotomies.

Let’s look at this from a data perspective, because that’s what we love and do best.

Current computer chips store information in electrical circuits as binary bits, either in a state of 0 or 1, so there’s a finite amount of data that can be processed. Quantum bits, or ‘qubits’, however, can be in a state of 0, 1, or both at the same time, giving quantum computers mind-blowing processing power.

So if we apply this idea of ‘binary’ vs. ‘quantum’ into a human context, could we potentially become quantum thinkers? Quantum thinking would be holistic, and enable the mind to function at a greater level of complexity. It’s an unlimited approach that ‘either/or’ binary thinking simply does not permit. Wouldn’t it be more exciting to break away from these limitations and move to a higher, more innovative level? Things look different when this binary thinking is disrupted. Start by replacing either/or with ‘and’.

We can help achieve gender parity AND we can achieve diversity in other important areas. Pythian can be inclusive, people-focused AND financially strong. Men can be powerful leaders AND feminist.

There is one big exception, one area where it’s either/or: whether you support the status quo of tech’s current ‘bro culture’, or inclusive leadership that embraces the value of multiple perspectives. Those two states cannot co-exist.

As he accepted his award for Diversity Champion at the WCT Gala on April 27, Pythian CEO Paul Vallée made his position clear “To the women who are working hard in high tech, and who are marginalized by bro culture — which is a real problem, we are in the midst of a culture war — I salute you and keep fighting the good fight because we will prevail. To the male leaders that have taken sides in this battle, the Pythia Index will help you keep score, whether you’re on my team [fighting to end bro culture] or the opposite team.”

As Einstein said, “you can’t solve problems with the same thinking used to create them.” And lack of gender diversity in the tech industry is a problem Pythian wants to help solve.

Categories: DBA Blogs

Partition Storage -- 6 : Revisiting Partition HWM

Hemant K Chitale - Fri, 2016-04-29 09:42
After the curious finding in my previous blog post, where a Partition's HighWaterMark was noticeably higher than that for a non-Partitioned Table but then shrank on a MOVE operation, I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERT (as I don't want ASSM spreading the incoming rows to different non-contiguous blocks)

This in 12.1.0.2
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12
13 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100' 2 3 4
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/ 2 3 4 5 6 7 8

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, a Row-By-Row Insert still resulted in the HWM being 3,022 blocks, shrinking to 2,484 blocks after a MOVE.



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

2 3 4 5 6 7 8
Table created.

SQL> SQL> show parameter deferr

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12
13 /

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/ 2 3 4 5 6 7 8

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM: an HWM of 3,022 blocks shrinking to 2,484 blocks after the MOVE.

The next test would be with a larger AVG_ROW_LEN.
.
.
.


Categories: DBA Blogs

Server Problem : A Resolution?

Tim Hall - Fri, 2016-04-29 08:09

It’s been a pretty annoying couple of days on the website server front.

The server locking up intermittently is one thing, and for all I know it may be my fault. The incompetence of the hosting company is quite something else.

Just so you are aware why I was doing my nut yesterday: the hosting company had disabled my ability to force a power cycle of my dedicated server while they did a hardware test. They forgot to re-enable it when they finished. I rang to ask them to re-enable it and also to power cycle the server. It took them over 70 minutes to achieve the power cycle, and it was the following day before the interface to allow me to force a power cycle was enabled again. Amateurs!

They offered to give me a free month of hosting, but I refused. Last night I moved the whole thing to Amazon Web Services, so that’s the new home for the website. I finished the build and testing, then flipped the DNS and went to bed, figuring that DNS propagation can take up to 24 hours, so why hang around?

VirtualBox 5.0.20

Tim Hall - Fri, 2016-04-29 07:32

It’s been 10-ish days since VirtualBox 5.0.18 was released and the world is screaming out, “Where is 5.0.20 already? We want to update all our Guest Additions!” Well, it’s here…

Experiments with Elastic’s Graph Tool

Rittman Mead Consulting - Fri, 2016-04-29 07:17

Elastic announced their Graph tool at ElastiCON 2016 (see presentation here). It’s part of the forthcoming X-Pack which bundles Graph along with other helper tools such as Shield and Marvel. Graph itself is two things: an extension of Elasticsearch’s capabilities, enabling the user to explore how items indexed in Elasticsearch are related; and a plugin for Kibana that acts as an optional front-end for this new functionality.

You can find a good introduction to Graph and the purpose and theory behind it in the documentation here. The installation of the components themselves is simple and documented here.

First Graph

To use Graph, you just point it at your existing data in Elasticsearch. The first data set I’m going to explore is one of the standard ones that everyone uses; Twitter. I’m streaming it in through Logstash (via Kafka for flexibility), but if you wanted you could ship it in via JDBC from any RDBMS, or from HDFS too.
See an important note at the end of this article about the slice of data within it, because it affects how the relationships visualised here should be viewed. 

On launching Kibana’s Graph plugin (http://localhost:5601/app/graph) I choose the index (note that index patterns, e.g. when partitioning by date, are not supported yet), and the field in the data that I want to use as my vertices. A point to note here – “vertices” are usually called “nodes” in Graph terminology, but since Elasticsearch already uses “nodes” as part of its infrastructure topology terminology, they had to pick a different term.

In the search box, I can put the search term for which I’m interested in seeing the related ‘vertices’.

Sounds baffling? It is, kinda – right up until you run it (hit enter from the search box or click the magnifying glass search icon) and see what happens:

Here we’re seeing the hashtags used in tweets that mention Kibana. The “connections” (Elastic term) or “edges” (general Graph term) show which vertices (nodes) are related, and the width indicates the strength of that relationship (based on Elasticsearch’s significant terms and scoring algorithm). For more details, see the “Behind the Scenes” section towards the end of this article.

We can add in a second set of vertices by running a second search (“Elasticsearch”) – the results for these are, in effect, appended to the existing ones:

Add Links

Since we’ve pulled back an additional set of vertices, it could be that there’s overlap between these and the first set (you’d kind of expect it, Elasticsearch and Kibana being related). To visualise this, use the Add Links button.

Note how the graph redraws itself with additional connections:

Blinked and you missed it? Use the Undo button to step back, and Redo button to re-apply.

Grouping Vertices

If you look closely at the graph you’ll see that Elasticsearch, ElasticSearch, and elasticsearch are all there as separate vertices. This is because I’m using a non-analyzed index field, so the strings are treated literally, case included. In this specific example, we’d probably re-run the graph using the analysed version of the field, which, following the same two searches as above, gives this:

But, sticking with our non-analysed example, we can use it to demonstrate Graph’s ability to group multiple terms together into a single vertex. Switch to Advanced Mode:

and then select the three vertices and click the group option

Now all three, and their connections, are as one:

Whilst the above analysed/non-analysed difference gave me an excuse to show the group function (can you tell I’ve done many-a-failed-live-demo? ;-) ), I’m now going to switch over to a graph built on the analysed version of the hashtag field, as we saw briefly above:

Tidying up the Graph – Delete and Blacklist

There are a few stragglers on the Graph that are making it less easy to comprehend. We can temporarily remove them, or even blacklist them from appearing again in this session:

Expand Selection

One of the points of Graph analysis is visualising the relationships in your data in a way that standard relational methods may not lend themselves to so easily. We can now start to explore this further, by digging into the Graph that we’ve got so far. This process, along with the Add Links seen above, is often called “spidering”. By selecting the elasticsearch node and clicking on Expand selection we can see additional (by default, five) vertices related to this one:

So we see that kafka is related to Elasticsearch (in the view of the twitterati, at least), and let’s expand that Kafka vertex too:

By clicking the Expand selection button again for the same vertex we get further results added:

We can select one node (e.g. realtime) and, using the Add Link option, see additional relationships:

But there are many nodes, and we want to see all the relationships between them. So, switch to Advanced Mode, select All

…and click Add Link again:

Knob Twiddling

Let’s start with a blank canvas, in basic mode, showing hashtags related to … me (@rmoff)!

But, surely I do more than talk about OBIEE and ODI? Like, Elasticsearch? Let’s relax the Graph selection criteria, under Settings:

and run the search again (on top of the existing results):

There are more results … but I know how much I tweet, and it feels like I’m only seeing part of the picture. By switching over to Advanced Mode, we can refine how many results each field returns:

I reset the workspace (undo to blank, or just reload), and run the search again, this time with a greater number of hashtag field values shown, and with the same relaxed search settings as shown above:

At this point I’m into “fiddling” territory, twiddling with the ‘Number of terms’, ‘Significant’ and ‘Certainty’ knobs to see how the results vary. You can read more about the algorithm behind the Significance setting here, and more about the Graph API here. The certainty setting is simply “The min number of documents that are required as evidence before introducing a related term”, so by lowering it we see more links, but potentially with more “noise” too, of terms that aren’t really related.

An important point to note here is the dataset that I’m using is already biased because of the terms I’m including in my twitter feed search, therefore I’d expect to see this skew in the results below. See the section at the end of this article for more details of the dataset.

  • 50 terms, significant unticked, certainty 1 (as above)
  • 50 terms, significant ticked, certainty 1
  • 50 terms, significant ticked, certainty 3
  • 20 terms, significant ticked, certainty 1
  • 20 terms, significant unticked, certainty 1

Based on the above, “Significant” seems to reduce the number of relationships discovered, but increase the weight of those that remain.
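
For reference, these settings appear to map directly onto fields in the Graph API request body that Kibana sends (the full example is in the “Behind the Scenes” section below). The following is a minimal Python sketch of that mapping, under the assumption that ‘Number of terms’ corresponds to the vertex size, ‘Significant’ to use_significance, and ‘Certainty’ to min_doc_count; the query and field name are borrowed from examples elsewhere in this article, and the values are illustrative.

import json

# A sketch (not taken from the Kibana UI itself) of how the Graph workspace
# settings appear to map onto the Graph API request body that Kibana sends:
#   'Number of terms' -> vertices[].size
#   'Significant'     -> controls.use_significance
#   'Certainty'       -> vertices[].min_doc_count
request_body = {
    "query": {"query_string": {"default_field": "_all", "query": "rmoff"}},
    "controls": {
        "use_significance": False,   # 'Significant' unticked
        "sample_size": 2000,
        "timeout": 5000
    },
    "vertices": [
        {
            "field": "entities.hashtags.text.analyzed",
            "size": 50,              # 'Number of terms' = 50
            "min_doc_count": 1       # 'Certainty' = 1
        }
    ],
    "connections": {
        "vertices": [
            {
                "field": "entities.hashtags.text.analyzed",
                "size": 50,
                "min_doc_count": 1
            }
        ]
    }
}

print(json.dumps(request_body, indent=4))

You can confirm the exact request your own workspace generates via the Last Request option described later in this article.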

Adding Additional Vertex Fields

So we’ve seen a basic overview of how to generate Graphs, expand selections, and add relationships to those additional selections. Let’s look now at how multiple fields can be added to a Graph.

Starting with a blank workspace, I switched to Advanced Mode and added two fields from my twitter data:

  • user.screen_name
  • in_reply_to_screen_name

Note that you can customise the colour and icon of different fields.

Under Options I’ve left Significant Links enabled, and set Certainty to 1.

Let’s see who’s been interacting about the recent E4 summit:

Whilst it looks like Mark Rittman is the centre of everything, this is actually highlighting a skew in the source dataset – which includes everything Mark tweets but not all tweets about E4. See the section at the end of this article for more details of the dataset.

The lower cluster is Mark as the addressee of tweets (i.e. he is the in_reply_to_screen_name), whilst the upper cluster is tweets that Mark has sent addressing others (i.e. he is the user.screen_name).

If we click on Add Links a couple of times we can see that there’s other connections here – for example, Mark replies to Stewart (@stewartbryson), who Christian Berg (@Nephentur) talks to, who in turn talks to Mark.

This being twitter and the age of narcissism, I’ll click on my vertex and click Expand Selection to see the people who in turn talk to me:

And by using Add Link see how they relate to those already shown in the Graph:

Viewing Associated Records

Within Graph there’s the option to view the data associated with one or more vertices. We do this by selecting a vertex and clicking on View Example Docs (in Elasticsearch parlance, a document is akin to a ‘row’ as traditional RDBMS folk would know it). From here select the field – for twitter the text field has the contents of the tweet:

Adding Even more Vertex Fields

So, we’ve got a bit of a picture of who talks to whom, but can we see what they’re talking about? We could use the text field shown above to see the contents of tweets but that’s down in the weeds of individual tweets – we want to step back a notch and get a summarised view.

First I add in the hashtag field:

And then deselect the two username fields. This is so that when I expand existing vertices, instead of showing related hashtags and users, they expand to show only related hashtags.

Now I select Mark as the originator of a tweet, and Expand Selection followed by Add Links on all vertices until I get this:

The number of values selected is key in getting a representative Graph. Above I used a value of 10. Compare that to running the same process with 50 instead. Under Options I’ve left Significant Links enabled, and set Certainty to 1:

One interesting point we can see from this is that the user “itknowingness” in the cluster on the left seems to use all the hashtags, but doesn’t interact with anyone. From the Graph it’s easy to see, and it’s a great example of where Graph gives you the answer to a question you didn’t necessarily know you had, one that would need a very specific query to get out of a traditional RDBMS. Looking at the source data via Kibana’s Discover panel shows that it is indeed a bot auto-retweeting anything and everything:

Building a Graph from Scratch

Now that we’ve seen all the salient functions, let’s start with a blank canvas, and see where we get.
The settings I’m using are:

  • Significant Links unticked
  • Certainty = 1
  • Field entities.hashtags.text.analyzed max terms = 10
  • Field user.screen_name max terms = 10
  • Initial search term rmoff

Then I click on markrittman and Expand Selection, the same for mrainey, and also for the two hashtags e4 and hadoop:

Within the clusters, let’s see what links exist. With no vertices selected I click on Add Links (which seems to be the same as selecting all vertices and doing the same). With each click additional links are added, all related to the hadoop/bigdata area:

I’m interested now in the E4 region of the Graph, and the vertices related to Mark Rittman. Clicking on his vertex and clicking “Select Neighbours” does exactly that:

Now I’m more interested in digging into the terms (hashtags) that are related to those people, so I deselect the user.screen_name field, and then Expand Selection and Add Links again.

Note the width of the connections – a strong relationship between Mark Rittman, “Hadoop” and “SQL”, which is presumably from the tweets around the presentation he did recently on the subject of… SQL on Hadoop. Other terms, including Hive and Impala, are also related, as you’d expect.

Graphing Tweet Text Contents

By making sure that the tweet text is available as an analysed field we can produce a Graph based on the ‘tokens’ within the tweet, rather than the literal 140 characters. Whilst hashtags are there deliberately to help with the classification and grouping of tweets (so that other people can follow conversations on the same subject) there are two reasons why you’d want to look at the tweet text too:

  1. Not everyone uses hashtags
  2. Not all relationships are as boolean as a hashtag or not – maybe a general discussion in an area re-uses the same words which overall forms a relationship between the terms.

Here I’m going back to the default settings:

  • Significant Links ticked
  • Certainty = 3

And returning two fields – hashtag and tweet text

  • Field entities.hashtags.text.analyzed max terms = 20
  • Field text.analyzed max terms = 50
  • Initial search term kafka

I then tidy it up a bit:

  • Joining the same/near-same text and hashtags, such as the “kafkasummit” hashtag and the same text term. If you think about the contents of a tweet, hashtags are part of the text, therefore there’s going to be a lot of this duplication.
  • Blacklisting text terms that are URL snippets. Here I’m using the Example Docs function to check the context of the term in the whole text field.

    I also blacklisted common words (“the”, “of”, etc), and foreign ones (how British…).

Behind the Scenes

The Kibana Graph plugin is just a front-end for the Graph extension in Elasticsearch. It’s useful (and fun!) for exploring data, but in practice you’d be making direct REST API calls into Elasticsearch to retrieve a list of vertices, connections, and relative weights for use in your application. You can see details of this from the Settings page and the Last Request option.

Looking at an example (the one used in the first example on this article), the request is pretty simple:

{
    "query": {
        "query_string": {
            "default_field": "_all",
            "query": "kibana"
        }
    },
    "controls": {
        "use_significance": true,
        "sample_size": 2000,
        "timeout": 5000
    },
    "connections": {
        "vertices": [
            {
                "field": "entities.hashtags.text.analyzed",
                "size": 5,
                "min_doc_count": 3
            }
        ]
    },
    "vertices": [
        {
            "field": "entities.hashtags.text.analyzed",
            "size": 5,
            "min_doc_count": 3
        }
    ]
}

and the response is not too complex either, just long.

{
    "took": 201,
    "timed_out": false,
    "failures": [],
    "vertices": [
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "logstash",
            "weight": 0.1374238061561338,
            "depth": 0
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "timelion",
            "weight": 0.12719678206002483,
            "depth": 0
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "elasticsearch",
            "weight": 0.11733085557405047,
            "depth": 0
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "osdc",
            "weight": 0.00759026383038536,
            "depth": 1
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "letsencrypt",
            "weight": 0.006869972953128271,
            "depth": 1
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "kibana",
            "weight": 0.6699955212823048,
            "depth": 0
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "filebeat",
            "weight": 0.004700657388257993,
            "depth": 1
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "elk",
            "weight": 0.09717015256984456,
            "depth": 0
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "justsayin",
            "weight": 0.005724977460940227,
            "depth": 1
        },
        {
            "field": "entities.hashtags.text.analyzed",
            "term": "elasticsearch5",
            "weight": 0.004700657388257993,
            "depth": 1
        }
    ],
    "connections": [
        {
            "source": 0,
            "target": 3,
            "weight": 0.00759026383038536,
            "doc_count": 26
        },
        {
            "source": 7,
            "target": 5,
            "weight": 0.02004197094823259,
            "doc_count": 26
        },
        {
            "source": 5,
            "target": 4,
            "weight": 0.006869972953128271,
            "doc_count": 6
        },
        {
            "source": 5,
            "target": 0,
            "weight": 0.018289612748107368,
            "doc_count": 48
        },
        {
            "source": 0,
            "target": 6,
            "weight": 0.004700657388257993,
            "doc_count": 11
        },
        {
            "source": 7,
            "target": 0,
            "weight": 0.0038135609650491726,
            "doc_count": 10
        },
        {
            "source": 0,
            "target": 5,
            "weight": 0.0052711254217388415,
            "doc_count": 48
        },
        {
            "source": 0,
            "target": 9,
            "weight": 0.004700657388257993,
            "doc_count": 11
        },
        {
            "source": 5,
            "target": 1,
            "weight": 0.033204869273453314,
            "doc_count": 29
        },
        {
            "source": 1,
            "target": 5,
            "weight": 0.04492364819068228,
            "doc_count": 29
        },
        {
            "source": 5,
            "target": 8,
            "weight": 0.005724977460940227,
            "doc_count": 5
        },
        {
            "source": 2,
            "target": 5,
            "weight": 0.00015519515214322833,
            "doc_count": 80
        },
        {
            "source": 5,
            "target": 7,
            "weight": 0.022734810798933344,
            "doc_count": 26
        },
        {
            "source": 7,
            "target": 2,
            "weight": 0.0006823241440183544,
            "doc_count": 13
        }
    ]
}

Note how the connections are described using the relative (zero-based) instance number of the vertices. You can also see that the width of a connection is based on the weight (calculated from the significant terms algorithm), rather than the document count. Compare the connection width of timelion/kibana (vertices 1 and 5 respectively), with a weighting of 0.033 (kibana -> timelion) and 0.045 (timelion -> kibana) but an overlapping document count of 29:

with elasticsearch -> kibana that has an overlapping document count of 80 but only a weight of 0.0001.
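
If you’re consuming the API directly rather than through Kibana, resolving those zero-based indices back into terms is straightforward. Here’s a minimal Python sketch based on the response structure shown above; the file name is purely illustrative.

import json

# Load a Graph API response like the one shown above
# ('graph_response.json' is an illustrative file name).
with open("graph_response.json") as f:
    response = json.load(f)

vertices = response["vertices"]

# 'source' and 'target' in each connection are zero-based indices into the
# 'vertices' array; the width drawn in Kibana is driven by 'weight',
# not by 'doc_count'.
for conn in response["connections"]:
    src = vertices[conn["source"]]["term"]
    tgt = vertices[conn["target"]]["term"]
    print("{0} -> {1}  weight={2:.4f}  docs={3}".format(
        src, tgt, conn["weight"], conn["doc_count"]))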

Elasticsearch’s documentation describes the significant terms algorithm thus, using the example of suggesting “H5N1” when users search for “bird flu” in text:

In all these cases the terms being selected are not simply the most popular terms in a set. They are the terms that have undergone a significant change in popularity measured between a foreground and background set. If the term “H5N1” only exists in 5 documents in a 10 million document index and yet is found in 4 of the 100 documents that make up a user’s search results that is significant and probably very relevant to their search. 5/10,000,000 vs 4/100 is a big swing in frequency.

So from this, we can roughly say that Graph is looking at the number of documents in which timelion is mentioned as a proportion of the whole dataset, and then at the number of documents in which the hashtag kibana exists and timelion is also mentioned. Since the former is a plugin of the latter, the close relationship would be expected. You can use Kibana to explore the significant terms concept further – for example, taking the same ‘seed’ as the original Graph query above (kibana) gives a similar set of results to the Graph:

More information about the scoring can be found here, which includes the fact that the scoring is, in part, based on TF-IDF (Term Frequency-Inverse Document Frequency).
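
You can also explore the same significant terms behaviour outside of Kibana with a standard significant_terms aggregation against the _search endpoint. Here’s a minimal Python sketch, assuming a local Elasticsearch, an illustrative index name of twitter, and the same hashtag field used elsewhere in this article.

import json
import requests  # third-party HTTP library

# 'twitter' is an illustrative index name; adjust to your own index.
ES_URL = "http://localhost:9200/twitter/_search"

body = {
    "query": {"match": {"entities.hashtags.text.analyzed": "kibana"}},
    "size": 0,  # we only want the aggregation, not the matching documents
    "aggs": {
        "related_hashtags": {
            "significant_terms": {
                "field": "entities.hashtags.text.analyzed",
                "size": 10
            }
        }
    }
}

r = requests.post(ES_URL,
                  data=json.dumps(body),
                  headers={"Content-Type": "application/json"})

# Each bucket is a term that is over-represented in the foreground set
# (documents matching 'kibana') relative to the background (whole index).
for bucket in r.json()["aggregations"]["related_hashtags"]["buckets"]:
    print(bucket["key"], bucket["doc_count"], bucket["score"])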

Licensing

Graph requires a licence – see here for details.

Conclusion

This tool is a great way to dip one’s toe into the waters of Graph analysis and visualisation. It’s another approach to consider in the data discovery phase of your analytics work, when you don’t even know the questions that you’ve got for the data in front of you. Your data can remain in Elasticsearch in the same format it’s always been, and the Graph function just runs on top of it.

I’ll not profess to be a Graph theory expert, so can’t pass much comment on the theoretical rigour of the results and techniques seen. One thing that struck me was that there’s no (apparent) way to manually influence the weight of connections and vertices – for example, based on the number of followers someone has on Twitter, considering them more (or less) relevant when determining relationships.

For a well-informed view on Graph theory and Social Network Analysis (SNA), see Jordan Meyer’s presentation here (and associated R code), as well as Mark Rittman’s presentation from BIWA this year.

Footnote: The Twitter Dataset

The dataset I’m using is a live stream from Twitter, via Logstash and Kafka, searching for a set of terms related to me and the field I work in. Therefore, there’s going to be a bunch of relationships missing (if I’ve not included the relevant term in my tweet search), and relationships over-stated (because as a proportion of all the records the terms I’ve selected will dominate).
An interesting use of Graph (or Elasticsearch’s significant terms aggregation in general) could be to identify all the relevant terms that I should be including in my twitter search, by sampling an ‘unpolluted’ feed for relationships. For example, if I’m interested in capturing Kafka tweets, perhaps I should also be capturing those related to Samza, Spark, and so on.

The post Experiments with Elastic’s Graph Tool appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Turkish Oracle User Group Conference in Istanbul 2016 #TROUGDays

The Oracle Instructor - Fri, 2016-04-29 01:31

Straight after the Oracle University Expert Summit in Berlin – which was a big success, by the way – the circus moved on to another amazing place: Istanbul!

Istanbul view

The Turkish Oracle User Group (TROUG) held its annual conference in the rooms of the Istanbul Technical University, with local and international speakers and quite an attractive agenda.

Do you recognize anyone here? :-)

#TROUGDays speakers

I delivered my presentation “Best of RMAN” again like at the DOAG annual conference last year:

Uwe Hesse speaking in Istanbul

Many thanks to the organizers for making this event possible and for inviting us speakers to dinner.

Istanbul speakers dinner

The conference was well received and, in my view, it should be possible to attract even more attendees in the coming years by continuing to invite high-profile international speakers.

audience

My special thanks to Joze, Yves and Osama for giving me your good company during the conference – even if that company was sometimes very tight during the car rides.

Categories: DBA Blogs