Feed aggregator

Drop all objects in your current schema

Learn DB Concepts with me... - Thu, 2016-03-17 14:49

**********************************************************************
You can use this simple PL/SQL block to drop all objects in your current schema.
**********************************************************************

declare
  VSQL      varchar2(4000);
  OBJ_NAME  varchar2(100);
  OBJ_TYPE  varchar2(100);
  cursor c1 is
    select object_type, object_name
    from user_objects
    where object_type in ('TABLE','VIEW');
begin
  open c1;
  loop
    fetch c1 into OBJ_TYPE, OBJ_NAME;
    exit when c1%NOTFOUND;
    if OBJ_TYPE = 'TABLE' then
      -- cascade constraints so foreign keys from other tables do not block the drop
      VSQL := 'drop '||OBJ_TYPE||' '||OBJ_NAME||' cascade constraints';
    else
      VSQL := 'drop '||OBJ_TYPE||' '||OBJ_NAME;
    end if;
    DBMS_OUTPUT.PUT_LINE(VSQL);   -- run "set serveroutput on" in SQL*Plus to see each statement
    execute immediate VSQL;
  end loop;
  close c1;
end;
/




Note:

Remove the "where object_type in ('TABLE','VIEW')" predicate to drop all objects owned by the current user, or edit it so the cursor returns only the types of objects you want dropped.

The object type list can include any of the following:

'TABLE','VIEW','SYNONYM','SEQUENCE','PROCEDURE','TRIGGER'
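
For example, a variant that widens the cursor to more object types might look like this (a sketch only, not from the original post; test it in a disposable schema first, since the drops are irreversible):

declare
  VSQL      varchar2(4000);
  OBJ_NAME  varchar2(100);
  OBJ_TYPE  varchar2(100);
  cursor c1 is
    select object_type, object_name
    from user_objects
    where object_type in ('TRIGGER','VIEW','SYNONYM','SEQUENCE','PROCEDURE','TABLE')
    -- drop triggers and views before the tables they depend on
    order by decode(object_type,'TRIGGER',1,'VIEW',2,'SYNONYM',3,'SEQUENCE',4,'PROCEDURE',5,'TABLE',6);
begin
  open c1;
  loop
    fetch c1 into OBJ_TYPE, OBJ_NAME;
    exit when c1%NOTFOUND;
    if OBJ_TYPE = 'TABLE' then
      VSQL := 'drop '||OBJ_TYPE||' '||OBJ_NAME||' cascade constraints';
    else
      VSQL := 'drop '||OBJ_TYPE||' '||OBJ_NAME;
    end if;
    DBMS_OUTPUT.PUT_LINE(VSQL);
    execute immediate VSQL;
  end loop;
  close c1;
end;
/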
Categories: DBA Blogs

Collaborate 16: My sessions

Dan Norris - Thu, 2016-03-17 14:06

I’ll be at Collaborate 16 next month and looking forward to seeing lots of good friends, learning some new things, and sharing a little experience too. For the last of those, I’ll present 3 sessions, er, more like 2.2 sessions:

  • Wed, 13-Apr, 12:45-12:55pm: Oak Table World 10-minute Lightning talk “Tools used for security and compliance on Linux” [slides]
  • Wed, 13-Apr, 3-4pm: Oak Table World session “IPv6: What You Need to Know (with Linux & Exadata references)” [slides]
  • Thu, 14-Apr, 12:15-1:15pm (saving the best for the dead last slot in the conference?): “Exadata Database Machine Security” [slides]

I’ve spent a lot of the last year on security projects and related investigations, so you’ll notice that my topics also trend in that direction. Hopefully, the usually boring security stuff will be a little more fun or at least that’s one goal for my sessions.

Additionally, I’m one of the Oracle Liaisons for the Oracle Exadata Special Interest Group (a.k.a. Exadata SIG) and the group will also have a meeting at Collaborate.

  • Wed, 13-Apr, 4:15-5:15pm: Exadata SIG meeting

I’ll be at the conference all week, attending sessions, hopefully talking with some Exadata customers, and trying to learn a few things about topics foreign to me. Hope to see you there!

GoldenGate 12.2 Big Data Adapters: part 2 – FLUME

Pythian Group - Thu, 2016-03-17 12:36

In this blog post we continue our review of the new Oracle GoldenGate Big Data adapters. In the first part of the series I tested the basic HDFS adapter and checked how it worked with some DML and DDL. In this article I will try the Flume adapter and see how it works.

A quick reminder on what Flume is. It is not a topic about the popular Australian musician. Today we are talking about Apache Flume. In short, it is a pipeline, or a kind of streaming system, that allows you to move large amounts of data. It has a simple architecture and, in general, three main components:

a) Source: where data enters into Flume from an outside system.
b) Sink: responsible for passing data to the destination system whether it is the final destination, or another flow.
c) Channel: connects the Source and Sink.

I know that it is a rather simplistic description, but the main subject of this article is not what Flume can do, but how we can pass our data from Oracle to Flume using GoldenGate.
My first post discussed how you set up an Oracle source system, and how to start the GoldenGate initial load and extract; I won't repeat it here. Let's assume we have the source system: an Oracle database replicating DML and DDL for one particular schema, GGTEST, using Oracle GoldenGate 12.2 to write trail files on the box where we already have GoldenGate for Big Data. Have a look at the first part to see how to set up GoldenGate for Big Data (OGG BD).

So, we have our OGG BD setup and the manager up and running.

GGSCI (sandbox.localdomain) 1> info manager

Manager is running (IP port sandbox.localdomain.7839, Process ID 18521).


GGSCI (sandbox.localdomain) 2>

What we need now is to prepare our Flume agent to accept messages from OGG. I've already set up the flume-ng agent service on my Linux box, and now we need to prepare the configuration file for the agent to handle the incoming stream and pass it to the destination system. We will set our source to “avro”, and the sink will be writing to HDFS. The source can be either Avro or Thrift: according to the Oracle documentation, the Flume handler can stream data from a trail file to Avro or Thrift RPC Flume sources.

I have to admit that HDFS as the destination looks quite artificial, since we have a dedicated adapter for HDFS and don't need Flume to write there. But such a configuration can help us compare the different adapters and what they can do.
I used Flume version 1.6.0:

[oracle@sandbox flume-ng]$ bin/flume-ng version
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
[oracle@sandbox flume-ng]$

Here is my configuration file for the Flume agent:

# Name/aliases for the components on this agent
agent.sources = ogg1
agent.sinks = hdfs1
agent.channels = ch1
#Avro source
agent.sources.ogg1.type = avro
agent.sources.ogg1.bind = 0.0.0.0
agent.sources.ogg1.port = 4141

# Describe the sink
agent.sinks.hdfs1.type = hdfs
agent.sinks.hdfs1.hdfs.path = hdfs://sandbox/user/oracle/ggflume
#agent.sinks.hdfs1.type = logger

# Use a channel which buffers events in memory
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 100000
agent.channels.ch1.transactionCapacity = 10000

# Bind the source and sink to the channel
agent.sources.ogg1.channels = ch1
agent.sinks.hdfs1.channel = ch1

I’ve made the configuration simple and clear. You may change agent.sources.ogg1.port and agent.sinks.hdfs1.hdfs.path  depending on your system.

On the target HDFS we have to create the directory that was defined in our sink configuration.

[oracle@sandbox ~]$ hadoop fs -mkdir /user/oracle/ggflume
[oracle@sandbox ~]$ hadoop fs -ls /user/oracle/ggflume
[oracle@sandbox ~]$

We can start our Flume agent now.

[root@sandbox conf]# service flume-ng-agent start
Starting Flume NG agent daemon (flume-ng-agent):           [  OK  ]
[root@sandbox conf]# service flume-ng-agent status
Flume NG agent is running                                  [  OK  ]
[root@sandbox conf]#
[root@sandbox conf]# tail /var/log/flume-ng/flume.log
25 Feb 2016 11:56:37,113 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
25 Feb 2016 11:56:37,121 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: CHANNEL, name: ch1 started
25 Feb 2016 11:56:37,122 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink hdfs1
25 Feb 2016 11:56:37,123 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source ogg1
25 Feb 2016 11:56:37,139 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.source.AvroSource.start:228)  - Starting Avro source ogg1: { bindAddress: 0.0.0.0, port: 4141 }...
25 Feb 2016 11:56:37,146 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: SINK, name: hdfs1: Successfully registered new MBean.
25 Feb 2016 11:56:37,147 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: SINK, name: hdfs1 started
25 Feb 2016 11:56:38,114 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: SOURCE, name: ogg1: Successfully registered new MBean.
25 Feb 2016 11:56:38,115 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: SOURCE, name: ogg1 started
25 Feb 2016 11:56:38,116 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.source.AvroSource.start:253)  - Avro source ogg1 started.
[root@sandbox conf]#

Flume is ready, and we can now prepare our OGG configuration. We have examples for the Flume adapter configuration files in $OGG_HOME/AdapterExamples/big-data/flume/ :

[oracle@sandbox oggbd]$ ll AdapterExamples/big-data/flume/
total 12
-rw-r--r--. 1 oracle oinstall 107 Dec  9 12:56 custom-flume-rpc.properties
-r-xr-xr-x. 1 oracle oinstall 812 Dec  9 12:56 flume.props
-rw-r--r--. 1 oracle oinstall 332 Dec  9 12:56 rflume.prm
[oracle@sandbox oggbd]$

We can copy the examples to our configuration directory and adjust them to our needs:

[oracle@sandbox oggbd]$ cp AdapterExamples/big-data/flume/* dirprm/

Here is the configuration file for our adapter:

[oracle@sandbox oggbd]$ cat dirprm/flume.props

gg.handlerlist = flumehandler
gg.handler.flumehandler.type=flume
gg.handler.flumehandler.RpcClientPropertiesFile=custom-flume-rpc.properties
gg.handler.flumehandler.format=avro_op
gg.handler.flumehandler.mode=tx
#gg.handler.flumehandler.maxGroupSize=100, 1Mb
#gg.handler.flumehandler.minGroupSize=50, 500 Kb
gg.handler.flumehandler.EventMapsTo=tx
gg.handler.flumehandler.PropagateSchema=true
gg.handler.flumehandler.includeTokens=false
gg.handler.flumehandler.format.WrapMessageInGenericAvroMessage=true

goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

gg.classpath=dirprm/:/usr/lib/flume-ng/lib/*:

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

You will need to adjust your gg.classpath depending on your system, as it has to include the Flume Java classes and a file with the Flume source properties (the custom-flume-rpc.properties file).
Here is my example of the custom-flume-rpc.properties file, which will be used by the OGG adapter to connect to the flume-ng agent. I've placed it in the dirprm directory along with the other parameter files.

[oracle@sandbox oggbd]$ cat dirprm/custom-flume-rpc.properties
client.type=default
hosts=h1
hosts.h1=localhost:4141
batch-size=100
connect-timeout=20000
request-timeout=20000

As you can see, my flume-ng agent is on the same host as OGG, which may not be the case for you; you may need to provide the hostname and port of your running flume-ng agent. Next we need to prepare the configuration file for our initial load. The OGG trail file is located in the dirdat/ directory and is named initld.

Here is our parameter file for initial load using passive replicat:

[oracle@sandbox oggbd]$ cat dirprm/irflume.prm
--initial REPLICAT irflume
-- Command to run REPLICAT in passive mode
-- ./replicat paramfile dirprm/irflume.prm reportfile dirrpt/irflume.rpt
SPECIALRUN
END RUNTIME
EXTFILE /u01/oggbd/dirdat/initld
--DDLERROR default discard
DDL include all
TARGETDB LIBFILE libggjava.so SET property=dirprm/flume.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;

Let’s run the load and see what we get in the end:

[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irflume.prm reportfile dirrpt/irflume.rpt

The command completed successfully and we got three new files on HDFS. The first two files had the schema description and the third one had the data for the replicated tables.

[root@sandbox ~]# hadoop fs -ls /user/oracle/ggflume
Found 12 items
-rw-r--r--   1 flume oracle       1833 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634620
-rw-r--r--   1 flume oracle       1762 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634621
-rw-r--r--   1 flume oracle       1106 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634622
[root@sandbox ~]# hadoop fs -tail /user/oracle/ggflume/FlumeData.1457626634620
 {
        "name" : "PK_ID",
        "type" : [ "null", "string" ],
        "default" : null
      }, {
        "name" : "PK_ID_isMissing",
        "type" : "boolean"
      }, {
        "name" : "RND_STR",
        "type" : [ "null", "string" ],
        "default" : null
      }, {
        "name" : "RND_STR_isMissing",
        "type" : "boolean"
     ..................

[root@sandbox ~]# hadoop fs -tail /user/oracle/ggflume/FlumeData.1457626634621
 "string"
  }, {
    "name" : "primary_keys",
    "type" : {
      "type" : "array",
      "items" : "string"
    }
  }, {
    "name" : "tokens",
    "type" : {
      "type" : "map",
      "values" : "string"
    },
...........................
[root@sandbox ~]# hadoop fs -tail /user/oracle/ggflume/FlumeData.1457626634622
:?v??8?????	SaQm?"BDTEST.TEST_TAB_1Ñ??
                                          ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.448000(00000000-10000002012
PK_ID1371O62FX&2014-01-24:19:09:20RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.459000(00000000-10000002155
PK_ID2371O62FX&2014-01-24:19:09:20HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.459001(00000000-10000002298
PK_ID3RXZT5VUN&2013-09-04:23:32:56RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.460000(00000000-10000002441
PK_ID4RXZT5VUN&2013-09-04:23:32:56HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_2?????"BDTEST.TEST_TAB_2I42016-02-16 19:17:40.76289942016-03-10T11:17:14.466000(00000000-10000002926
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52

The initial load has succeeded, and now we can create and start the proper ongoing replication to HDFS through Flume.
Let’s prepare a new parameter file for our permanent Flume replicat and start it up.

GGSCI (sandbox.localdomain) 2> edit param rflume
REPLICAT rflume
-- Trail file for this example is located in "dirdat/" directory
-- Command to add REPLICAT
-- add replicat rflume, exttrail dirdat/or
TARGETDB LIBFILE libggjava.so SET property=dirprm/flume.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
HANDLECOLLISIONS
MAP ggtest.*, TARGET bdtest.*;


GGSCI (sandbox.localdomain) 1> add replicat rflume, exttrail dirdat/or, begin now
REPLICAT added.


GGSCI (sandbox.localdomain) 2> start replicat rflume

Sending START request to MANAGER ...
REPLICAT RFLUME starting

Let’s insert a row and see what we get on the target system.

orclbd> insert into ggtest.test_tab_1
  2  values (7,dbms_random.string('x', 8), sysdate-(7+dbms_random.value(0,1000)),
  3  dbms_random.string('x', 8), sysdate-(6+dbms_random.value(0,1000))) ;

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>

As soon as the commit was executed, we received a couple of new files on HDFS: the first had the schema for the changed table, and the second had the data for the transaction, or “payload”.

[root@sandbox ~]# hadoop fs -ls /user/oracle/ggflume
.................
-rw-r--r--   1 flume oracle       1833 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634620
-rw-r--r--   1 flume oracle       1762 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634621
-rw-r--r--   1 flume oracle       1106 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634622
-rw-r--r--   1 flume oracle       1833 2016-03-10 12:43 /user/oracle/ggflume/FlumeData.1457631817021
-rw-r--r--   1 flume oracle        605 2016-03-10 12:43 /user/oracle/ggflume/FlumeData.1457631817022
[root@sandbox ~]#

[root@sandbox ~]# hadoop fs -cat /user/oracle/ggflume/FlumeData.1457631817021
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable???:]B?9?k??	]kTSa?m??{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
  }, {
    "name" : "op_type",
    "type" : "string"
  }, {
..............................
[root@sandbox ~]# hadoop fs -cat /user/oracle/ggflume/FlumeData.1457631817022
{EQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable~,?`?aHTZRB?Sa?ny
  "type" : "record",
  "name" : "generic_wrapper",
  "namespace" : "oracle.goldengate",
  "fields" : [ {
    "name" : "table_name",
    "type" : "string"
  }, {
    "name" : "schema_hash",
    "type" : "int"
  }, {
    "name" : "payload",
    "type" : "bytes"
  } ]
}Sa?nz?"BDTEST.TEST_TAB_1Ñ??
                            ?"BDTEST.TEST_TAB_1I42016-03-10 17:43:31.00169042016-03-10T12:43:33.464000(00000000080001408270
PK_ID7XYJN3Z31&2014-04-21:09:01:21FL6Z8RPN&2013-08-06:21:40:02

I prepared and executed a small regression test of inserts and updates to the table using JMeter, pushing inserts and updates at a rate of about 29 transactions per second. Even with one Flume channel and my small Hadoop environment, it had a pretty good response time without throwing any errors. Flume put about 900 transactions per HDFS file.

-rw-r--r--   1 flume oracle     123919 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485465
-rw-r--r--   1 flume oracle      35068 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485466
-rw-r--r--   1 flume oracle     145639 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485467
-rw-r--r--   1 flume oracle     178943 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485468
-rw-r--r--   1 flume oracle     103285 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485469
[oracle@sandbox Downloads]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457639485467 | wc -l
804
[oracle@sandbox Downloads]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457639485468 | wc -l
988
[oracle@sandbox Downloads]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457639485469 | wc -l
570
[oracle@sandbox Downloads]$

I’ve also tried the “thrift” source for Flume and it worked well too. To switch from “avro” to “thrift” I changed the value of the agent.sources.ogg1.type parameter in flume.conf and restarted the Flume agent. You also have to change client.type from default to thrift in your custom-flume-rpc.properties file. It worked fine, and I was able to get the information from the trail and write it to HDFS.

[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irflume.prm reportfile dirrpt/irflume.rpt
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume
Found 3 items
-rw-r--r--   1 flume oracle       1833 2016-02-25 16:05 /user/oracle/ggflume/FlumeData.1456434311892
-rw-r--r--   1 flume oracle       1762 2016-02-25 16:05 /user/oracle/ggflume/FlumeData.1456434311893
-rw-r--r--   1 flume oracle       1106 2016-02-25 16:05 /user/oracle/ggflume/FlumeData.1456434311894
[oracle@sandbox oggbd]$
[oracle@sandbox oggbd]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1456434311892
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritableOG?????$?{qS@]?{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
  }, {
.....


[oracle@sandbox oggbd]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1456434311894
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?????t?E?9j??S@??"BDTEST.TEST_TAB_1Ñ??
                                                                                                              ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.480000(00000000-10000002012
PK_ID1371O62FX&2014-01-24:19:09:20RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.498000(00000000-10000002155
PK_ID2371O62FX&2014-01-24:19:09:20HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.498001(00000000-10000002298
PK_ID3RXZT5VUN&2013-09-04:23:32:56RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.499000(00000000-10000002441
PK_ID4RXZT5VUN&2013-09-04:23:32:56HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_2?????"BDTEST.TEST_TAB_2I42016-02-16 19:17:40.76289942016-02-25T16:05:11.505000(00000000-10000002926
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52[oracle@sandbox oggbd]$

You can see from the output that in the FlumeData.1456434311892 file we are getting the schema description, and in the FlumeData.1456434311894 file we have the data from the tables TEST_TAB_1 and TEST_TAB_2.

Let’s try some simple DDL commands.
If we truncate a table:

orclbd> truncate table ggtest.test_tab_1;

Table GGTEST.TEST_TAB_1 truncated.

orclbd>

The truncate is not replicated. If we alter the table, we don't see the DDL as a separate command, but it is reflected in the new schema definition for any new transaction replicated to HDFS: you get a file with the new schema definition, and the transaction itself in the next file.

orclbd> alter table ggtest.test_tab_1 add (new1 varchar2(10));

Table GGTEST.TEST_TAB_1 altered.

orcl> insert into ggtest.test_tab_1
  2  values (7,dbms_random.string('x', 8), sysdate-(7+dbms_random.value(0,1000)),
  3  dbms_random.string('x', 8), sysdate-(6+dbms_random.value(0,1000)),'new_col' );

1 row created.

orcl> commit;

Commit complete.

orcl>
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457117136700
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable????)0???4(SB?Yc{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
........
........
        "name" : "NEW1",
        "type" : [ "null", "string" ],
        "default" : null
      }, {
        "name" : "NEW1_isMissing",
        "type" : "boolean"
........
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457117136701
........
}SB???"BDTEST.TEST_TAB_1?????"BDTEST.TEST_TAB_1I42016-03-04 18:45:30.00131442016-03-04T13:45:34.156000(00000000000000014363
PK_ID7U09D0CTU&2013-08-15:12:53:50W0BSUWLL&2013-08-16:09:28:12new_col

As I’ve mentioned in my previous post, a deeper investigation of the supported DDL changes is going to be the subject of a dedicated blog post. Here we can conclude that the adapter worked as expected, and supported the flow of transactions from our Oracle database down to Flume using Avro and Thrift sources. Of course it is not a production implementation, and serves only as basic functional and elementary regression testing. For a serious production workflow we need to develop an appropriate architecture.

In my next few posts I plan to check the Kafka and HBase adapters and see how they work. Stay tuned!

Categories: DBA Blogs

Accessing Oracle Mobile Cloud Service (MCS) REST from Postman

Andrejus Baranovski - Thu, 2016-03-17 10:54
Do you want to test, from your own environment, a REST service running in the Oracle MCS cloud? I have implemented a public REST service hosted on Oracle MCS; you can call it from the Postman application (a Google Chrome extension).

The implemented use case: a SOAP Connector in MCS reads Stock Quote information for the company supplied in the request. The information is transformed by an MCS Custom API (with Node.js) into a REST structure and returned to the client through the MCS Mobile Backend.

Here is the REST GET URL to invoke in Postman: https://mobileportalsetrial1304dev-mcsdem0001.mobileenv.us2.oraclecloud.com:443/mobile/custom/STOCKQUOTE_API_REDSAMURAI_BLOG_1/stockquote/ORCL. Try running it; you can replace ORCL with another company ID. To execute this REST request in Postman, you need to provide a configuration file (this is what allows Postman to call MCS). The configuration file can be downloaded from here - Postman_MCS_1.json. It contains all the info required to make the REST request from Postman:


To add the configuration file to Postman, go to Configuration and upload it through the Import data option:


Make sure MCS profile is selected:


When you copy/paste the REST URL, go to the Headers section and select MCS from Presets - this will add the two required headers:


You should see the company stock information returned by the REST call:


Let's switch to MCS. There are three main concepts in MCS - Mobile Backend, Custom API, and Connector. The Mobile Backend is a service interface; it lets you configure mobile access functionality and contains the list of Web Services (APIs) exposed to clients. It also lets you track service usage and view statistics in the dashboard:


My example contains a single Web Service (Custom API). This service is responsible for handling the REST GET method and returning company stock quote data:


The Custom API defines the REST resource /stockquote/{company}; this is what we are executing in the Postman URL:


The resource /stockquote/{company} is configured with a GET method:


The GET method is configured to handle two types of responses - 200 (OK) and 404 (Not Found):


To provide the API implementation (the mapping between the Connector and the API response), we can download the Node.js file from MCS (see example - stockquote_api_redsamurai_blog_1.zip) and edit it locally (the edited file can be uploaded back in the archive):


We need to specify the Connector name so it is available in Node.js; this is how the code will call the Web Service defined in the Connector:


The Node.js function is responsible for calling the Web Service from the Connector, passing the parameter and checking the result (here you can do complex transformations between SOAP and REST):


I'm using a SOAP Connector, pointing to an external Web Service, to fetch Stock Quote information for the company:

Hinting

Jonathan Lewis - Thu, 2016-03-17 08:10

A posting on the OTN database forum a few days ago demonstrated an important problem with hinting – especially (though it didn’t come up in the thread) in the face of upgrades. A simple query needed a couple of hints to produce the correct plan, but a slight change to the query seemed to result in Oracle ignoring the hints. The optimizer doesn’t ignore hints, of course, but there are many reasons why it might have appeared to do so, so I created a little demonstration of the problem – starting with the following data set:

rem
rem     Script:  OTN_DAG.sql
rem     Author:  J.P.Lewis
rem     Dated:   March 2016
rem

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from
        generator       g1,
        generator       g2
where
        rownum <= 24000
;

create table t2
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc((rownum-1)/15)    n1,
        trunc((rownum-1)/15)    n2,
        rpad(rownum,180)        v1
from    generator
where
        rownum <= 3000
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

(Ignore the silliness of the way I’ve created the data, it’s a consequence of using my standard template).

For every row in t2 there are 8 rows in t1, so when I join t1 to t2 on n2 it would obviously be sensible for the resulting hash join to use the t2 (smaller) data set as the build table and the t1 data set as the probe table, but I’m going to pretend that the optimizer is making an error and needs to be hinted to use t1 as the build table and t2 as the probe. Here’s a query, and execution plan, from 11.2.0.4:

explain plan for
select
        /*+ leading(t1) use_hash(t2) no_swap_join_inputs(t2) */
        count(t1.n2)
from
        t1, t2
where
        t2.n2 = t1.n2
and     t1.n1 = 15
and     t2.n1 = 15
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    16 |    97   (3)| 00:00:01 |
|   1 |  SORT AGGREGATE     |      |     1 |    16 |            |          |
|*  2 |   HASH JOIN         |      |    20 |   320 |    97   (3)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| T1   |   120 |   960 |    85   (3)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |    15 |   120 |    12   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."N2"="T1"."N2")
   3 - filter("T1"."N1"=15)
   4 - filter("T2"."N1"=15)

As you can see, the optimizer has obeyed my hinting – the join order is t1 -> t2, I’ve used a hash join to join t2, and Oracle hasn’t swapped the join inputs despite the fact that the t1 data set is larger than the t2 data set (960 bytes vs. 120 bytes), which should have persuaded it to swap. (Technically, the leading() hint seems to block the swap of the first two tables anyway – see the “Special Case” section at this URL, but I’ve included the no_swap_join_inputs() anyway to make the point explicit.)

So now, instead of just counting n2, we’ll modify the query to count the number of distinct values of n2:


explain plan for
select
        /*+ leading(t1) use_hash(t2) no_swap_join_inputs(t2) */
        count(distinct t1.n2) 
from
        t1, t2
where
        t2.n2 = t1.n2
and     t1.n1 = 15
and     t2.n1 = 15
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

----------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |     1 |    13 |    98   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE       |          |     1 |    13 |            |          |
|   2 |   VIEW                | VW_DAG_0 |    20 |   260 |    98   (4)| 00:00:01 |
|   3 |    HASH GROUP BY      |          |    20 |   320 |    98   (4)| 00:00:01 |
|*  4 |     HASH JOIN         |          |    20 |   320 |    97   (3)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL| T2       |    15 |   120 |    12   (0)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL| T1       |   120 |   960 |    85   (3)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."N2"="T1"."N2")
   5 - filter("T2"."N1"=15)
   6 - filter("T1"."N1"=15)

Check operations 5 and 6 – Oracle has swapped the join inputs: t2 (the obvious choice) is now the build table. Has Oracle ignored the hint ? (Answer: No).
If you look at operation 2 you can see that Oracle has generated an internal view called VW_DAG_0 – this is an example of the “Distinct Aggregate” transformation taking place. It seems to be a pointless exercise in this case and the 10053 trace file seems to indicate that it’s a heuristic transformation rather than cost-based transformation (i.e. the optimizer does it because it can, not because it’s cheaper). Oracle has transformed the SQL to the following (to which I have applied a little cosmetic tidying):


SELECT  /*+ LEADING (T1) */
        COUNT(VW_DAG_0.ITEM_1) "COUNT(DISTINCTT1.N2)"
FROM    (
        SELECT  T1.N2 ITEM_1
        FROM    TEST_USER.T2 T2,TEST_USER.T1 T1
        WHERE   T2.N2=T1.N2
        AND     T1.N1=15
        AND     T2.N1=15
        GROUP BY
                T1.N2
        ) VW_DAG_0

Notice how the use_hash() and no_swap_join_inputs() hints have disappeared. I am slightly surprised that the leading() hint is still visible; I would have expected all three to stay or all three to disappear. Regardless of that, though, the single remaining hint references an object that does not exist in the query block where the hint has been placed. The original hint has not been “ignored”, it has become irrelevant. (I’ll be coming back to an odd little detail about this transformed query a little later on, but for the moment I’m going to pursue the problem of making the optimizer do what we want.)

We have three strategies we could pursue at this point. We could tell the optimizer that we don’t want it to do the transformation; we could work out the query block name of the query block that holds t1 and t2 after the transformation and direct the hints into that query block; or we could tell Oracle to pretend it was using an older version of the optimizer because that Distinct Aggregate transformation only appeared in 11.2.0.1.

You’ll notice that I used the ‘alias’ formatting command in my call to dbms_xplan.display() – this is the queryblock / alias section of the output:


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$C33C846D
   2 - SEL$5771D262 / VW_DAG_0@SEL$C33C846D
   3 - SEL$5771D262
   5 - SEL$5771D262 / T1@SEL$1
   6 - SEL$5771D262 / T2@SEL$1

Strategy A says try adding the hint: /*+ no_transform_distinct_agg(@sel$1) */
Strategy B says try using the hints: /*+ leading(@SEL$5771D262 t1@sel$1 t2@sel$1) use_hash(@SEL$5771D262 t2@sel$1) no_swap_join_inputs(@SEL$5771D262 t2@sel$1) */
Strategy C says try adding the hint: /*+ optimizer_features_enable(‘11.1.0.7’) */
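
For example, Strategy A applied to the original count(distinct) query is simply this (a sketch; apart from the extra hint the statement is unchanged):

explain plan for
select
        /*+
                leading(t1) use_hash(t2) no_swap_join_inputs(t2)
                no_transform_distinct_agg(@sel$1)
        */
        count(distinct t1.n2)
from
        t1, t2
where
        t2.n2 = t1.n2
and     t1.n1 = 15
and     t2.n1 = 15
;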

Strategies A and C (stopping the transformation) produce the following plan:


----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    16 |    98   (4)| 00:00:01 |
|   1 |  SORT GROUP BY      |      |     1 |    16 |            |          |
|*  2 |   HASH JOIN         |      |    20 |   320 |    98   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| T1   |   120 |   960 |    85   (3)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |    15 |   120 |    12   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."N2"="T1"."N2")
   3 - filter("T1"."N1"=15)
   4 - filter("T2"."N1"=15)

Strategy B (allowing the transformation, but addressing the hints to the generated query block) produces this plan:


----------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |     1 |    13 |    98   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE       |          |     1 |    13 |            |          |
|   2 |   VIEW                | VW_DAG_0 |    20 |   260 |    98   (4)| 00:00:01 |
|   3 |    HASH GROUP BY      |          |    20 |   320 |    98   (4)| 00:00:01 |
|*  4 |     HASH JOIN         |          |    20 |   320 |    97   (3)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL| T1       |   120 |   960 |    85   (3)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL| T2       |    15 |   120 |    12   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."N2"="T1"."N2")
   5 - filter("T1"."N1"=15)
   6 - filter("T2"."N1"=15)

All three Strategies have produced plans that use t1, the larger data set, as the build table. It’s hard to resist asking whether one of the three strategies is the best strategy; it’s hard to say, but I think I’d favour using the no_transform_distinct_agg() hint because it’s precisely targeted – so it avoids the brute-force, thuggish nature of reverting back to an old version, and avoids the (possible) fragility of needing to know a very precise query block name which (possibly) might change for some reason if the query were to be modified very slightly. The argument, of course, comes from the perspective of a friendly consultant who visits for a couple of days, gets a bit clever with your SQL, then walks away leaving you to worry about whether you understand why your SQL now works the way it does.

Upgrades

My opening comment was about the difficulty of hinting across upgrades. Imagine you had been running this count(distinct) query in 10.2.0.5, and after some experimentation had found that you got the path you needed by adding the hints: /*+ leading(t1 t2) full(t1) use_hash(t2) no_swap_join_inputs(t2) full(t2) */. This is a careful and thorough piece of hinting (and it does work, of course, in 10.2.0.5).
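
Written out in full, that 10.2.0.5 statement would look something like this (a sketch; the hints are exactly the ones listed above, and the rest of the query is the one from the demonstration):

select
        /*+
                leading(t1 t2)
                full(t1) use_hash(t2) no_swap_join_inputs(t2) full(t2)
        */
        count(distinct t1.n2)
from
        t1, t2
where
        t2.n2 = t1.n2
and     t1.n1 = 15
and     t2.n1 = 15
;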

When the big day for upgrading to 11.2 arrives (just in time for Oracle to end extended support, possibly) you find that this query changes its execution plan. And this is NOT a rare occurrence. I’ve said it before, and I’ll keep saying it: hinting – especially with “micro-management” hints – is undesirable in a production system. You probably haven’t done it right, and even if the hints are (broadly speaking) perfect in the current version they may be pushed out of context by a new feature in the next version. If you’ve hinted your code you have to check every single hinted statement to make sure the hints still have the same effect on the upgrade.

This is why I produce the sound-bite (which Maria Colgan nicked): “if you can hint it, baseline it”.  If you had generated a baseline (or outline) from a query with these hints in 10g Oracle would have included the /*+ optimizer_features_enable(‘10.2.0.5’) */ hint with the functional hints, and the upgrade wouldn’t have produced a different plan.
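
To give a flavour of the mechanics (this is a sketch of the 11g approach, not part of the original demonstration; the sql_id and plan_hash_value are placeholders you would look up in v$sql after running the hinted statement once):

set serveroutput on

declare
        n       pls_integer;
begin
        -- copy the hinted plan out of the cursor cache into an accepted SQL plan baseline
        n := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '&hinted_sql_id',
                plan_hash_value => &hinted_plan_hash_value
        );
        dbms_output.put_line(n || ' plan(s) loaded as baselines');
end;
/

Once a baseline is attached to the statement, the plan is reproduced from the stored outline of hints (including the optimizer_features_enable setting) rather than relying only on the hints embedded in the code.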

Technically, of course, you could have remembered to add the hint to your production code – but in many cases Oracle introduces far more hints in an SQL Baseline than you might want to put into your code; and by using the SQL Baseline approach you’ve given yourself the option to get rid of the “hidden hinting” in a future version of Oracle by dropping the baseline rather than rewriting the code and (perhaps) recompiling the application.

Inevitably there are cases where setting optimizer_features_enable backwards doesn’t rescue you from a new plan – there are probably a few cases where the internal code forgets to check the value and bypass some subroutines; more significantly there are cases where one version of Oracle will give you an efficient plan because of an optimizer bug and setting the version backwards won’t re-introduce that bug.

Footnote

I said I’d come back to the “unparsed” query that the optimizer generated from the original count(distinct) statement and the way it left the leading(t1) hint in place but lost the use_hash(t2) and no_swap_join_inputs(t2). I got curious about how Oracle would optimize that query if I supplied it from SQL*Plus – and this is the plan I got:


explain plan for
SELECT  /*+ LEADING (T1) */
        COUNT(VW_DAG_0.ITEM_1) "COUNT(DISTINCTT1.N2)"
FROM    (
        SELECT  T1.N2 ITEM_1
        FROM    TEST_USER.T2 T2,TEST_USER.T1 T1
        WHERE   T2.N2=T1.N2
        AND     T1.N1=15
        AND     T2.N1=15
        GROUP BY
                T1.N2
        ) VW_DAG_0
;

-----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |     1 |    13 |    98   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE       |           |     1 |    13 |            |          |
|   2 |   VIEW                | VM_NWVW_0 |    20 |   260 |    98   (4)| 00:00:01 |
|   3 |    HASH GROUP BY      |           |    20 |   320 |    98   (4)| 00:00:01 |
|*  4 |     HASH JOIN         |           |    20 |   320 |    97   (3)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL| T1        |   120 |   960 |    85   (3)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL| T2        |    15 |   120 |    12   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."N2"="T1"."N2")
   5 - filter("T1"."N1"=15)
   6 - filter("T2"."N1"=15)

Oracle has managed to do a transformation to this statement that it didn’t do when it first generated the statement – too much recursion, perhaps – and that floating leading(t1) hint has been squeezed back into action by a view-merging step in the optimization that got the hint back into a query block that actually contained t1 and t2!  At this point I feel like quoting cod-philosophy from the Dune trilogy: “Just when you think you understand …”

 


Connection timeout parameters in MySQL

Pythian Group - Thu, 2016-03-17 08:09
Introduction
  • wait_timeout
  • interactive_timeout
  • net_read_timeout
  • net_write_timeout

What do these timeouts do in MySQL? If you search the web for one or more of these, you may find complaints that no comprehensive explanation exists for all of these timeouts in one place (besides the obvious documentation of dynamic server system variables in MySQL). This blog post seeks to provide a central documentation source for timeouts and provide some practical explanation.

Knowing what timeouts do helps in a troubleshooting effort. It’s good to understand when an issue is timeout related and when it’s not, and to know the right reasons for changing timeout variables, or the right time to ask the developer or ad-hoc user to please tune the variables in the session, instead of asking the DBA to change the global variables.

Before diving into the meat of this topic, here are a few introductory concepts that I will touch on.

I mentioned session variables. The four timeout variables we are discussing (interactive_timeout, wait_timeout, net_read_timeout, net_write_timeout) can all be set within the context of the session using the SET command, e.g. "set session wait_timeout=3600;". A general rule of thumb that I always try to follow: once I have what I think are correct timeouts to protect my mission critical application, I do not want to change them unless there is a really good reason. If I’m encountering a client disconnection issue and it’s not coming from my main application, I need to see if the problem can be alleviated by adjusting timeouts on the session level using SET commands.
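
For example (the values here are purely illustrative, not recommendations):

-- Check what the session inherited from the global settings
SHOW SESSION VARIABLES LIKE '%timeout%';

-- Override only for this connection; the global defaults stay untouched
SET SESSION wait_timeout      = 3600;
SET SESSION net_read_timeout  = 120;
SET SESSION net_write_timeout = 120;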

Second, it’s important to note that timeouts are there to protect your server and critical application. You don’t want too many clients connected to your server doing nothing (they can crowd out connections that need to do something). You don’t want clients that are in a failed state due to a network connection or other interruptive problem to continue to consume resources such as locks on your database. Timeouts should be set very thoughtfully, based on the unique environment. If you’re unsure, leave at defaults and adjust when needed.

Finally, I created a Perl script to illustrate wait_timeout situations. I made a great effort to also incorporate net_read_timeout and net_write_timeout into the script, and you’ll see empty functions there as place-holders. There are many layers to the “MySQL client/server onion” involving buffers on the system, client, and server layers. In lieu of providing direct scripting examples of net_read_timeout and net_write_timeout, I hope you’ll find the other methods of explanation included below useful.

interactive_timeout

According to the MySQL manual, interactive_timeout is only used for connections from interactive clients, like the command-line MySQL client.

The technical explanation is: clients utilizing the CLIENT_INTERACTIVE option to mysql_real_connect(). Check the documentation for your favorite client to see whether it fits this description.

If your client fits this behavior, wait_timeout is set to the value in interactive_timeout. The only benefit to this parameter is flexibility. If your command line or other interactive clients have a vastly different requirement than your application, feel free to set this differently than wait_timeout.
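
As a quick illustration of that flexibility (again, the numbers are only examples):

-- Protect the application with a short idle timeout...
SET GLOBAL wait_timeout        = 120;
-- ...while letting interactive command-line sessions idle much longer
SET GLOBAL interactive_timeout = 28800;

-- A newly connected interactive client (e.g. the mysql CLI) copies its
-- session wait_timeout from interactive_timeout:
SHOW SESSION VARIABLES LIKE 'wait_timeout';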

wait_timeout

Wait timeout is simply there to protect you in the common case of clients sitting there doing nothing but absorbing a connection. You want to set this to the lowest acceptable number of seconds in order to protect your server against an application malfunction or some other event that may cause too many connections to the database to be opened, crowding out other clients. If a client is doing nothing for wait_timeout seconds, the MySQL server will terminate the connection. The proper setting for this variable depends on the particular environment.

An environment I worked in for 3+ years set this to 120 seconds, because JBoss connection pooling (and its associated parameters) took care of making sure the mission critical application in the environment had available connections.

A common misconception and common misuse is to try to adjust wait_timeout for a situation in which a query has been interrupted. Wait timeout only applies to idle sessions. If the connection is in any other state besides idle, wait_timeout does not apply. This is why utilities like pt-kill exist. It does not help that an idle connection being interrupted by KILL produces the same error message.

Example using my perl script and the MySQL processlist:


jscott@js-trusty1:~$ ./test_timeouts.pl -K
Doing Killable Wait Timeout Test
ATTN: You have 20 seconds to kill the MySQL thread!!

in another shell

jscott@js-trusty1:~$ mysql -e "show processlist"
+----+------+-------------------+--------------+---------+-------+-------+------------------+
| Id | User | Host              | db           | Command | Time  | State | Info             |
+----+------+-------------------+--------------+---------+-------+-------+------------------+
| 36 | js   | 172.19.73.2:58892 | NULL         | Sleep   | 28514 |       | NULL             |
| 79 | root | localhost:43350   | timeout_test | Sleep   |     4 |       | NULL             |
| 80 | root | localhost         | NULL         | Query   |     0 | NULL  | show processlist |
+----+------+-------------------+--------------+---------+-------+-------+------------------+
jscott@js-trusty1:~$ mysql -e "kill 79";

back to the perl script output.

DBD::mysql::st execute failed: MySQL server has gone away at ./test_timeouts.pl line 134.
-------------
Err: 2006
ErrStr: MySQL server has gone away
State: HY000

 

Note that we got MySQL error code 2006 “MySQL server has gone away”.

Now we’ll allow wait_timeout to trigger instead of killing a process.


jscott@js-trusty1:~$ ./test_timeouts.pl -W
Doing Wait Timeout Test
DBD::mysql::st execute failed: MySQL server has gone away at ./test_timeouts.pl line 151.
-------------
Err: 2006
ErrStr: MySQL server has gone away
State: HY000

The fact that a killed idle process and a process receiving wait_timeout get the same error message and code (2006 “MySQL server has gone away”) causes much confusion. Rest assured, wait_timeout only kills idle connections, even if the error messages are the same.  If there is a running query, you will typically get error code 2013: “Lost connection to MySQL server during query”.

The most common reason a query is killed, in my experience, is a backup. Many backup utilities for MySQL have the default behavior of killing long-running queries.

net_read_timeout

net_read_timeout is the number of seconds MySQL will wait between bytes as you are sending it data. As mentioned above, I tried very hard to synthetically cause a net_read_timeout with a Perl script and was unsuccessful.

The best way to explain net_read_timeout is to use the example of an ETL (Extract, Transform, Load) job. At the beginning of an ETL job, you select data from one data source, then transform and load it into another data source, such as a MySQL data warehouse. Tools like “Pentaho Data Integration” (cited below) provide a GUI view of ETL.

Pentaho ETL Process

If your ETL job is complex, and you use something similar to Pentaho Kettle, you have very little visibility into what is happening behind the scenes.

If the steps in between the initial query in an ETL job and an output step consume more than net_read_timeout seconds, then an error may occur due to incomplete statements being sent to the “output” steps.  In mid-transaction, if the MySQL server fails to receive data within the timeout interval, the connection will be terminated.

Another way net_read_timeout can occur: Normally all database connections used in a Pentaho ETL transformation are opened at the beginning of the transformation. If you have “use result streaming cursor” turned off on the database connection used for “table input”, the result set may take longer than “net_read_timeout” seconds to receive, causing an error on the database connection being used for the output steps.

Have you ever executed a simple query (with a large result set) on the MySQL command line and wondered why there is a pause of several seconds or even minutes prior to beginning to receive the results? Your client actually has begun receiving the results, but they simply have not been presented to you. The reason for this is, it’s generally better for a MySQL server to be “finished” with a query as soon as possible; therefore, the default behavior of the MySQL client libraries on a read-only query is to receive all the results first, before passing the results onto the program. The default behavior is called “mysql_store_result“.  The MySQL reference manual does a good job of explaining this in detail. As the manual states, you should not change the default behavior unless you are doing a minimal amount of processing on each result row.

Homemade ETL programs in other languages can have the same issue. Take care to use auto_reconnect features and/or streaming cursors / “mysql_use_result”, when appropriate.

ETLs and scripts are good examples of clients which can be modified with SET commands.  “SET SESSION net_read_timeout=<longer value>;” should be considered for jobs outside of your main application.

Lastly, net_read_timeout is also often caused by bad network connections. This is the default explanation offered in many blog posts on this topic.

The Error returned to the client will be (2013) “Lost connection to MySQL server during query”.

net_write_timeout

net_write_timeout is the number of seconds MySQL will wait for the client to receive additional data before terminating a connection. Using the example of the ETL job above, if steps in the middle of the transformation cause the “input” steps to pause for longer than net_write_timeout seconds, it’s possible to receive an error. I’ve never seen net_write_timeout hit because of an ETL job. The MySQL client has default behavior of buffering result sets.  Also, I try to use the “use result streaming cursor” in Pentaho or the “mysql_use_result” option in the Perl DBI, sparingly.

The most typical case of receiving net_write_timeout is a backup such as a mysqldump. During a long data transfer such as a mysqldump, a myriad of factors could come into play, not the least of which is network connectivity. Remember, if you run into problems with net_write_timeout on a mysqldump backup, mysqldump has its own section of the my.cnf. You can set an option file on a client or a server to specifically set this and other variables for a mysqldump session.

Categories: DBA Blogs

The future of data warehousing

Dylan's BI Notes - Thu, 2016-03-17 05:17
Data warehousing is really about preparation of the data for reporting. The assumptions are: You can predict what typical queries look like to some extent. The data need to be prepared to make the queries easier or faster, or to make more sense of the data. You know where the data come from and you […]
Categories: BI & Warehousing

The Importance of BI Documentation

Rittman Mead Consulting - Thu, 2016-03-17 05:00
Why Is BI Documentation Important?

Business intelligence systems come with a lot of extra information. Even beautifully constructed analyses have piles of background information and histories. Administrators might often have memos and updates that they’d like share with analysts. Sales figures might have anomalies that need further explanation. But OBIEE does not currently have any options for BI Documentation inside the dashboard.

Let’s say a BI user for a cell phone distribution company is viewing a report comparing the yearly sales figures for several different cell phones. If the analyst notices that one specific cell phone is outperforming the others, but doesn’t know what makes that specific model unique, then they have to go searching for that information.


But what if the individual phone model specifications and advertising and marketing histories were already included as reports inside the dashboard? What if the analyst, with only a couple of clicks, discovered that the reason one cell phone was outperforming the others was due to its next-gen screen, camera, and chip upgrades, which proved popular with consumers? Or what if the analyst discovered that the popular phone, while containing outdated peripherals, was selling so well only because of a Q3 advertising push for that model? All of this information might not be contained in the dashboard’s visuals, but it greatly affects the analysts’ understanding of the reports.

Current Options for OBIEE Documentation

Some information can be displayed as visuals, but many times this isn’t a practical solution. Besides making dashboards too cluttered, memos, product descriptions, company directories, etc., are not practical as charts and graphs. As of right now, important documentation can be stored in a wide range of places outside of the BI dashboard, but the operating reality at most organizations means that important information is spread across several locations and not always accessible to the people who need it.


Workarounds are inefficient, cost time, cause BI users to leave the BI environment (potentially reducing usage), and increase frustration. If an analyst has to email several different people to locate the information she wants, that complicates her workflow and produces extraneous communications (who likes answering emails?). Before now, there wasn’t an easy solution to these problems.

ChitChat’s BI Documentation Features

With ChitChat, it’s now possible to store critical documentation where it belongs—at the source of the conversation. Keep phone directories, memos from administrators (or requests from analysts to administrators), product descriptions, analytical histories—really, the possibilities are endless—inside the dashboard where they are accessible to the people who need them. Shorten workflows and make life easier for your BI users.

ChitChat’s easy-to-use functionality allows BI users to copy and paste or write (ChitChat has a built-in WYSIWYG text editor) important information inside the BI dashboard, creating a quicker path to insightful and actionable analytics. And isn’t that the goal in the end?

To learn more about ChitChat’s many commentary features, or to request a demo, click here.

The post The Importance of BI Documentation appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Links for 2016-03-16 [del.icio.us]

Categories: DBA Blogs

GDC16 Day 3: Another Day of Fun & Data!

Oracle AppsLab - Thu, 2016-03-17 01:17

Early morning view of the GDC16 Expo Hall.

The Expo opened today and will be open until the end of Friday! There was a lot to see and do! I managed to explore 1/3 of the space. Walking in, we have the GDC Store to the left and the main floor below the stairs. Upon entering the main floor, Unity was smack dab in the center. It had an impressive set up, but not as impressive as the Oculus area nor Clash of Kings.

Built to look like a store :O

Clash of Kings. The biggest booth of all booths. They brought the game to real life with hired actors!

There were a lot of demos you could play, with many different types of controllers. Everyone was definitely drinking the VR Kool-Aid. Because of the popularity of some of the sessions, reservations for a play session are strongly encouraged. Most, if not all, of the sessions were already booked for the whole day by noon. I managed to reserve the PS VR play session for tomorrow afternoon by scanning a QR code to their scheduling app!

The main floor was broken up into pavilions with games by their respective countries. It was interesting to overhear others call their friends to sync up, saying “I’m in Korea.” Haha.

I spent the rest of the time walking around the floor and observing others play.

Fly like a bird! #birdly #GDC16 pic.twitter.com/oeHUnmfhgp

— Tawny (@iheartthannie) March 16, 2016

I did get a chance to get in line for an arcade ride! My line buddy and I decided to get chased by a T-Rex! We started flying in the air as a Pterodactyl. The gleeful flight didn’t last long. The T-Rex was hungry and apparently really wanted us for dinner. It definitely felt like we were running quickly, trying to get away.

Another simulation others tried that we didn’t was a lala land roller coaster. In this demo, players can actually see their hand on screen.

Waiting to try out sim arcade ride that senses your hands! Fairytale coaster ride w/ bunny companion in tow #GDC16 pic.twitter.com/Y8mqDs2ILg

— Tawny (@iheartthannie) March 16, 2016

Sessions & Highlights

Playstation VR. Sony discusses development concepts, design innovations and what PS VR is and is not. I personally liked the direction they are going for collaboration.

  • Design with 2 screens in mind. For console VR, you may be making 2 games in 1. One in VR and one on TV. You should consider doing this to avoid having one headset per player and to allow for multiplayer cooperation. Finding an art direction for both is hard. Keep it simple for good performance.
  • Make VR a fun and social experience. In a cooperative environment, you get 2 separate viewpoints of the same environment (mirroring mode) or 2 totally different screen views (separate mode). This means that innovation between competitive and Co-op mode is possible.

The AppsLab team and I have considered this possibility of a VR screen and TV screen experience as well. It’s great that this idea is validated by one of the biggest console makers.

A year of user engagement data. A year’s worth of game industry data, patterns and trends was the theme of all the sessions I attended today.

  • There are 185 million gamers in the US. Half are women.
    • 72 million are console gamers. Of those console owners the average age is ~30 years old.
    • There are 154 million mobile gamers. This is thanks to the rise of free-2-play games. Mobile accessibility has added diversity to the market and brought a new group of players. Revenues grew because of broad expansion. The average age for the mobile group is ~39.4 years old.
    • There are 61 million PC gamers thanks to the rise of Steam. These gamers tend to be younger at an average age of ~29.5yrs.
  • There are different motivations as to why people play games. There are two groups of players: core vs. casual. Universally, the primary reasons casual players play games are to pass time while waiting and to relax.
  • There is great diversity within the mobile market. There is an obvious gender split between what females and males play casually. Females tend to like matching puzzle (Candy Crush), simulation and casino games while males tend to like competitive games like sport, shooter and combat city builder games.
  • When we look internationally, players in Japan have less desire to compete when playing games; successful games there tend to be built around cooperation.
  • Most homes have a game console. In 2015, 51% of homes owned at least 2 game consoles. At the start of 2016, there was an increase of 40% in sales for current 8th generation game consoles (PS4, Xbox One, etc minus the Wii).
  • Just concentrating on mobile gamers, 71% play games on both their smart phone and tablet, 10% play only on their tablet.
  • Top factors leading to churn are lack of interest, failure to meet expectations, and too much friction.
  • Aside from Netflix and maybe YouTube, Twitch gobbles up the most prime-time viewers: almost 700K concurrent viewers as of March 2016. Its viewership is increasing despite competition from the launch of YouTube Gaming.

Day 1 — User research round table. This was my first round table during GDC, and it’s nice to be among those within the same profession. We covered user research for VR, preventing bias and testing on kids! Experts provided their failures on these topics and offers suggestions.

  • Testing for Virtual Reality.
    • Provide players with enough time warming up in the new environment before asking them to perform tasks. Use the initial immersive exposure to calibrate them.
    • Be ready to pull them out at any indication of nausea.
    • Use questionnaires to screen out individuals who easily get motion sickness.
    • It’s important to remember that people experience sickness for different reasons. It’s hard to eliminate all the variables. Some people can have vertigo or claustrophobia that’s not necessarily the fault of the VR demo. There is a bias toward that in media. People think they are going to be sick so they feel sick.
    • Do not ask people if they feel sick before the experience, or else you are biasing them toward feeling sick.
    • Individuals are only more likely to feel sick if your game experience does not match their expectations. Some people feel sick no matter what.
    • One researcher tested 700–800 people in VR. Only 2 persons said that they felt sick. 7–8 said they felt uncomfortable.
    • An important question to ask is “At what point do they feel sick?” If you get frequent reports at a specific point rather than generalized reports, then you can do something to make the game better.
  • Bias.
    • Avoid bragging language. Keep questions neutral.
    • Separate yourself from the product.
    • Remember participants think that you are an authority. Offload instructions to the survey, rather than relay the instructions yourself. It’s going to bias the feedback.
    • Standardize the experiment. Give the same spiel.
    • The order of questions is important.
    • Any single geographic region is going to introduce bias. Only screen out regions if you think culture is going to be an issue.
  • Testing with kids.
    • It’s better to test with 2 kids in a room. Kids are not good at verbalizing what they know and do not know. Having 2 kids allows you to see them verbalize their thoughts to each other as they ask questions and help each other through the game.
    • When testing a group of kids at once, assign the kids their station and accessories. Allowing them to pick will end up in a fight over who gets the pink controller.
    • Younger kids aren’t granular so allow for 2 clear options on surveys. A thumbs up and thumbs down works.
    • Limit kids to one sugary drink or you’ll regret it.


Something Amiss

Greg Pavlik - Thu, 2016-03-17 00:02
Looks like this curious non-review of the novel Laurus was referring to "Brahmins" as "Brahman" - I suppose it was republished to correct the mistake:
​Russian Brahman by Alan Jacobs | Articles | First Things
www.firstthings.com/article/2016/04/russian-brahman
First Things
Russian Brahman. by Alan Jacobs April 2016. Laurus by eugene vodolazkin translated by lisa hayden oneworld, 384 pages, $24.99. Eugene Vodolazkin's ...
​Russian Brahmin by Alan Jacobs | Articles | First Things
www.firstthings.com/article/2016/04/russian-brahmin
First Things
Russian Brahmin. by Alan Jacobs April 2016. Laurus by eugene vodolazkin translated by lisa hayden oneworld, 384 pages, $24.99. Eugene Vodolazkin's ...

Whatever his grasp of Hindu concepts, it's obvious Jacobs knows little to nothing about the tradition of Russian yurodivy, which makes this review overall kind of silly at best. Interested readers can refer to the hagiographies of Xenia of Petersburg or Feofil of the Kiev Caves Lavra to become acquainted with some of the conceptual background to the novel, both published by the monastery press in Jordanville, NY in English. As a complement the Pavel Lungin movie Ostrov is worth watching carefully - the film is based partly on Feofil, though like the life of St Xenia, it explores the theme of vicarious repentance. (It was not until the third time I saw the film that I fully grasped it - the visuals are stunning and in many respects a distraction.)

All of that aside, what continues to trouble me in general is the fact that most of the reviews of Laurus that I've seen have been oriented toward theological critiques - endorsements or arguments revolving around the reviewer's reading of what the author might want us to think about religion. And yet it is obvious that Vodolazkin did not write a religious apologetic (Jacobs invokes Karamazov, which is simultaneously a religious argument and a humanistic work - but Laurus is anything but the former). Laurus deserves a review as a work of notable - even great - world literature: which is to say first and foremost an exploration of what Vodolazkin is attempting to accomplish as a writer and what that has produced as a work of literature. The lack of serious analysis is particularly puzzling given the devices Vodolazkin uses to deal with language, identity, personality, relationship, and - yes - time. We could do with a few less sermons and a bit more thought.

Designing PL/SQL Programs

Andrew Clarke - Wed, 2016-03-16 18:57
When I started out, in COBOL, structured programming was king. COBOL programs tended to be lengthy and convoluted. Plus GOTO statements. We needed program design to keep things under control.

So I noticed the absence of design methodologies when I moved into Oracle. At first it didn't seem to be a problem. SQL was declarative and self-describing, and apparently didn't need designing. Forms was a 4GL and provided its own structure. And PL/SQL? Well, that was just glue, and the programs were so simple.

Then one day I was debugging several hundred lines of PL/SQL somebody had written, and struggling to figure out what was going on. So I drew a flow chart of the IF branches and WHILE loops. Obvious really, but if the original author had done that they would have realised that the program had an ELSE branch which could never be chosen; more than one hundred lines of code which would never execute.
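To make the trap concrete, here is a minimal, hypothetical sketch of that kind of dead branch (the procedure names and bind variables are invented for illustration, not taken from the code I was debugging):

declare
  l_qty number;
begin
  l_qty := nvl(:p_qty, 0);
  if l_qty >= 10 then
    apply_bulk_discount(:p_order_id);   -- hypothetical procedure
  elsif l_qty < 10 then
    apply_standard_price(:p_order_id);  -- hypothetical procedure
  else
    -- unreachable: l_qty is never null here, so one of the branches above always fires
    raise_application_error(-20001, 'impossible quantity');
  end if;
end;
/

A five-minute flow chart makes the unreachable ELSE obvious; buried in a few hundred lines of production code it is easy to miss.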
Let me sleep()
Good design is hard to define: in fact, good design is often unobtrusive. It's bad design we notice, because it generates friction and hinders our progress. By way of illustration, here is a poor design choice in Oracle's PL/SQL library: DBMS_LOCK.SLEEP().

SLEEP() is a simple program, which suspends processing for a parameterized number of seconds. This is not something we want to do often, but it is useful in testing. The problem is its home in the DBMS_LOCK package, because that package is not granted to public by default.

DBMS_LOCK is a utility package for building our own locking mechanisms. There's not much need for this any more. Oracle's default locking model is pretty good. There is SELECT .. FOR UPDATE for pessimistic locking, which is even more powerful since the SKIP LOCKED syntax was permitted in 11g. We have Advanced Queuing, Job Scheduling, oh my. It's hard to find a use case for user-defined locks which isn't re-inventing the wheel, and easy to see how we might end up implementing something less robust than the built-in locks. So DBAs tend not to grant execute on DBMS_LOCK without being asked, and then often not without a fight.
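As a reminder of what the built-in model already gives us, a minimal sketch of pessimistic locking with SKIP LOCKED (the table and column names are invented):

-- session 1: claim a batch of pending rows without blocking other workers
select job_id
from   work_queue
where  status = 'PENDING'
for update skip locked;

-- a second session running the same statement simply skips the rows
-- already locked above, so both workers proceed without waiting on each other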

But as developers we need access to a sleep routine. So DBAs have to grant execute on DBMS_LOCK, and then that gives away too much access. It would be better if SLEEP() was easily accessible in some less controversial place.

Why is this an example of bad design? Because user-defined locks need a sleep routine, but SLEEP() has other uses besides lock implementations. Putting SLEEP() in DBMS_LOCK makes it harder to use.
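A common workaround (a sketch only, not a prescription) is to wrap the call in a single utility schema that has been granted DBMS_LOCK, and hand out execute on the wrapper instead:

-- in a utility schema to which the DBA has granted execute on DBMS_LOCK
create or replace procedure sleep_seconds (p_seconds in number) is
begin
  dbms_lock.sleep(p_seconds);   -- suspend the calling session for p_seconds
end sleep_seconds;
/

grant execute on sleep_seconds to developer_role;   -- the role name is hypothetical

Developers get the narrow capability they need without the rest of the locking toolkit, which is exactly the separation the package design fails to provide.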
Riding the Hobby Horse
Occasionally in a recruitment interview I have asked the candidate how they would go about designing a PL/SQL program. Mostly the question is met with bemusement. PL/SQL design is not A Thing. Yet many of us work on huge PL/SQL code-bases. How do they turn out without a design methodology? Badly:
  • Do you have one schema crammed with hundreds of PL/SQL program units, perhaps named with a prefix to identify sub-systems?
  • Do you have a package called UTILS?
  • Do you query USER_PROCEDURES or USER_DEPENDENCIES (or even USER_SOURCE) to find a piece of code which implements some piece of functionality?
  • Do you have the same functionality implemented in several places?
  • Does a "simple change" cascade into changes across multiple program units and a regression testing nightmare?
All these are symptoms of poor design. But there are ways to avoid this situation.

Designing PL/SQL Programs series

Restricting database access to IP address using sqlnet.ora

Learn DB Concepts with me... - Wed, 2016-03-16 16:15
**************************************************************
                      FROM MY CLIENT MACHINE:
**************************************************************

C:\Windows\System32>tnsping pdborcl1

TNS Ping Utility for 32-bit Windows: Version 11.2.0.1.0 - Production on 07-MAR-2016 15:32:20

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
C:\app\oracle\product\11.2.0\client_1\network\admin\sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.215.XX4.XX)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = pdbo
cl1)))
OK (0 msec)

**************************************************************
NOW I want my DB to be accessible only from this machine and to restrict all others. For this I will need to make changes to my DB server's sqlnet.ora :
**************************************************************

ON DB SERVER:

oracle@Linux03 admin]$ cat sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

tcp.validnode_checking = yes  ## I have just added this line to test if it works

[oracle@Linux03 admin]$ vi sqlnet.ora
[oracle@Linux03 admin]$ lsnrctl reload

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 07-MAR-2016 15:20:03

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-00584: Valid node checking configuration error
 TNS-12560: TNS:protocol adapter error

As expected it doesn't work, as the listener needs to know which nodes to accept or deny connections from.
Now I tweak the sqlnet.ora file as below:

[oracle@Linux03 admin]$ cat sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)


tcp.validnode_checking = yes
tcp.invited_nodes = (Linux03)  ## I only want to accept connections from my localhost i.e Linux03

RESTART THE LISTENER NOW.

[oracle@Linux03 admin]$ lsnrctl reload

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 07-MAR-2016 15:21:06

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
The command completed successfully

 (OR)

[oracle@Linux03 admin]$ lsnrctl stop

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 07-MAR-2016 15:28:36

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
The command completed successfully
[oracle@Linux03 admin]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 07-MAR-2016 15:28:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                07-MAR-2016 15:28:41
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully

****************************************************
NOW I TRY TO CONNECT FROM CLIENT AGAIN :
****************************************************

C:\Windows\System32>tnsping pdborcl1

TNS Ping Utility for 32-bit Windows: Version 11.2.0.1.0 - Production on 07-MAR-2016 15:32:57

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
C:\app\oracle\product\11.2.0\client_1\network\admin\sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.215.XX4.XX)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = pdbo
cl1)))
TNS-12537: TNS:connection closed

C:\Windows\System32>


Nope it doesn't allow me to connect.
*************************************************************
Let's try to remove the tcp.validnode_checking & tcp.invited_nodes and see
**************************************************************

[oracle@Linux03 admin]$ cat sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)


[oracle@Linux03 admin]$ lsnrctl reload

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 07-MAR-2016 15:31:58

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
The command completed successfully
[oracle@Linux03 admin]$ vi sqlnet.ora
[oracle@Linux03 admin]$ lsnrctl reload

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 07-MAR-2016 15:32:16

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
The command completed successfully

C:\Windows\System32>tnsping pdborcl1

TNS Ping Utility for 32-bit Windows: Version 11.2.0.1.0 - Production on 07-MAR-2016 15:32:36

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
C:\app\oracle\product\11.2.0\client_1\network\admin\sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.215.XX4.XX)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = pdbo
cl1)))
OK (0 msec)

**************************************************************************
SUCCESS: YOU HAVE SUCCESSFULLY IMPLEMENTED ANOTHER LAYER OF SECURITY ON THE DB SERVER
*************************************************************************


YOU CAN INVITE MULTIPLE NODES BY LISTING THEM ALL:

tcp.validnode_checking = yes
tcp.invited_nodes = (Linux03,HOSTNAME1,HOSTNAME2 ...)

YOU CAN ALSO EXPLICITLY EXCLUDE NODES LIKE THIS:

tcp.excluded_nodes = (192.168.100.101,HOSTNAME2)

But: If you only specify invited nodes with tcp.invited_nodes, all other nodes will be excluded, so there is really no reason to do both. The same is true for excluded nodes. If you put tcp.excluded_nodes = (192.168.100.101,192.168.100.160), then the hosts 192.168.100.101 and 192.168.100.160 will be denied connections to the database while all other nodes are allowed to connect.





Categories: DBA Blogs

Fishbowl Solutions at Collaborate 2016: Demos and Discussions on Oracle JET, ADF, Documents Cloud Service, Controlled Document Management and Portals

fbweb_003105
Fishbowl Solutions is looking forward to Collaborate 2016. We have another full list of activities planned, and we are always excited to meet with customers and discuss their initiatives around enterprise content management and portals, the cloud, as well as front-end user design and experience. With the release of Oracle WebCenter 12c back in October, customers are also eager to understand more of what the new version has to offer. Fortunately for WebCenter customers attending Collaborate, Fishbowl Solutions will be covering all these topics across the 5 presentations we will be giving, as well as one-on-discussions in our booth – #1028.

We are also privileged to be joined by two WebCenter customers who will give presentations on their WebCenter use cases. The first customer, ICON plc (www.iconplc.com/) based in Dublin, Ireland, will discuss the process of improving the front-end experience of the WebCenter-based portal they use to manage the clinical trials process.

The second customer is Rosendin Electric (www.rosendin.com) based in San Jose, CA, and they will share how they implemented Fishbowl’s ControlCenter solution to automate the contract management process within WebCenter.

The best part and biggest benefit of attending Collaborate is hearing stories from actual customers, like ICON and Rosendin. Collaborate is truly a user group conference, and hearing case studies on WebCenter deployments, enhancements, integrations, etc., is invaluable for other customers looking to do the same or similar. Less marketing speak and sales pitches, and more learning. As you plan your schedule for Collaborate, look for Session Types denoted as Case Studies.

Here is a preview of what Fishbowl currently has planned for Collaborate 2016.

Booth: 1028

Demos of our WebCenter Portal Solution Accelerator, User Experience (art of the possible) demos of portal and websites, and ControlCenter (controlled document management for WebCenter).

Presentations: 5

  • Monday, April 11, 10:30-11:30 AM:
    A Designer’s Introduction to the Oracle JET Framework

    This session is targeted at front-end developers looking to leverage their front-end design skills to create Modular Single Page applications with Oracle’s new JavaScript Extension Toolkit (JET) framework. The presentation will include an overview of the JET framework and discussion around the features and benefits, along with a quick comparison with Application Development Framework (ADF). Sim and Beatty will walk through and explain JET’s integrated open source libraries and frameworks, and then take developers through the creation of a simple, JET-integrated cloud application.

  • Monday, April 11, 4:30-5:30 PM:
    Integrating Oracle JET With ADF to Create a Modern and Engaging User Experience

    In this session you will learn about the pros and cons of Oracle’s new JET framework and ADF and how you can combine them to create a modern development experience writing Modular Single Page Applications. Sim and Weaver will discuss how front-end designers can create modern, platform agnostic extendable interfaces with JET, and how developers can create ADF integrations and extendable services with the back-end to serve up small data snippets (JSON).

  • Tuesday, April 12, 10:45-11:45 AM:
    Developing Hybrid Solutions for the Oracle Documents Cloud Service (DoCS)

    This session will provide an overview of Oracle’s Documents Cloud Service (DoCS), including its interface, security model, and how to embed the DoCS UI and integrate with the REST API and Applink Resource to create seamless hybrid off- and on-premise applications. As part of the lecture, Sim will provide live examples and code walkthroughs, as well as talk about hybrid application development and the best times to use the Applink Resource vs the REST API with Oracle’s new Oracle JET framework for developing cloud apps. The presentation will conclude with an overview of an integration that Fishbowl has created to support Oracle DoCs.

  • Tuesday, April 12, 4:45-5:45 PM
    ICON Enhances Its WebCenter Portal Design by Keeping the User in Mind

    ICON Clinical Research Limited is a global provider of outsourced development services to the pharmaceutical, biotechnology, and medical device industries. They specialize in the development, management, and analysis of programs that support clinical development. ICON implemented Oracle WebCenter as the platform for its ICONIK portal, which will be used by the clinical trials team to manage, maintain, and share content created during the trials process. Come to this session to hear how ICON and Fishbowl Solutions leveraged next generation, best practice portal design concepts and technologies to provide a high-end and rich user experience to end users. Learn how ICON leverages WebCenter Portal and Content to surface personalized study documents, quickly manage content, and collaborate with other team members, whether on a desktop or on the go through a mobile device. We will also discuss how ICON has streamlined their business to solve problems that contributed to delays in the clinical trials process, impeding ICON’s customers from bringing products to market.

  • Wednesday, April 13, 9:15-10:15 AM:
    Rosendin Electric Pairs a Modern User Experience with WebCenter Content to Automate Contract Management

    Rosendin Electric is the top-ranked private electrical contractor in the nation whose work spans preconstruction, prefabrication, building information modeling, and renewable energy. Join us to hear Rosendin describe how they leveraged Oracle WebCenter and Fishbowl Solutions’ ControlCenter to automate and improve their contract management process. Rosendin’s new contract management system provides an intuitive, mobile-enabled interface and dashboard view for their contracts team that shows working, pending, and executed contracts. This dashboard is specific for each user, enabling them to quickly take action not only on contracts, but also on associated documents such as non-disclosure agreements and corporate governance documentation. Come see how WebCenter Content has streamlined Rosendin’s contract management process, making it much more efficient while ensuring the lifecycle of contracts and related documents can be easily tracked, viewed, and archived within one enterprise repository.

We hope to see you at Collaborate 2016!

The post Fishbowl Solutions at Collaborate 2016: Demos and Discussions on Oracle JET, ADF, Documents Cloud Service, Controlled Document Management and Portals appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Find missing or stale statistics

DBA Scripts and Articles - Wed, 2016-03-16 15:20

Purpose: Statistics are essential for the optimizer to choose the best possible execution plan. Sometimes you encounter a sub-optimal plan and need to find out if the tables involved in the statement are up to date. This query will help you find out which tables have been modified significantly since the last statistics gathering. For … Continue reading Find missing or stale statistics
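The full query is in the original post; as a rough idea of the approach (a minimal sketch, not the author's query), the data dictionary already flags the obvious candidates:

-- tables with no statistics at all, or flagged stale by Oracle
select owner, table_name, last_analyzed, stale_stats
from   dba_tab_statistics
where  owner not in ('SYS', 'SYSTEM')
and    (last_analyzed is null or stale_stats = 'YES')
order  by owner, table_name;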

The post Find missing or stale statistics appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OTN Developer Lab - CloudWorld DC - Register Today!

OTN TechBlog - Wed, 2016-03-16 14:59
OTN will be offering a Developer Lab at CloudWorld DC.

Topics to be covered in the Developer Lab sessions:
  • Provision a new database in the cloud with Oracle Database Cloud Service (DBCS)
  • Gain experience in building new applications for the Oracle Cloud
  • Set up connectivity between the Compute tier and DBCS
  • Learn about the REST APIs available to access the Oracle Cloud
In order to take part in the Developer Lab, please note the following requirements. Each participant must bring their own laptop with the following:
  • Intel Core i5 Laptop with 8GB Memory, 55GB Free Hard Disk Space
  • Windows 7 64-Bit
  • Microsoft Internet Explorer, Apple Safari or Mozilla Browser
There is no fee to participate in this Lab, but space is limited. Register today.

Firefox ESR 45 Certified with EBS 12.1 and 12.2

Steven Chan - Wed, 2016-03-16 13:57

Firefox ESR logo

Mozilla Firefox 45 Extended Support Release (ESR) is certified as a Windows-based client browser for Oracle E-Business Suite 12.1 and 12.2.

What is Mozilla Firefox ESR?

Mozilla offers an Extended Support Release based on an official release of Firefox for organizations that are unable to mass-deploy new consumer-oriented versions of Firefox every six weeks.  From the Mozilla ESR FAQ:

What does the Mozilla Firefox ESR life cycle look like?

Releases will be maintained for approximately one year, with point releases containing security updates coinciding with regular Firefox releases. The ESR will also have a two cycle (12 week) overlap between the time of a new release and the end-of-life of the previous release to permit testing and certification prior to deploying a new version.

Maintenance of each ESR, through point releases, is limited to high-risk/high-impact security vulnerabilities and in rare cases may also include off-schedule releases that address live security vulnerabilities. Backports of any functional enhancements and/or stability fixes are not in scope.

At the end of the support period for an ESR version:

  • the release will reach its end-of-life
  • no further updates will be offered for that version
  • an update to the next version will be offered through the application update service

E-Business Suite to be certified with Firefox Extended Support Releases

New personal versions of Firefox are released roughly every six weeks.  It is impractical for us to certify these new personal versions of Firefox with the Oracle E-Business Suite because a given Firefox release is generally obsolete by the time we complete the certification.

From Firefox 10 and onwards, Oracle E-Business Suite is certified only with selected Firefox Extended Support Release versions. Oracle has no current plans to certify new Firefox personal releases with the E-Business Suite.

EBS patching policy for Firefox compatibility issues

Mozilla stresses their goal of ensuring that Firefox personal versions will continue to offer the same level of application compatibility as Firefox Extended Support Releases. 

Oracle E-Business Suite Development will issue new E-Business Suite patches or workarounds that can be reproduced with Firefox Extended Support Releases.  If you report compatibility issues with Firefox personal releases that cannot be reproduced with Firefox Extended Support Releases, your options are:

  1. Deploy a certified Firefox Extended Support Release version instead of the Firefox personal version
  2. Report the incompatibility between Firefox ESR and Firefox personal to Mozilla
  3. Use Internet Explorer (on Windows) or Safari (on Mac OS X) until Mozilla resolves the issue
EBS Compatibility with Firefox ESR security updates

Mozilla may release new updates to Firefox ESR versions to address high-risk/high-impact security issues.  These updates are considered to be certified with the E-Business Suite on the day that they're released.  You do not need to wait for a certification from Oracle before deploying these new Firefox ESR security updates.

Certified desktop operating systems
  • Windows 10 (32-bit and 64-bit)
  • Windows 8.1 (32-bit and 64-bit)
  • Windows 7 SP1 (32-bit and 64-bit)
  • Windows Vista SP2

References

Related Articles

Categories: APPS Blogs

March 24: Fike Corporation―Oracle Sales Cloud and CPQ Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2016-03-16 12:57

Join us for another Oracle Customer Reference Forum on March 24, 2016. Fike Corporation's Jeannie Foster, Manager, Global Business Support Sales & Corporate Reporting, and Jeff Hunter, CRM Business Analyst, will discuss the evolution of the business from the inside out.

Early in the company's success, Fike deployed JD Edwards to manage its overall business. As it matured, it found a need to give its sales organization a better way to configure, price, and quote. Enter Oracle CPQ Cloud (previously BigMachines). This solution gave Fike the ability to easily configure products, price within the company's parameters, and fulfill incoming quotes passed to JD Edwards.

Recently, Fike deployed Oracle Sales Cloud to further collaborate with its global customers within the regions, allowing better insight into customer value and forecasting for better lead-time in its manufacturing processes.

Register now to attend the live Forum on Thursday, March 24, 2016, at 8:00 a.m. PT / 10:00 a.m. CT, and learn more about Fike’s experience with Oracle Sales Cloud and CPQ Cloud.

Log Buffer #465: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-03-16 12:17

This Log Buffer Edition covers some tricks, tips, workarounds, and tech-dives covered in various blog posts from Oracle, SQL Server and MySQL.

Oracle:

SQLcl can run Ruby script!!.. JRuby to be precise.

Financial Information Discovery Integration with Oracle Assets

Data Vault Modeling and Snowflake Elastic Data Warehouse

Couchbase Bucket Index Status in NetBeans IDE 8.1

Make SQL Developer Run Faster

SQL Server:

Find a String in a String

Microsoft cloud platform allows you to provide resiliency for your on-premises SQL Server deployments. Marcin Policht describes this functionality, available as part of the Azure Site Recovery solution.

Trust, or Verify – How FKs Can (or Cannot) Help Performance

10 New Features Worth Exploring in SQL Server 2016

Validating a Windows Cluster Prior to Installing SQL Server 2014

MySQL:

When you think of Cloud services for database administration, Database as a Service (DBaaS) is what tends to come to mind, but one of the ways that the Cloud has had an enormous impact on our work productivity is in the area of collaboration.

Introduction to MySQL Troubleshooting Webinar: Q & A for How to Create a Test Setup

MariaDB is a community-developed fork of MySQL and aims to be an enhanced, drop-in replacement for it.

The MySQL binary packages have been growing a lot over the last few GA versions.

Indexing JSON documents via Virtual Columns

Categories: DBA Blogs

OFM Forum Session Slides - Oracle JET and WebSocket

Andrejus Baranovski - Wed, 2016-03-16 10:46
I would like to post the slides from my session at the Oracle Fusion Middleware Forum - Oracle JET and WebSocket. I delivered this session today.

Slides are available on slideshare:


Sample application used for the demo can be downloaded from here (it consists of three parts - WebSocket server implementation, ADF BC tester application to generate continuous DB updates, and the JET application) - Oracle JET Live List with WebSocket.

ADF application with live data DVT component rendered in JET can be downloaded from here - When JET Comes To Rescue - Live Data Charts in ADF.
