Feed aggregator

connection issue

Tom Kyte - Wed, 2016-09-07 04:06
Hi Tom, I have deployed Oracle RDBMS software and created a new database in the Azure cloud. I have also created application users in this database. If I connect as an application user using SQL Developer I get an error: IO Error: Connection reset ...
Categories: DBA Blogs

Checkpoints

Tom Kyte - Wed, 2016-09-07 04:06
Under what conditions does a checkpoint occur?
Categories: DBA Blogs

UTL_FILE.FCOPY

Tom Kyte - Wed, 2016-09-07 04:06
I am having a problem using FCOPY to make a few copies of a file that I've created with UTL_FILE.PUT_LINE. The goal is to generate a txt file and dump into a directory .../output_1/file_1.txt and then copy & rename this file into another directory...
Categories: DBA Blogs

export jobs hang db

Tom Kyte - Wed, 2016-09-07 04:06
Hi team, in our production database we use logical backups. There are 20 users and the data size is about 200 GB per user. The export is scheduled at midnight, but some time after the export starts my entire database becomes very slow. It is fine when we ...
Categories: DBA Blogs

Partner Webcast - Enterprise Manager 13c Managing Hybrid Cloud as One

As organizations seek innovation by adopting public, private, and hybrid clouds, many of them are running up against formidable barriers. In some cases they find that the promised innovations of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2016-09-06 [del.icio.us]

Categories: DBA Blogs

Oracle CPQ Cloud, the Gift that Keeps Giving, at Oracle OpenWorld 2016

Linda Fishman Hoyle - Tue, 2016-09-06 23:05

A Guest Post by Chris Haussler, Senior Principal Product Manager, Oracle

In the 1920s, an advertising salesman delivered the slogan "The gift that keeps giving" to market a record player. The idea was that if you gave this record player as a gift, the recipient would think of you every time he used it. Pretty clever.

What does this have to do with software? I’ll go out on a limb: CPQ Cloud is the front-office software gift that keeps giving. Like the record player, CPQ Cloud gives its customers incrementally more benefit the more they use it. Don’t believe me?

Proof of Benefit

As you’d expect, companies that invest in front-office transformation tools, such as marketing automation, SFA, and CPQ, get benefits from the increased efficiency that these tools bring to their processes. But after that initial efficiency windfall, it’s important not to get complacent. The potential opportunity is much greater. Persistent use of CPQ Cloud not only helps sales reps serve customers day-to-day, it also builds a storehouse of information about buying behaviors and product mix. This data can be used to optimize product offerings, sales processes, and channel sell-through.

The result: better sales resource utilization, improved margin, and superior market competitiveness.

It Keeps on Giving

But here’s how CPQ Cloud continues its gifting ways. As markets change, the opportunities to review and retune CPQ Cloud continue, offering incremental improvements again and again. CPQ Cloud administration offers the easy-to-use tools to get these changes to market quickly. After 15+ years in the market, we’ve realized that the best CPQ Cloud customers use the tool not only for sales automation, but to drive continual growth and profit.

Key Sessions at OpenWorld

At OpenWorld 2016, CPQ Cloud offers content for customers at every stage, whether new to sales transformation or looking for the next step in their sales optimization journey.

For beginners in every industry, we recommend a survey of sessions to see how Oracle CPQ Cloud can be combined with other products in the Oracle portfolio:

  • Maximize Results from Oracle Sales Cloud and Oracle CPQ Cloud [CON6937]
  • Graduate to full Quote-to-Cash in the Cloud with CPQ Cloud integrated to Oracle ERP and Oracle SCM Clouds [CON7079]
  • Optimize Customer Engagement with a Multi-Pillar Oracle Customer Experience Solution [CON7958]
  • Connect Oracle CPQ Cloud to E-Business Suite [CON7078]

Financial Services and Communications companies have sessions that demonstrate CPQ Cloud’s ability to serve the distinct sales processes in those industries:

  • Oracle CX for Financial Services – CPQ Cloud [CON7080]
  • Oracle CX for Communications – CPQ Cloud [CON7081]

Of interest to all, but particularly relevant for CPQ Cloud customers seeking that next ROI-building transformation:

  • What’s Your CPQ Maturity? A Business Panel Discussion [CON7076]
  • Oracle Configure, Price, and Quote Roadmap [CON6320]

Register Now

We encourage you to attend OpenWorld 2016 and participate in the CPQ Cloud sessions that speak to your needs. As CPQ Cloud is the gift that keeps giving, you will be glad you did ... again and again.

Session Catalog and Registration

Multitenant thin provisioning: PDB snapshots on ACFS

Yann Neuhaus - Tue, 2016-09-06 16:07

Database on ACFS is a long story. At first, the main reason for ASM was to bypass a filesystem layer that is not required by the database. ACFS was for the non-database files that had to be accessed by all cluster nodes. But then, storage vendors and other providers came with snapshots, compression and thin provisioning and Oracle had to answer: they implemented those storage features in ACFS and allowed database files on it.

When you create a database on an ODA X5, datafiles are created on an ACFS mount, and there is only one ACFS mount for many databases. You probably want to snapshot at database level, but ACFS snapshots are only at filesystem level. To avoid copying every write on the filesystem when you only need a snapshot for a single database, Oracle implemented it this way: at installation, the ACFS mount is created and a snapshot is taken while it is still empty; then each database created gets its own snapshot. This means that in each snapshot you access only one database. There is no overhead and no additional copies because the master is empty.
Then came multitenant, where you can snapshot at PDB level for thin cloning (create pluggable database … snapshot copy). But a multitenant CDB cannot be created on a snapshot: the CDB must be at the root level of the ACFS filesystem. In ODA X6, an ACFS filesystem is created for each database, so when you thin clone a PDB, a snapshot is taken of the whole database. And this one is not empty: any write to it incurs an additional copy and its overhead.
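
To picture the ODA X5 scheme, here is a minimal sketch, with hypothetical paths and snapshot names, of how a snapshot-per-database layout can be built on an empty ACFS master with acfsutil:

# sketch only: take a writable snapshot of the (still empty) ACFS mount
# for each database, then create the database inside its own snapshot
acfsutil snap create -w DB1 /u02/app/oracle/oradata
acfsutil snap create -w DB2 /u02/app/oracle/oradata
# DB1 datafiles then live under /u02/app/oracle/oradata/.ACFS/snaps/DB1/...
# writes from DB1 never trigger copies for DB2, because the parent
# filesystem (the master) stays empty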

There’s more info about ACFS, copy-on-write which is not copy-on-write but redirect-on-write, and performance overhead in the excellent presentation and demo from Ludovico Caldara. Here I’ll show the snapshot overhead in multitenant when writing to the master, the clone, and the others.

PDB snapshots on ACFS

I start with a brand new CDB on ACFS with no snapshots:

[oracle@rac1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb3 16G 5.0G 9.9G 34% /
tmpfs 7.3G 631M 6.7G 9% /dev/shm
/dev/xvdb1 477M 118M 330M 27% /boot
/dev/xvdc1 69G 22G 44G 34% /u01
/dev/asm/data-52 25G 6.6G 19G 27% /u02
/dev/asm/fra-401 18G 3.4G 15G 19% /u03
/dev/asm/redo-358 19G 8.2G 11G 44% /u04

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

This is what is created by the Oracle Public Cloud for a RAC DBaaS.

I have a PDB1 pluggable database.

I create another one, PDBx, which will be totally independent.

SQL> create pluggable database PDBx admin user admin identified by "Ach1z0#d";
Pluggable database created.
SQL> alter pluggable database PDBx open;
Pluggable Database opened

I create a thin clone pluggable database, PDB2, using PDB1 as the master:

SQL> create pluggable database PDB2 from PDB1 snapshot copy;
Pluggable database created.
SQL> alter pluggable database PDB2 open;
Pluggable Database opened

Here are my pluggable databases:

SQL> select pdb_name,GUID from dba_pdbs;
 
PDB_NAME GUID
-------- --------------------------------
PDB$SEED 3360B2A306C60098E053276DD60A9928
PDB1 3BDAA124651F103DE0531ADBC40A5DD3
PDBX 3BDCCBE4C1B64A5AE0531ADBC40ADBB7
PDB2 3BDCCBE4C1B74A5AE0531ADBC40ADBB7

Because PDB2 is a snapshot clone, a snapshot has been created on /u02, the ACFS filesystem where the datafiles are:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 3588096 ( 3.42 MB )

Space usage is minimal here as no write happened yet.

datafiles

Here are the datafiles of my CDB, so we can see whether the PDB2 ones are on the snapshot:

RMAN> report schema;
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 780 SYSTEM YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_system_cwxwcgz4_.dbf
2 260 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_system_cwxwbzrd_.dbf
3 1030 SYSAUX NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_cwxw98jl_.dbf
4 760 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_sysaux_cwxwdof3_.dbf
7 545 UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_cwxwdl6s_.dbf
8 200 UNDOTBS2 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs2_cwxwrw7y_.dbf
9 370 PDB1:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
10 800 PDB1:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
11 5 USERS NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_users_cwxxop2q_.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf
49 370 PDBX:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6688l_.dbf
50 800 PDBX:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy6688r_.dbf
51 5 PDBX:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy6688z_.dbf
52 370 PDB2:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6725s_.dbf
53 800 PDB2:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy67261_.dbf
54 5 PDB2:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy67268_.dbf

The PDB2 datafiles are actually symbolic links to the snapshot:


[oracle@rac1 ~]$ ls -l /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile
/u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile:
total 62484
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_sysaux_cwy67261_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_system_cwy6725s_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
-rw-r----- 1 oracle oinstall 63971328 Sep 6 19:28 o1_mf_temp_cwy67267_.dbf
lrwxrwxrwx 1 oracle oinstall 141 Sep 6 19:28 o1_mf_users_cwy67268_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf

So you have a snapshot on /u02 that contains all the CDB datafiles, but only the PDB2 datafiles will be read and written through the snapshot (via the symbolic links). The other CDB files are included in the snapshot for no reason: nothing will ever read or write them there. They are there only because ACFS cannot snapshot a folder or a set of files; it snapshots only the whole filesystem.
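
A quick way to see which datafiles have been redirected into a snapshot is to list the symbolic links under the database directory (a sketch using the paths from above):

# list the datafiles that are symlinks pointing into an ACFS snapshot
find /u02/app/oracle/oradata/ORCL -name "*.dbf" -type l -exec ls -l {} \;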

write on master

For the moment, the snapshot is small: the blocks are shared.

If I write 100MB on the master (PDB1), those blocks will be copied in order to persist the old image for the snapshot:

SQL> alter session set container=PDB1;
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 105025536 ( 100.16 MB )

The snapshot size increased by the volume that was written, which is expected: the old image is required by PDB2.

write on thin clone

If I write to the clone, copy has to happen as well:

SQL> alter session set container=PDB2;
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 211275776 ( 201.48 MB )

So, because I’ve written 200MB to blocks involved in the snapshots, the snapshot size is now 200MB.

However, look at the way I did it, with truncate and insert: I’m writing to the same blocks as I did when writing on PDB1. To be sure, I checked it from DBA_EXTENTS and got the same result in both PDBs:
SQL> select file_id,listagg(block_id,',')within group(order by block_id),blocks from dba_extents where segment_name='DEMO' and segment_type='TABLE' group by file_id,blocks;
 
FILE_ID LISTAGG(BLOCK_ID,',')WITHINGROUP(ORDERBYBLOCK_ID) BLOCKS
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ----------
        64 33024,33032,33040,33048,33056,33064,33072,33080,33088,33096,33104,33112,33120,33128,33136,33144                                                                    8
        64 33152,33280,33408,33536,33664,33792,33920,34048,34176,34304,34432,34560,34688,34816,34944,35072,35200,35328,35456,35584,35712,35840,35968,36096,36224,           128
           36352,36480,36608,36736,36864,36992,37120,37248,37376,37504,37632,37760,37888,38016,38144,38272,38400,38528,38656,38784,38912,39040,39168,39296,39424,
           39552,39680,39808,39936,40064,40192,40320,40448,40576,40704,40832,40960,41088
        64 41216,42240,43264,44288,45312                                                                                                                                    1024

So why do I have an additional 100MB in my snapshot? Writing to the original blocks should have been sufficient, since they had already been redirected to new ones by the earlier write on PDB1. But because of the ACFS snapshot, the previous image is kept as well. In addition to the current state of PDB1 and PDB2, the snapshot also keeps the PDB1 blocks as they were at the time of the PDB2 clone. Who needs that?

Ok. This is not a big issue if we consider that you usually don’t write on the master, because it’s the master.

write on other PDB

Remember that multitenant is for consolidation. You don’t use a CDB only for the master and its clones; you may want to host other PDBs. If the snapshot covered only PDB1 and PDB2, writes to other files would not be affected: no write overhead and no additional storage. However, because the snapshot was taken on the whole filesystem, any write to it copies the blocks, even those that have nothing to do with the thin-cloned PDB. Here I’m writing 100MB to PDBx, which has nothing in common with PDB1 or PDB2:

SQL> alter session set container=PDBx;
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 311214080 ( 296.79 MB )

This is an additional 100MB. The before image of PDBx has been saved, for no reason: we will never read this previous image.

snapshot keeps growing

After a few minutes without any user activity, the snapshot has grown further:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 332947456 ( 317.52 MB )

On the /u02 filesystem there are all the CDB files: SYSTEM, UNDO, the controlfile, etc. They have activity, and they are copied when written, once again for no reason.
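
To follow that growth over time, a simple loop on acfsutil is enough (the one-minute interval here is arbitrary):

# print the snapshot space usage every minute
while true; do
  acfsutil snap info /u02 | grep 'space usage'
  sleep 60
done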

drop thin clone

Only when I drop PDB2 is this space released:

SQL> alter pluggable database pdb2 close;
Pluggable database altered.
SQL> drop pluggable database pdb2 including datafiles;
Pluggable database dropped.

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

So what?

There’s no bug here. It works as designed because ACFS snapshot is at filesystem level. If you want to use multitenant thin provisioning, the recommendation is to dedicate a CDB for the master and its clones. Any other PDB will have their writes copied to the snapshot for no reason. Writes to common files: UNDO (in 12.1), SYSTEM, SYSAUX will also be concerned. The duration of the clones should be short life, refreshed frequently. And of course, thin cloning is not for production. Very few snapshot/compression/clone technologies can be used in production. Look at storage vendors solutions for that (XtremIO for example).

 

The post Multitenant thin provisioning: PDB snapshots on ACFS appeared first on Blog dbi services.

Using logdump to Troubleshoot the Oracle GoldenGate for Big Data Kafka handler

Rittman Mead Consulting - Tue, 2016-09-06 15:11

Oracle GoldenGate for Big Data (OGG BD) supports sending transactions as messages to Kafka topics, both through the native Oracle handler and through a connector into Confluent's Kafka Connect. During some research I was doing with it I hit an interesting problem, which I am going to demonstrate here along with the troubleshooting tools that may be useful to others encountering similar issues.

The source for the data is Swingbench running against an Oracle 12c database (a pluggable database). OGG has been configured as follows:

  • Extract

    EXTRACT EXT1
    USERID SYSTEM, PASSWORD welcome1
    EXTTRAIL ./dirdat/lt
    SOURCECATALOG ORCL
    TABLE SOE.*;
    
  • Datapump (to local machine, an installation of OGG BD)

    EXTRACT EXTDP1
    RMTHOST LOCALHOST, MGRPORT 7810
    RMTTRAIL ./dirdat/rt
    SOURCECATALOG ORCL
    TABLE SOE.*;
    
  • Replicat

    REPLICAT rkafka
    TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
    REPORTCOUNT EVERY 1 MINUTES, RATE
    GROUPTRANSOPS 10000
    MAP *.*.*, TARGET *.*.*;
    

When I start the replicat, it abends almost straight away. In ggserr.log I see:

ERROR   OGG-15051  Oracle GoldenGate Delivery, rkafka.prm:  Java or JNI exception:
oracle.goldengate.util.GGException: Kafka Handler failed to format and process operation: table=[ORCL.SOE.CUSTOMERS], op pos=00000000000000006636, tx pos=00000000000000002437, op ts=2016-09-06 10:59:23.000589.
ERROR   OGG-01668  Oracle GoldenGate Delivery, rkafka.prm:  PROCESS ABENDING.

Within the properties file for the Kafka handler (dirprm/kafka.props) I increased the logging level

gg.log.level=DEBUG

and restarted the replicat. Now we get a debug file written to dirrpt/RKAFKA_debug_log4j.log, which includes the successful work:

[...]
[main] DEBUG (UserExitDataSource.java:1190) - Received txInd is: WHOLE RBA is: 4939
[main] DEBUG (KafkaHandler.java:484) - Process operation: table=[ORCL.SOE.LOGON], op pos=00000000000000004939, tx pos=00000000000000002437, op ts=2016-09-06 10:59:23.000179
[main] DEBUG (KafkaHandler.java:529) - Creating the producer record and sending to Kafka Producer
[main] DEBUG (NonBlockingKafkaProducer.java:64) - Sending producer record to Non Blocking kafka producer
[main] DEBUG (NonBlockingKafkaProducer.java:76) - NBKP:send(): Returning status: OK
[main] DEBUG (PendingOpGroup.java:316) - now ready to checkpoint? false (was ready? false): {pendingOps=18, groupSize=0, timer=0:00:00.000 [total = 0 ms ]}
[main] DEBUG (UserExitDataSource.java:1401) - applyQueuedConfigurationChanges: on Operation? false
[main] DEBUG (UserExitDataSource.java:2342) - UpdateActivityTime call received
[...]

but then a failure, matching the more high-level message we got previously in ggserr.log:

DEBUG 2016-09-06 15:50:52,909 [main] DEBUG (KafkaHandler.java:484) - Process operation: table=[ORCL.SOE.CUSTOMERS], op pos=00000000000000006636, tx pos=00000000000000002437, op ts=2016-09-06 10:59:23.000589
INFO 2016-09-06 15:50:52,910 [main] INFO  (AvroOperationSchemaGenerator.java:36) - Generating the Avro schema for the table [ORCL.SOE.CUSTOMERS].
ERROR 2016-09-06 15:50:52,914 [main] ERROR (AvroOperationFormatter.java:188) - The Avro Formatter formatOp operation failed.
org.apache.avro.SchemaParseException: Illegal character in: SYS_NC00017$
    at org.apache.avro.Schema.validateName(Schema.java:1083)
    at org.apache.avro.Schema.access$200(Schema.java:79)
    at org.apache.avro.Schema$Field.<init>(Schema.java:372)
    at org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
    at org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2116)
    at org.apache.avro.SchemaBuilder$FieldBuilder.access$5300(SchemaBuilder.java:2034)

So from this we've got the table (ORCL.SOE.CUSTOMERS), log offset (6636), and from the stack trace even a hint at what the issue may be (something to do with the Schema, and a column called SYS_NC00017$).

Now let's see if we can find out more. A colleague of mine pointed me towards Logdump, which is well documented and also covered by Oracle's A-Team blog here.

Launch logdump from the OGG BD folder, ideally using rlwrap so that you can scroll and search through command history:

$ rlwrap ./logdump

Oracle GoldenGate Log File Dump Utility
Version 12.2.0.1.160419 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160430.1401

Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.



Logdump 1 >

Then enter the following, which will determine what information is shown:
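
For example, to show the record header, the column data, and any user tokens (the exact choice of options is a matter of preference):

GHDR ON
DETAIL DATA
USERTOKEN ON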

After that, specify the trail file to be examined:

OPEN /u01/ogg-bd/dirdat/rt000000000

You can enter next (or simply n) to view the records one at a time:

Logdump 6 >OPEN /u01/ogg-bd/dirdat/rt000000000
Current LogTrail is /u01/ogg-bd/dirdat/rt000000000
Logdump 7 >n

2016/09/02 15:54:48.329.147 FileHeader           Len  1451 RBA 0
Name: *FileHeader*
3000 0338 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..80...GG..TL..1...
0005 3200 0004 2000 0000 3300 0008 02f2 61ba f7c1 | ..2... ...3.....a...
f3bb 3400 002d 002b 7572 693a 6269 6764 6174 616c | ..4..-.+uri:bigdatal
6974 653a 6c6f 6361 6c64 6f6d 6169 6e3a 3a75 3031 | ite:localdomain::u01
3a6f 6767 3a45 5854 4450 3135 0000 2f35 0000 2b00 | :ogg:EXTDP15../5..+.
2975 7269 3a62 6967 6461 7461 6c69 7465 3a6c 6f63 | )uri:bigdatalite:loc
616c 646f 6d61 696e 3a3a 7530 313a 6f67 673a 4558 | aldomain::u01:ogg:EX

Logdump 8 >n
-------------------------------------------------------------------
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :     0  (x0000)   IO Time    : 2016/09/02 15:54:47.562.301
IOType     :   151  (x97)     OrigNode   :     0  (x00)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :          0       AuditPos   : 0
Continued  :     N  (x00)     RecCount   :     0  (x00)

2016/09/02 15:54:47.562.301 RestartOK            Len     0 RBA 1459
Name:
After  Image:                                             Partition 0   G  s

GGS tokens:
4e00 0004 4558 5431                               | N...EXT1

But ploughing through the file a transaction at a time is no fun, so let's zero in on the problem record. We can either jump straight to the transaction offset that we got from the error log using POSITION (or POS) followed by NEXT:

Logdump 12 >pos 6636
Reading forward from RBA 6636
Logdump 13 >n
-------------------------------------------------------------------
Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :   256  (x0100)   IO Time    : 2016/09/06 11:59:23.000.589
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x00)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        393       AuditPos   : 30266384
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/09/06 11:59:23.000.589 Insert               Len   256 RBA 6636
Name: ORCL.SOE.CUSTOMERS  (TDR Index: 3)
After  Image:                                             Partition 12   G  b
0000 000a 0000 0000 0000 0001 86a1 0001 000a 0000 | ....................
0006 616e 7477 616e 0002 000b 0000 0007 7361 6d70 | ..antwan........samp
736f 6e00 0300 0600 0000 0275 7300 0400 0b00 0000 | son........us.......
0741 4d45 5249 4341 0005 000a 0000 0000 0000 0000 | .AMERICA............
8980 0006 001d 0000 0019 616e 7477 616e 2e73 616d | ..........antwan.sam
7073 6f6e 406f 7261 636c 652e 636f 6d00 0700 0a00 | pson@oracle.com.....
0000 0000 0000 0000 9500 0800 1500 0032 3031 362d | ...............2016-
Column     0 (x0000), Len    10 (x000a)
0000 0000 0000 0001 86a1                          | ..........
Column     1 (x0001), Len    10 (x000a)
0000 0006 616e 7477 616e                          | ....antwan
Column     2 (x0002), Len    11 (x000b)
0000 0007 7361 6d70 736f 6e                       | ....sampson
Column     3 (x0003), Len     6 (x0006)
0000 0002 7573                                    | ....us
Column     4 (x0004), Len    11 (x000b)
0000 0007 414d 4552 4943 41                       | ....AMERICA
Column     5 (x0005), Len    10 (x000a)
0000 0000 0000 0000 8980                          | ..........
Column     6 (x0006), Len    29 (x001d)
0000 0019 616e 7477 616e 2e73 616d 7073 6f6e 406f | ....antwan.sampson@o
7261 636c 652e 636f 6d                            | racle.com
Column     7 (x0007), Len    10 (x000a)
0000 0000 0000 0000 0095                          | ..........
Column     8 (x0008), Len    21 (x0015)
0000 3230 3136 2d30 392d 3036 3a30 303a 3030 3a30 | ..2016-09-06:00:00:0
30                                                | 0
Column     9 (x0009), Len    14 (x000e)
0000 000a 4f63 6361 7369 6f6e 616c                | ....Occasional
Column    10 (x000a), Len     9 (x0009)
0000 0005 4d75 7369 63                            | ....Music
Column    11 (x000b), Len    21 (x0015)
0000 3139 3635 2d30 352d 3130 3a30 303a 3030 3a30 | ..1965-05-10:00:00:0
30                                                | 0
Column    12 (x000c), Len     5 (x0005)
0000 0001 59                                      | ....Y
Column    13 (x000d), Len     5 (x0005)
0000 0001 4e                                      | ....N
Column    14 (x000e), Len    10 (x000a)
0000 0000 0000 0002 49f1                          | ........I.
Column    15 (x000f), Len    10 (x000a)
0000 0000 0000 0002 49f1                          | ........I.

or we can also use the FILTER command, but we'll come back to that in a moment. First let's have a look at the record in question that's causing the Kafka handler to abend. It's shown in full above.

The table name matches - ORCL.SOE.CUSTOMERS - and we can see that the operation was an INSERT, along with the values for sixteen columns. Now, since we know that the error thrown by the Kafka handler was something to do with the schema and columns, let's take a step back. The record we're looking at is the actual data record, but the trail file will also contain metadata about the table itself, which will have been read by the handler. We can look for all records in the trail file relating to this table using the FILTER command (preceded by a POS 0 to move the read back to the beginning of the file):

Logdump 37 >POS 0
Reading forward from RBA 0
Logdump 38 >FILTER INCLUDE FILENAME ORCL.SOE.CUSTOMERS
Logdump 39 >N
-------------------------------------------------------------------
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :  1464  (x05b8)   IO Time    : 2016/09/06 11:59:26.461.886
IOType     :   170  (xaa)     OrigNode   :     2  (x02)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
DDR/TDR Idx:   (002, 003)     AuditPos   : 30266384
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/09/06 11:59:26.461.886 Metadata             Len 1464 RBA 5103
Name: ORCL.SOE.CUSTOMERS
*
1)Name          2)Data Type        3)External Length  4)Fetch Offset      5)Scale         6)Level
7)Null          8)Bump if Odd      9)Internal Length 10)Binary Length    11)Table Length 12)Most Sig DT
13)Least Sig DT 14)High Precision  15)Low Precision   16)Elementary Item  17)Occurs       18)Key Column
19)Sub DataType 20)Native DataType 21)Character Set   22)Character Length 23)LOB Type     24)Partial Type
*
TDR version: 1
Definition for table ORCL.SOE.CUSTOMERS
Record Length: 542
Columns: 18
CUSTOMER_ID        134     13        0  0  0 1 0      8      8      8 0 0 0 0 1    0 1   3    2       -1      0 0 0
CUST_FIRST_NAME     64     40       12  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
CUST_LAST_NAME      64     40       58  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
NLS_LANGUAGE        64      3      104  0  0 1 0      3      3      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
NLS_TERRITORY       64     30      112  0  0 1 0     30     30      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
CREDIT_LIMIT       134     11      148  2  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
CUST_EMAIL          64    100      160  0  0 1 0    100    100      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
ACCOUNT_MGR_ID     134     13      266  0  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
CUSTOMER_SINCE     192     19      278  0  0 1 0     19     19     19 0 5 0 0 1    0 0   0   12       -1      0 0 0
CUSTOMER_CLASS      64     40      300  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
SUGGESTIONS         64     40      346  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
DOB                192     19      392  0  0 1 0     19     19     19 0 5 0 0 1    0 0   0   12       -1      0 0 0
MAILSHOT            64      1      414  0  0 1 0      1      1      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
PARTNER_MAILSHOT    64      1      420  0  0 1 0      1      1      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
PREFERRED_ADDRESS  134     13      426  0  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
PREFERRED_CARD     134     13      438  0  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
SYS_NC00017$        64     40      450  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
SYS_NC00018$        64     40      496  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
End of definition

I spy with my little eye ... SYS_NC00017$, which was named in the debug log that we saw above. Also note:

Columns: 18

So the OGG metadata for the table shows it with eighteen columns, including two SYS_[...]. If you look at the data shown in the record at position 6636 above you'll see that there are only sixteen columns of data. Let's now check out the schema for the table in question in Oracle.

SQL> select COLUMN_NAME,DATA_TYPE from user_tab_columns where table_name = 'CUSTOMERS';

COLUMN_NAME          DATA_TYPE
-------------------- ----------------
PREFERRED_CARD       NUMBER
PREFERRED_ADDRESS    NUMBER
PARTNER_MAILSHOT     VARCHAR2
MAILSHOT             VARCHAR2
DOB                  DATE
SUGGESTIONS          VARCHAR2
CUSTOMER_CLASS       VARCHAR2
CUSTOMER_SINCE       DATE
ACCOUNT_MGR_ID       NUMBER
CUST_EMAIL           VARCHAR2
CREDIT_LIMIT         NUMBER
NLS_TERRITORY        VARCHAR2
NLS_LANGUAGE         VARCHAR2
CUST_LAST_NAME       VARCHAR2
CUST_FIRST_NAME      VARCHAR2
CUSTOMER_ID          NUMBER

16 rows selected.

Sixteen columns. Not eighteen, as the OGG trail file Metadata record showed. Hmmm.

Interestingly, Google throws up a match for this very column in which the output of Dbvisit's replicate tool run against the Swingbench schema announces:

Column SYS_NC00017$ is special: virtual column. Excluding.
Column SYS_NC00017$ is special: hidden column. Excluding.
Column SYS_NC00017$ is special: system-generated column. Excluding.

That it's a hidden column we'd pretty much guessed given its elusiveness. But - virtual column? system generated? This then prompted me to look at the indices on the table:

SQL> SELECT TABLE_NAME, INDEX_NAME, COLUMN_NAME
     FROM USER_IND_COLUMNS 
     WHERE TABLE_NAME = 'CUSTOMERS';

TABLE_NAME       INDEX_NAME                       COLUMN_NAME
---------------- -------------------------------- --------------------
CUSTOMERS        CUST_ACCOUNT_MANAGER_IX          ACCOUNT_MGR_ID
CUSTOMERS        CUST_EMAIL_IX                    CUST_EMAIL
CUSTOMERS        CUST_FUNC_LOWER_NAME_IX          SYS_NC00017$
CUSTOMERS        CUST_FUNC_LOWER_NAME_IX          SYS_NC00018$
CUSTOMERS        CUSTOMERS_PK                     CUSTOMER_ID
CUSTOMERS        CUST_DOB_IX                      DOB

Aha! I spy system generated columns! Let's take a closer look at the CUST_FUNC_LOWER_NAME_IX index:

SQL> SELECT INDEX_NAME, INDEX_TYPE 
     FROM USER_INDEXES 
     WHERE TABLE_NAME = 'CUSTOMERS' 
       AND INDEX_NAME='CUST_FUNC_LOWER_NAME_IX';

INDEX_NAME                       INDEX_TYPE
-------------------------------- ---------------------------
CUST_FUNC_LOWER_NAME_IX          FUNCTION-BASED NORMAL

So we have a function-based index, which behind the scenes appears to implement itself via two hidden columns. My guess is that the Kafka handler code is taking the metadata definition record of 18 columns too literally, expecting to find a value for each of them in the transaction record when it reads it, and falling over when it can't. Similar behaviour happens with the Kafka Connect OGG connector when it tries to process this particular record:

ERROR 2016-08-30 17:25:09,548 [main] ERROR (KafkaConnectFormatter.java:251) - The Kafka Connect Row Formatter formatOp operation failed.
java.lang.IndexOutOfBoundsException: Index: 16, Size: 16
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at oracle.goldengate.datasource.meta.TableMetaData.getColumnMetaData(TableMetaData.java:73)
at oracle.goldengate.kafkaconnect.formatter.KafkaConnectFormatter.formatAfterValues(KafkaConnectFormatter.java:329)
at oracle.goldengate.kafkaconnect.formatter.KafkaConnectFormatter.formatAfterValuesOp(KafkaConnectFormatter.java:278)
at oracle.goldengate.kafkaconnect.formatter.KafkaConnectFormatter.formatOp(KafkaConnectFormatter.java:212)
at oracle.goldengate.kafkaconnect.KafkaConnectHandler.formatOp(KafkaConnectHandler.java:309)
at oracle.goldengate.kafkaconnect.KafkaConnectHandler.transactionCommit(KafkaConnectHandler.java:186)
at oracle.goldengate.datasource.DsEventManager$2.send(DsEventManager.java:414)
at oracle.goldengate.datasource.DsEventManager$EventDispatcher.distributeEvent(DsEventManager.java:231)
at oracle.goldengate.datasource.DsEventManager.fireTransactionCommit(DsEventManager.java:422)
at oracle.goldengate.datasource.AbstractDataSource.fireTransactionCommit(AbstractDataSource.java:490)
at oracle.goldengate.datasource.UserExitDataSource.commitActiveTransaction(UserExitDataSource.java:1582)
at oracle.goldengate.datasource.UserExitDataSource.commitTx(UserExitDataSource.java:1525)
ERROR 2016-08-30 17:25:09,550 [main] ERROR (KafkaConnectHandler.java:312) - Confluent Kafka Handler failed to format and process operation: table=[PDB.SOE.CUSTOMERS], op pos=00000000000000008091, tx pos=00000000000000003011, op ts=2016-07-29 14:59:47.000137
java.lang.IndexOutOfBoundsException: Index: 16, Size: 16
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at oracle.goldengate.datasource.meta.TableMetaData.getColumnMetaData(TableMetaData.java:73)

Note the IndexOutOfBoundsException error.
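
As a side note, these hidden columns can also be seen directly in the data dictionary: USER_TAB_COLS (unlike USER_TAB_COLUMNS) includes them, along with the expression they are built on. A quick check along these lines would show them:

SQL> SELECT COLUMN_NAME, HIDDEN_COLUMN, VIRTUAL_COLUMN, DATA_DEFAULT
     FROM   USER_TAB_COLS
     WHERE  TABLE_NAME = 'CUSTOMERS'
     AND    HIDDEN_COLUMN = 'YES';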

Working around the error

I'm in the fortunate position of being in a sandbox environment in which I can modify the source schema to suit my needs - so I just dropped the function-based index. In reality this evidently would not be a good approach on the assumption that the index was there for a good reason!

DROP INDEX "SOE"."CUST_FUNC_LOWER_NAME_IX";

Having run this, we still have the question of how to get the replicat working. To do this we could go the whole hog and drop and recreate the extracts; or, we can get the replicat to skip the section of the trail file containing the records that we can't process. Assuming you've run the above DROP and then written more data to the table, there'll be a second metadata record in the OGG trail file. We can use the FILTER command to find this:

Logdump 69 >FILTER INCLUDE FILENAME ORCL.SOE.CUSTOMERS;FILTER EXCLUDE RECTYPE 5,134;FILTER MATCH ALL

This shows records for just this table, and excludes record types 5 and 134 (INSERT and UPDATE respectively). We can then scan through the file with the NEXT command and see:

Logdump 72 >n
Scanned     10000 records, RBA    2365691, 2016/09/06 12:12:16.001.191
Scanned     20000 records, RBA    4716374, 2016/09/06 14:48:54.971.161
Scanned     30000 records, RBA    7067022, 2016/09/06 14:57:34.000.170
Scanned     40000 records, RBA    9413177, 2016/09/06 15:07:41.000.186
Scanned     50000 records, RBA   11773709, 2016/09/06 15:16:07.000.594
Scanned     60000 records, RBA   14126750, 2016/09/06 15:24:38.001.063
-------------------------------------------------------------------
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :  1308  (x051c)   IO Time    : 2016/09/06 17:11:21.717.818
IOType     :   170  (xaa)     OrigNode   :     2  (x02)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
DDR/TDR Idx:   (002, 009)     AuditPos   : 9986576
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/09/06 17:11:21.717.818 Metadata             Len 1308 RBA 14702330
Name: ORCL.SOE.CUSTOMERS
*
1)Name          2)Data Type        3)External Length  4)Fetch Offset      5)Scale         6)Level
7)Null          8)Bump if Odd      9)Internal Length 10)Binary Length    11)Table Length 12)Most Sig DT
13)Least Sig DT 14)High Precision  15)Low Precision   16)Elementary Item  17)Occurs       18)Key Column
19)Sub DataType 20)Native DataType 21)Character Set   22)Character Length 23)LOB Type     24)Partial Type
*
TDR version: 1
Definition for table ORCL.SOE.CUSTOMERS
Record Length: 450
Columns: 16
CUSTOMER_ID        134     13        0  0  0 1 0      8      8      8 0 0 0 0 1    0 1   3    2       -1      0 0 0
CUST_FIRST_NAME     64     40       12  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
CUST_LAST_NAME      64     40       58  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
NLS_LANGUAGE        64      3      104  0  0 1 0      3      3      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
NLS_TERRITORY       64     30      112  0  0 1 0     30     30      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
CREDIT_LIMIT       134     11      148  2  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
CUST_EMAIL          64    100      160  0  0 1 0    100    100      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
ACCOUNT_MGR_ID     134     13      266  0  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
CUSTOMER_SINCE     192     19      278  0  0 1 0     19     19     19 0 5 0 0 1    0 0   0   12       -1      0 0 0
CUSTOMER_CLASS      64     40      300  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
SUGGESTIONS         64     40      346  0  0 1 0     40     40      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
DOB                192     19      392  0  0 1 0     19     19     19 0 5 0 0 1    0 0   0   12       -1      0 0 0
MAILSHOT            64      1      414  0  0 1 0      1      1      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
PARTNER_MAILSHOT    64      1      420  0  0 1 0      1      1      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
PREFERRED_ADDRESS  134     13      426  0  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
PREFERRED_CARD     134     13      438  0  0 1 0      8      8      8 0 0 0 0 1    0 0   3    2       -1      0 0 0
End of definition


Filtering suppressed  62444 records

Here's the new table metadata, with sixteen columns only and minus the SYS_[...] columns. Its position, as shown in the record above, is RBA 14702330. To get the commit sequence number (CSN), which we can use to restart the replicat, we need to enable the display of OGG-generated data in the records (ref):

GGSTOKEN ON
GGSTOKEN DETAIL

The Metadata record itself doesn't have a CSN, so disable the filtering

FILTER OFF

and then go to the next record

Logdump 123 >FILTER OFF
Logdump 124 >N
-------------------------------------------------------------------
Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :   255  (x00ff)   IO Time    : 2016/09/06 17:11:18.000.200
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x00)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        396       AuditPos   : 9986576
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/09/06 17:11:18.000.200 Insert               Len   255 RBA 14703707
Name: ORCL.SOE.CUSTOMERS  (TDR Index: 9)
After  Image:                                             Partition 12   G  b
0000 000a 0000 0000 0000 0009 27c1 0001 000b 0000 | ............'.......
0007 6775 7374 6176 6f00 0200 0a00 0000 0663 6173 | ..gustavo........cas

[...]

GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
4141 4166 632f 4141 4141 4141 434d 6541 4162 0001 | AAAfc/AAAAAACMeAAb..
TokenID x4c 'L' LOGCSN           Info x00  Length    8
3131 3637 3235 3433                               | 11672543
TokenID x36 '6' TRANID           Info x00  Length    9
3236 2e32 372e 3139 35                            | 26.27.195
TokenID x69 'i' ORATHREADID      Info x01  Length    2
0001                                              | ..

It's an INSERT record for our table, with the LOGCSN shown as 11672543.

So if we're happy to ditch all the data written to the trail file from when it was set up until the point at which we 'fixed' the virtual column issue, we can run this in GGSCI:

GGSCI (bigdatalite.localdomain) 44> start rkafka atcsn 0.11672543

Sending START request to MANAGER ...
REPLICAT RKAFKA starting

GGSCI (bigdatalite.localdomain) 49> info rkafka

REPLICAT   RKAFKA    Last Started 2016-09-06 17:32   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:09 ago)
Process ID           25860
Log Read Checkpoint  File ./dirdat/rt000000000
2016-09-06 17:11:22.000764  RBA 14724721

and over in Kafka itself we can now see the records coming through:

$ kafka-console-consumer --zookeeper localhost --topic ORCL.SOE.LOGON
ORCL.SOE.LOGONI42016-09-02 14:56:26.00041142016-09-06T15:50:52.194000(00000000000000002010LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-02:15:56:25
ORCL.SOE.LOGONI42016-09-05 14:39:02.00040942016-09-06T15:50:52.875000(00000000000000002437LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:15:39:00
ORCL.SOE.LOGONI42016-09-05 14:44:15.00046042016-09-06T15:50:52.877000(00000000000000002593LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:15:44:13
ORCL.SOE.LOGONI42016-09-05 14:46:16.00043642016-09-06T15:50:52.879000(00000000000000002748LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:15:46:11
ORCL.SOE.LOGONI42016-09-05 16:17:25.00014242016-09-06T15:50:52.881000(00000000000000002903LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:17:17:24
ORCL.SOE.LOGONI42016-09-05 16:22:38.00040142016-09-06T15:50:52.883000(00000000000000003058LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:17:22:37
ORCL.SOE.LOGONI42016-09-05 16:25:16.00015142016-09-06T15:50:52.885000(00000000000000003215LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:17:25:16
ORCL.SOE.LOGONI42016-09-05 16:26:25.00017542016-09-06T15:50:52.886000(00000000000000003372LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:17:26:25
ORCL.SOE.LOGONI42016-09-05 16:27:20.00018642016-09-06T15:50:52.888000(00000000000000003527LOGON_IDCUSTOMER_IDLOGON_DATE4242&2016-09-05:17:27:19

A Better Workaround?

Per Handling Other Database Properties, virtual columns can be handled by using the TABLE FETCHCOLS configuration on the extract to read the virtual values, and MAP on the replicat to map them to actual columns on the target. Unfortunately, the system-generated column name isn't accepted by OGG in the FETCHCOLS syntax:

INFO    OGG-06507  Oracle GoldenGate Capture for Oracle, ext1.prm:  MAP (TABLE) resolved (entry ORCL.SOE.CUSTOMERS): TABLE "ORCL"."SOE"."CUSTOMERS", FETCHCOLS(SYS_NC00017$).
ERROR   OGG-00366  Oracle GoldenGate Capture for Oracle, ext1.prm:  Invalid column specified in FETCHCOLS/FETCHCOLSEXCEPT: SYS_NC00017$.
ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, ext1.prm:  PROCESS ABENDING.

Another tack to try, given that in our case we simply want to make sure the virtual columns don't get picked up at all, is to ignore the column altogether. Unfortunately, from my experimentation with COLSEXCEPT it appears that OGG excludes specified columns from the record data, but not from the initial metadata (which is what causes the above problems in the first place). Even if this had worked, COLSEXCEPT doesn't like the system-generated column name either, abending the Extract process with:

INFO    OGG-06507  Oracle GoldenGate Capture for Oracle, ext1.prm:  MAP (TABLE) resolved (entry ORCL.SOE.CUSTOMERS): TABLE "ORCL"."SOE"."CUSTOMERS", COLSEXCEPT(SYS_NC00017$).
ERROR   OGG-00366  Oracle GoldenGate Capture for Oracle, ext1.prm:  Invalid column specified in COLS/COLSEXCEPT: SYS_NC00017$.
ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, ext1.prm:  PROCESS ABENDING.
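
For reference, the TABLE parameters being attempted in ext1.prm were along these lines (a sketch, shown with SOURCECATALOG as in the original extract; OGG rejects the system-generated column name in both cases):

SOURCECATALOG ORCL
TABLE SOE.CUSTOMERS, FETCHCOLS (SYS_NC00017$);
-- or, for the second attempt:
TABLE SOE.CUSTOMERS, COLSEXCEPT (SYS_NC00017$);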

Conclusion

Oracle GoldenGate is a fantastic way to stream changes from many different RDBMS to a variety of targets, including Kafka. The potential that this offers in terms of data integration and pipelines is great. This post has hopefully shed a little bit of light on how to go about troubleshooting issues that can occur when using this set of tools. Do let me know in the comments below if you have better suggestions for how to deal with the virtual columns created as a result of the function-based index!

Categories: BI & Warehousing

New SHA-256 Certificate Now Available for WebADI and Report Manager

Steven Chan - Tue, 2016-09-06 13:53

WebADI and Report Manager need a digital certificate to integrate with Microsoft Office.  This digital certificate is used to sign the Visual Basic for Applications (VBA) code.

A new SHA-256 digital certificate is now available for EBS 12.1 and 12.2.  This replaces the previous certificate that expired in August 2016. 

There are four different updates available -- you should pick the patch that corresponds to your EBS version:

Known Issues

Office 2007 does not recognize documents that have been digitally signed using SHA-256 certificates. In such cases, adding the document's storage location to 'Trusted Locations' avoids the issue. Further information is available through the Microsoft document:

References

Categories: APPS Blogs

Blast from the Past: I Don't Like Your Examples!

FeuerThoughts - Tue, 2016-09-06 10:23
This was originally written in 2000; I thought you might like to check it out in 2016.
I Don't Like Your Examples!
10/11/2000

I have been writing books about the Oracle PL/SQL programming language for the last five years. In 1999, O'Reilly published my fourth book, Oracle PL/SQL Programming Guide to Oracle8i Features, which created a bit of an uproar among my readership, caused considerable discussion within O'Reilly, and led to my writing this article.


Why did this book cause a sensation? Consider this excerpt from Chapter 2:
Let's look at a simple example. Suppose you are responsible for building a database to keep track of war criminals for the International Court of Justice. You create a package called wcpkg to keep track of alleged war criminals. One of the programs in the package registers a new criminal. You want that register program to always save its changes, even if the calling program hasn't yet issued a COMMIT. These characters are, after all, fairly slippery and you don't want them to get away. The package specification holds no surprises; the transaction type is not evident here:
CREATE PACKAGE wcpkg AS
    ...
    PROCEDURE register (
        culprit IN VARCHAR2, event IN VARCHAR2);
END wcpkg;
/

The package body, however, contains that new and wonderful pragma:

CREATE PACKAGE BODY wcpkg AS
    ...
    PROCEDURE register (
        culprit IN VARCHAR2, event IN VARCHAR2)
    IS
        PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
        INSERT INTO war_criminal (name, activity)
            VALUES (culprit, event);
        COMMIT;
    END;
END wcpkg;
/
And now when I call wcpkg.register, I am assured that my changes have been duly recorded:

wcpkg.register ('Kissinger', 'Secret Bombing of Cambodia');
Now, I expect it's not every day you pick up a technology text and read that Henry Kissinger is a war criminal for the secret bombing of Cambodia. The examples I used in this book, in fact, were dramatically different from my earlier texts--and from just about any technology book you can buy. Here are some of the other topics I incorporated into my text:
  • Excessive CEO compensation--and excessive, destructive layoffs
  • Union-busting activities
  • Positive role of unions in society
  • Police brutality
  • NATO bombing of civilian targets in Serbia
  • Managed Care
  • National Rifle Association and gun control
  • The for-profit prison industry
  • Slashing social programs to finance tax cuts
I did give my readers ample warning. Here is a section from the preface titled "About the Examples."

"I've been writing intensively about PL/SQL since 1994, and I have a great time doing it. At the same time, I must admit that I have simultaneously grown a little bit bored with using the same set of examples again and again (yes, those infamous emp/employee and dept/department tables), and I'm also very concerned about the state of the world as we approach the end of the twentieth century. Sure, things could be worse, but things could be a whole lot better (with my examples and the world).

"Given these twin preoccupations, I have decided to offer examples that are decidedly different from the usual. I'll be talking about topics ranging from the state of health care in the United States to the strength of the gun lobby, from wage structures to environmental issues. I believe that even if you don't agree with the positions I have on a particular issue, you will find that this "breath of fresh air" approach will help you engage with the technical material.

"I would also be very happy to hear from you--whether you agree or disagree!--and I encourage you to visit my Web site, at www.StevenFeuerstein.com, where you can read more about my life and viewpoints and can get in touch."

How Fresh Is That Air?

Though I thought these examples would be a "breath of fresh air," some of my readers felt that the air stank. Here are some typical responses:
Dear Mr. Feuerstein, I, thankfully before buying the book, was able to peruse a copy of your latest PL/SQL programming book. I think you have forgotten one basic principle when you planned the examples. This was supposed to be a book about PL/SQL, not blatant sociopolitical rantings. If I had bought the book, I would be returning it immediately for a complete refund. It doesn't matter whether I agreed or disagreed with your views (in some cases I agreed, in some cases I strongly disagreed). I found the examples so distracting that I was unable to get the information I needed out of the book. Please in the future, remember that we, the book buyers, are looking for information about using PL/SQL. I am as tired of the emp and dept tables as you are, but less distracting examples would have been more appropriate. Personally, I am no longer buying your books nor am I recommending them to my clients as long as they contain the types of examples you used in your latest books. I cannot, in good conscience, recommend them as PL/SQL manuals because the examples removed the books from that category.

I have to admit, getting emails like these has not been fun. Here's another:
I have just been shown a copy of the Guide to Oracle 8i Features and to be quite honest am embarrassed on behalf of the O'Reilly publishing company. It is well-known throughout the industry that O'Reilly books are said to be the bibles for technical reference. I am appalled at the liberty that Feuerstein has taken in imposing his personal beliefs throughout the text and examples and am even more appalled that O'Reilly allowed this kind of content to be published. It is highly offensive regardless of freedom of speech and Mr. Feuerstein's belief system and to choose such an unwilling audience is absurd! I will not buy this book and will tell each and every person I know in the industry to do the same. I will as well be cautious when purchasing and or recommending any other O'Reilly technical reference books. This is not the forum for this kind of content!

You get the idea. Now, I should also mention that:
  • I have received at least an equal amount of emails praising this particular book, sometimes for the political content explicitly, sometimes simply for the technical content, indicating that my choice of examples was not problematic.
  • O'Reilly & Associates reviewed the book's content and its lawyers did recommend making a few changes. (They didn't, for example, want me to explicitly and blatantly accuse a sitting governor of bribery.)
  • This book became a subject of active debate among O'Reilly editors about what limits, if any, should be placed on an author's desire to include possibly controversial examples.
  • Tim O'Reilly and I talked about this subject at length and he thought that it would make a great topic for public discussion. So here I am!
All the negative--in some cases strongly negative--feedback I got sent me back to the book to examine the content and ask myself some questions: Was I wrong to include this content? Why is it so difficult for people, especially those in the United States, to hear viewpoints that make them uncomfortable? Would I be willing to put these kinds of examples in my "bestseller," the foundation of my series, Oracle PL/SQL Programming, and take a chance at putting off readers? 

Were my examples full of opinions or facts? Can I really separate the two? And what about the examples in all those other books (mine and the several hundred other Oracle books, and thousands of other technical books)? Are they all free of political content?

Democracy and Political Discourse

As I work on this article, I am flying back from a week's teaching in the United Kingdom. As is usual when I spend time outside the United States, and particularly in the U.K. (where I can understand the language), I am struck by the open political discourse--and open challenge--in the media and among individuals.


It seems to me that one part of having a true and vibrant democracy is the free flow of ideas and active debate among neighbors on the crucial issues of our day. Does that go on around you? I sure don't experience it in my neck of the woods. On the contrary, I find that, in the United States, very few people are willing to talk "politics." It is, along with the topic of money and sex, generally veered away from in trepidation. Better to comment on the weather and sports.
Where would such an attitude come from? Much of any individual's behavior in society is patterned after what she or he perceives to be acceptable. Most of us do not want to stand out as different, and certainly not as "troublemakers." What determines acceptability in our society? To a large extent, the mass media.

Reflect on the television, radio, and print media reports you receive: How often do you see real political debate, crossing the entire spectrum, taking place? How often do you hear a member of the media truly challenge politicians and business "leaders" to justify their policies and actions? I believe that very little real debate ever takes place and our journalists, especially the high-profile ones, treat those in power with kid gloves. Sometimes it seems like there is a debate going on (within a T.V. program like "Crossfire," for example), but in fact that debate is missing/ignoring/silencing a large swath of viewpoints: pretty much anything to the left of Bill Clinton.

As a result, it is very difficult to talk politics in our society--especially if your politics are anywhere to the left of center. And it is almost impossible to present an informed, sophisticated critique of the role of global capitalism in the world today.

Now, you might well say to yourself, "Who cares?" You like global capitalism. You don't think it's all that bad, or at least you don't care if a few hundred million people are paid pennies for their labor. And, well, you don't want to talk politics. That's fine. That's your choice. But I also believe that almost every technology book we buy and read is full of politics.

The Hidden and Prevailing Ideology

I believe that just about every technical book comes with a body of politics, an ideology that governs and usually restricts its example set. We don't notice the political slant because it reflects the dominant viewpoint in our society and is thus invisible.


After reviewing many books, I feel comfortable in summarizing the vast majority of texts as having these characteristics:
  • Business-centric: Most examples used in technology books focus on how to make business work more efficiently, regardless of its impact on human society and the world as a whole. As a result, we constantly read about human-resource or personnel systems. And while examples frequently touch on education, these applications have more to do with managing students (the business side of education) than with improving the quality of education those students receive. All of this seems perfectly "natural" since the vast majority of technology is used by businesses to make profits. But does it have to be that way?
  • Consumer-oriented: Many, many examples promote the perspective that the only reason we exist in this world is to buy things. Just about every book about the Internet focuses on some aspect of e-commerce, such as how to maximize the use of banner ads, how to grab and hold eyeballs, how to present product information dynamically. In 1999 Addison-Wesley published a truly marvelous book titled Refactoring: Improving the Design of Existing Code, by Martin Fowler. In it, Martin offers a systematic method for improving the quality of our code without affecting the interface to and behavior of that code. To demonstrate his techniques, the author offers a refreshing example: video rentals. Yet it still comes down to commerce. We are what we buy, not what we think and do with our lives outside of the exchange of items for money.

  • Humans as numbered entities: This is particularly true in database-related books (including my own!). Technology is presented through a medium of scenarios that represent--and manipulate--humans as numbers. Just about any Oracle text you pick up is dominated by "emp-dept" examples: a personnel application that talks about salaries, bonuses, and commissions, when you were hired, which department you belong to, the name of an employee based on an employee_id, and so on. The message, so clearly presented in this dominant theme, is that we are defined primarily as workers and our value in life is derived from the contribution we make to our employer.
  • Everything and anything as numbered entities: Hey, it's not just people! Technical examples extend the quantification approach to natural resources, information, entertainment, etc. Oracle also offers a standard demonstration base of tables and data for a sales/order entry system. This, of course, makes perfect sense in the world of Oracle--driven by the obsessive personality of Larry Ellison to sell, sell, sell software and services. (I own shares of Oracle stock and have benefitted directly from Larry's obsessions.)
There are exceptions. Scott Urman's latest book on PL/SQL, Oracle8i Advanced PL/SQL Programming, uses a college registration system as his example base. Although many American colleges are overly focused on preparing young people for a life of drudgery in one job or another (and corporations are commercializing higher education to an alarming degree), I congratulate Scott on taking a road less traveled.

Breathing Life Into Technical Books

I could go on and on, but I think you get the drift. The bottom line for me is that books written about technology are written by human beings with perspectives and beliefs. Some of us center our lives around a particular technology or around the business applications of that technology. Many of us see the technology as one part of a rich, complex way of life--and dream of ways that this technology can transform and improve human society and our planet.

I don't see what any of us gain - writers and readers alike - from the unwritten but nonetheless rigorously followed rules requiring that technical books conform to, and further support, the status quo in our society.
Categories: Development

No More Aquariums or Zoos For Me

FeuerThoughts - Tue, 2016-09-06 10:05
I just finished reading Carl Safina's Beyond Words. It is the latest of a number of books (another great one is Out on a Limb by Ben Kilham) I have read that make it clear beyond any reasonable doubt that the individuals of many, many other species, including bears, octopuses, gorillas, chimpanzees, bonobos, crows, and turtles, are self-aware; feel pain, sadness, joy; fear death; play; have individual personalities; work with tools; on and on and on.

In other words, they are no different from us. Except that their bodies have adapted to different circumstances, resulting in different body plans, different capabilities, different ways of manifesting their thoughts.


Yet we enslave them, control their reproduction, abuse and torture them, outright kill them en masse.

It is impossible to live in "civilization" without being at least complicit with much of this destruction (just imagine for a moment the thousands of factories that are needed to put a cell phone in your hands). 

It is, therefore, impossible not to sound like a hypocrite when expressing such thoughts.

Well, fine. Being a hypocrite is better than being an "all-in" abuser. 

And while I am not yet at the point in my life at which I can say goodbye to cars and cell phones, there are things I can do to minimize my hypocrisy and avoid overt support for the human devastation of others.

Which brings me to zoos and aquariums. 

I can't do it anymore. I can't wander around exhibits, whether indoors or out, whether spacious or cramped, whether "humane" or neglectful, that restrain animals that should be living free. The justifications for these exhibits fall flat, sound weak and defensive. 

And if you do find any of them even slightly persuasive, simply substitute "Ota Benga" for "elephant" or "stingray" and see how it "reads."

I do not look forward to the next time my family - my granddaughters! - wants to go to the aquarium or zoo, and I have to say "No thanks, you go without me."

But that's what I will be doing.


Categories: Development

Avoiding generating too much redo.

Tom Kyte - Tue, 2016-09-06 09:46
Hi, In my company most of the developers generate too much redo because: 1. they load txt files into a staging table; 2. they update the table with missing information ("lookup table and select for each row"); 3. they insert the data from the staging table...
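
A minimal sketch of one common lower-redo approach is below; the object names (stg_orders, lkp_customers, orders_target and their columns) are hypothetical, invented for illustration rather than taken from the question, and NOLOGGING/direct-path loads carry recovery implications that should be weighed before use.

-- Enrich the staging rows in one set-based MERGE instead of
-- looking up and updating each row individually.
MERGE INTO stg_orders s
USING lkp_customers l
ON (s.customer_code = l.customer_code)
WHEN MATCHED THEN
  UPDATE SET s.customer_name = l.customer_name;

-- Direct-path insert into the target; if the target is NOLOGGING
-- (and the backup strategy accounts for that), redo for the insert
-- is minimized.
INSERT /*+ APPEND */ INTO orders_target
SELECT * FROM stg_orders;

COMMIT;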
Categories: DBA Blogs

Smart Running Sum - Reset when total > 100

Tom Kyte - Tue, 2016-09-06 09:46
Hi, I'm trying to add a column which will calculate a running sum of the date differences between each pair of consecutive flights (in minutes) per destination, but with one tweak: every time the running sum reaches a specific amount (Prompt Value = 100 min....
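
One way to express a running sum that resets at a threshold in Oracle 12c is MATCH_RECOGNIZE. The sketch below is only an illustration: the flights table, the destination and flight_time columns (assumed to be a DATE), and the exact reset semantics (here, the row that would push the total past 100 starts a new group) are assumptions rather than details from the question.

WITH gaps AS (
  SELECT destination,
         flight_time,
         -- minutes since the previous flight to the same destination
         (flight_time - LAG(flight_time) OVER
            (PARTITION BY destination ORDER BY flight_time)) * 24 * 60 AS gap_min
  FROM   flights
)
SELECT destination, flight_time, grp, running_min
FROM   gaps
MATCH_RECOGNIZE (
  PARTITION BY destination
  ORDER BY flight_time
  MEASURES MATCH_NUMBER()       AS grp,         -- group number per destination
           SUM(NVL(gap_min, 0)) AS running_min  -- running total within the group
  ALL ROWS PER MATCH
  PATTERN (a+)
  DEFINE a AS SUM(NVL(gap_min, 0)) <= 100       -- group ends once 100 minutes would be exceeded
);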
Categories: DBA Blogs

temporary tablespaces

Tom Kyte - Tue, 2016-09-06 09:46
On a database housing an Oracle Text index, there is a user LOADMSST who runs an application that executes ctx_ddl.sync_index; there is another user EDMAPPS who is used by applications that pose Text queries. The table column on which the Oracle Text...
Categories: DBA Blogs

Restore using duplicate is failing

Tom Kyte - Tue, 2016-09-06 09:46
Hi, I am trying to restore the target database with the command below: rman auxiliary / run { allocate auxiliary channel ch1 type disk; allocate auxiliary channel ch2 type disk; allocate auxiliary channel ch3 type disk; allocate auxiliary chann...
Categories: DBA Blogs

Oracle

Tom Kyte - Tue, 2016-09-06 09:46
1) If I create a procedure that is called from a trigger and after that I modify the procedure, my question is: does the trigger keep working, or do the procedure changes affect the trigger as well?.. 2) There is one table. table1(inpt)...........target(output)... 1 ...
Categories: DBA Blogs

DB time

Tom Kyte - Tue, 2016-09-06 09:46
Hi Tom, Luckily I've got a chance again. Can you please explain the DB time now shown in Statspack reports (10g)? The explanation I know of is: it is the time spent in the database since instance startup. Can you please elaborate on this? I can...
Categories: DBA Blogs

CRS-2615: Could not find server pool ‘smallpool’

Oracle in Action - Tue, 2016-09-06 04:37


I received this error message when I was playing around with Extended Policy Framework in Oracle Clusterware 12c.

A policy-managed environment in Oracle Clusterware 12c offers the capability to create policies that configure server pools in accordance with business needs or application demand, so that pools provide the right service at the right time. Typically, administrators create multiple policies to reflect differences in requirements for the cluster based on business needs or demand, or based on calendar dates or times of day. The collection of zero or more such user-defined policies is called the cluster policy set. Oracle Clusterware manages the server pools according to the active policy in the policy set. With a cluster configuration policy set, for example, more servers can be allocated to OLTP workloads during the business week, whereas on evenings and weekends more servers can be assigned to batch workloads, with transitions between server pool configurations performed atomically.

I added a policy named day to my Oracle 12.1.0.1 cluster:

[grid@host01 ~]$ crsctl add policy day -attr "DESCRIPTION='The Day Policy'"

When I tried to set attributes of the server pool smallpool for the day policy, I received this error:

[root@host01 ~]# crsctl modify serverpool smallpool -attr "MAX_SIZE=2,MIN_SIZE=1,SERVER_CATEGORY=small" -policy day
CRS-2615: Could not find server pool 'smallpool'.
CRS-4000: Command Modify failed, or completed with errors.

[root@host01 ~]# oerr CRS 2615
2615, 1, "Could not find server pool '%s'."
// *Cause: An attempt was made to operate on a server pool that is not 
// registered.
// *Action: Check if the server pool is registered.

When I checked, server pool smallpool existed and had two servers assigned to it:

[root@host01 ~]# crsctl status serverpool smallpool
NAME=smallpool
ACTIVE_SERVERS=host04 host05

Reason:
Since server pool smallpool had not been specified to be in the scope of the cluster policy set, I was getting the above error.

Solution:
Add the server pool smallpool to the cluster policy set:

[root@host01 ~]# crsctl modify policyset -attr "SERVER_POOL_NAMES='smallpool free'" -ksp

Now that server pool smallpool has been added to the scope of the policy set, its configuration can be modified for the day policy:

[root@host01 ~]# crsctl modify serverpool smallpool -attr "MAX_SIZE=2,MIN_SIZE=1" -policy day

I hope this post was helpful.

Your comments and suggestions are always welcome!

Related Links :

Home

12c RAC Index

12c RAC : CRS-2917: The ‘Free’ server pool cannot be removed from the system





Categories: DBA Blogs

“Real-time” is getting real

DBMS2 - Tue, 2016-09-06 01:43

I’ve been an analyst for 35 years, and debates about “real-time” technology have run through my whole career. Some of those debates are by now pretty much settled. In particular:

  • Yes, interactive computer response is crucial.
    • Into the 1980s, many apps were batch-only. Demand for such apps dried up.
    • Business intelligence should occur at interactive speeds, which is a major reason that there’s a market for high-performance analytic RDBMS.
  • Theoretical arguments about “true” real-time vs. near-real-time are often pointless.
    • What matters in most cases is human users’ perceptions of speed.
    • Most of the exceptions to that rule occur when machines race other machines, for example in automated bidding (high frequency trading or otherwise) or in network security.

A big issue that does remain open is: How fresh does data need to be? My preferred summary answer is: As fresh as is needed to support the best decision-making. I think that formulation starts with several advantages:

  • It respects the obvious point that different use cases require different levels of data freshness.
  • It cautions against people who think they need fresh information but aren’t in a position to use it. (Such users have driven much bogus “real-time” demand in the past.)
  • It covers cases of both human and automated decision-making.

Straightforward applications of this principle include:

  • In “buying race” situations such as high-frequency trading, data needs to be as fresh as the other guy’s, and preferably even fresher.
  • Supply-chain systems generally need data that’s fresh to within a few hours; in some cases, sub-hour freshness is needed.
  • That’s a good standard for many desktop business intelligence scenarios as well.
  • Equipment-monitoring systems’ need for data freshness depends on how quickly catastrophic or cascading failures can occur or be averted.
    • Different specific cases call for wildly different levels of data freshness.
    • When equipment is well-instrumented with sensors, freshness requirements can be easy to meet.

E-commerce and other internet interaction scenarios can be more complicated, but it seems safe to say:

  • Recommenders/personalizers should take into account information from the current session.
  • Try very hard to give customers correct information about merchandise availability or pricing.

In meeting freshness requirements, multiple technical challenges can come into play.

  • Traditional batch aggregation is too slow for some analytic needs. That’s a core reason for having an analytic RDBMS.
  • Traditional data integration/movement pipelines can also be too slow. That’s a basis for short-request-capable data stores to also capture some analytic workloads. E.g., this is central to MemSQL’s pitch, and to some NoSQL applications as well.
  • Scoring models at interactive speeds is often easy. Retraining them quickly is much harder, and at this point only rarely done.
  • OLTP (OnLine Transaction Processing) guarantees adequate data freshness …
  • … except in scenarios where the transactions themselves are too slow. Questionably-consistent systems — commonly NoSQL — can usually meet performance requirements, but might have issues with the freshness of accurate data.
  • Older generations of streaming technology disappointed. The current generation is still maturing.

Based on all that, what technology investments should you be making, in order to meet “real-time” needs? My answers start:

  • Customer communications, online or telephonic as the case may be, should be based on accurate data. In particular:
    • If your OLTP data is somehow siloed away from your phone support data, fix that immediately, if not sooner. (Fixing it 5-15 years ago would be ideal.)
    • If your eventual consistency is so eventual that customers notice, fix it ASAP.
  • If you invest in predictive analytics/machine learning to support your recommenders/personalizers, then your models should at least be scored on fresh data.
    • If your models don’t support that, reformulate them.
    • If your data pipeline doesn’t support that, rebuild it.
    • Actual high-speed retraining of models isn't an immediate need. But if you're going to have to transition to that anyway, consider doing so early and getting it over with.
  • Your BI should have great drilldown and exploration. Find the most active users of such functionality in your enterprise, even if — especially if! — they built some kind of departmental analytic system outside the enterprise mainstream. Ask them what, if anything, they need that they don’t have. Respond accordingly.
  • Whatever expensive and complex equipment you have, slather it with sensors. Spend a bit of research effort on seeing whether the resulting sensor logs can be made useful.
    • Please note that this applies to vehicles and to fixed objects (e.g., buildings, pipelines), as well as to traditional industrial machinery.
    • It also applies to any products you make which draw electric power.

So yes — I think “real-time” has finally become pretty real.

Categories: Other
