Feed aggregator

Application Development at Oracle OpenWorld, San Francisco, September 2016

Christopher Jones - Thu, 2016-08-25 03:11

Well, there is certainly a lot going on at Oracle OpenWorld this September. You can browse the session catalog for interesting talks.

Here are a few highlights in my area:

That's some great content there.

The "Meet the Experts" session is the interactive session where you get to hear from, and ask questions of, our key developers and managers in the Scripting Language and .NET area. If you're shy, you don't have to speak - just come and learn.

We'll also have a demo booth open on the exhibition floor so you can come and chat. (Its location is yet to be announced).

I really hope to see you during the 2016 conference.

Basicfile LOBs 4

Jonathan Lewis - Wed, 2016-08-24 13:02

At the end of the previous installment we saw that a single big batch delete would (apparently) attach all the “reusable” chunks into a single freepool, and asked the questions:

  • Why would the Oracle developer think that this use of one freepool is a good idea ?
  • Why might it be a bad idea ?
  • What happens when we start inserting more data ?

(Okay, I’ll admit it, the third question is a clue about the answer to the second question.)

I find that this process of asking “what’s good, what’s bad, what could possibly go wrong” is an excellent way to prompt thoughts about why Oracle Corp. might have chosen a particular strategy and what that means in terms of the best (or expected) use of the feature and worst threats from misuse of the feature. So let’s see what thoughts we can come up with.

  • Good idea: The only alternative to using a single freepool when you make chunks reusable is to spread the chunks uniformly across all the freepools – either putting the chunks onto the same freepool that the LOB was previously attached to or doing some sort of round-robin. If you go for either of these fair-share strategies you increase the amount of contention on LOB deletes if many users are deleting at the same time – which sounds like something you might want to avoid. But LOBs are supposed to be fairly static (somewhere on MoS there’s a note that says the expected behaviour is pretty much: “we thought you’d write once, read many, and not update”), so surely a small amount of contention shouldn’t be a big problem.
  • Bad idea: As mentioned in a previous post, it looks like the freepool picked by a process is dependent on the process id – so if you happen to have just a couple of processes doing large deletes they might, coincidentally, pick the same freepool and end up constantly contending with each other rather than drifting in and out of collisions. If, as often happens with archive-like processes, you use one or two processes to delete a large fraction of the data you end up with one or two freepools holding lots of reusable space and all the other freepools holding no freespace – which brings us to the third question.
  • What happens next: Let’s say 3% of your LOB (one day out of a month) is currently “reusable chunks” and the chunks are all attached to the same freepool; your process connects to insert some new LOBs and its process id identifies the wrong freepool. There are no free blocks below the highwater mark and the retention limit is long gone. Does your process (a) add an extent to create some more free space (this is the type of thing that used to happen with manual segment space management, freelist groups and freelists for non-LOB tables and indexes) or (b) start stealing from another freepool that has reusable chunks ? In either case what’s going to happen in the longer term ?
  • What happens even later: Imagine you have 28 days of data and use a single process to delete data on the 29th day. For reasons of concurrency you have been running with freepools 20. If option (a) applies then (assuming everything works perfectly) at steady state you will end up with roughly 20 days worth of reusable chunks spread across your 20 freepools before the system stabilises and stops adding unnecessary extents; if option (b) applies then (assuming everything works perfectly) every night you put a load of reusable chunks on one freepool and all through the day your 20 processes are fighting (at the oldest end of the index) to reuse those chunks. I said in an earlier installment that multiple freepools got rid of “the two hot spots” – this single thread deletion strategy has just brought one of them back.
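The “bad idea” scenario above is easy to sketch with a tiny simulation. This assumes the freepool is chosen as something like process id modulo the number of freepools – the exact hash Oracle uses is an assumption inferred from the earlier posts, not documented behaviour:

```python
# Sketch: if a session's freepool is (roughly) process id mod freepools,
# one or two delete processes will concentrate all the "reusable" chunks
# on one or two freepools. The mod-based mapping is an assumption.

def freepool_for(pid, freepools):
    return pid % freepools

def chunks_per_freepool(deleting_pids, chunks_each, freepools=20):
    """Return {freepool: reusable chunk count} after the deletes."""
    usage = {}
    for pid in deleting_pids:
        fp = freepool_for(pid, freepools)
        usage[fp] = usage.get(fp, 0) + chunks_each
    return usage

# Two archive processes deleting 3,000 chunks each can coincidentally
# land on the same freepool, leaving 19 freepools with no reusable space:
print(chunks_per_freepool([12345, 12385], 3000))   # {5: 6000}
```

With 20 concurrent inserters spread across all 20 freepools, 18 or 19 of them would then find their own freepool empty of reusable chunks – which is exactly the situation the questions above are probing.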

So what really happens ? By the end of the last installment I had deleted the oldest 3,000 LOBs and found them attached as reusable chunks in freepool 2 with several consecutive “empty”  (nrows=81, rrows=0) leaf blocks at the low end of all the other pools.  After running my 4 concurrent processes to insert 750 rows each (i.e. insert the replacements for the 3,000 rows I’ve deleted) this is what the index treedump looks like (with a little editing to show the main breaks between freepools):

----- begin tree dump
branch: 0x1800204 25166340 (0: nrow: 60, level: 1)
   leaf: 0x180020e 25166350 (-1: nrow: 22 rrow: 22)
   leaf: 0x1800212 25166354 (0: nrow: 76 rrow: 76)
   leaf: 0x1800216 25166358 (1: nrow: 81 rrow: 81)
   leaf: 0x180021a 25166362 (2: nrow: 74 rrow: 74)
   leaf: 0x1800239 25166393 (3: nrow: 81 rrow: 81)
   leaf: 0x180023d 25166397 (4: nrow: 81 rrow: 81)
   leaf: 0x1800206 25166342 (5: nrow: 81 rrow: 81)
   leaf: 0x180020a 25166346 (6: nrow: 81 rrow: 81)
   leaf: 0x180021e 25166366 (7: nrow: 81 rrow: 81)
   leaf: 0x1800222 25166370 (8: nrow: 81 rrow: 81)
   leaf: 0x180022a 25166378 (9: nrow: 81 rrow: 81)
   leaf: 0x180022e 25166382 (10: nrow: 78 rrow: 78)
   leaf: 0x1800232 25166386 (11: nrow: 151 rrow: 151)
   leaf: 0x1800226 25166374 (12: nrow: 0 rrow: 0)
   leaf: 0x180020f 25166351 (13: nrow: 64 rrow: 64)
   leaf: 0x1800213 25166355 (14: nrow: 77 rrow: 77)
   leaf: 0x1800217 25166359 (15: nrow: 81 rrow: 81)
   leaf: 0x1800261 25166433 (16: nrow: 81 rrow: 81)
   leaf: 0x1800265 25166437 (17: nrow: 81 rrow: 81)
   leaf: 0x1800269 25166441 (18: nrow: 81 rrow: 81)
   leaf: 0x180026d 25166445 (19: nrow: 81 rrow: 81)
   leaf: 0x1800271 25166449 (20: nrow: 81 rrow: 81)
   leaf: 0x1800275 25166453 (21: nrow: 81 rrow: 81)
   leaf: 0x1800279 25166457 (22: nrow: 81 rrow: 81)
   leaf: 0x180027d 25166461 (23: nrow: 81 rrow: 81)
   leaf: 0x1800242 25166402 (24: nrow: 122 rrow: 122)
   leaf: 0x1800229 25166377 (25: nrow: 0 rrow: 0)
   leaf: 0x1800214 25166356 (26: nrow: 36 rrow: 36)
   leaf: 0x1800230 25166384 (27: nrow: 81 rrow: 81)
   leaf: 0x1800238 25166392 (28: nrow: 81 rrow: 81)
   leaf: 0x180023c 25166396 (29: nrow: 81 rrow: 81)
   leaf: 0x1800225 25166373 (30: nrow: 81 rrow: 81)
   leaf: 0x180022d 25166381 (31: nrow: 75 rrow: 75)
   leaf: 0x1800231 25166385 (32: nrow: 81 rrow: 81)
   leaf: 0x1800235 25166389 (33: nrow: 81 rrow: 81)
   leaf: 0x180022b 25166379 (34: nrow: 81 rrow: 81)
   leaf: 0x180022f 25166383 (35: nrow: 81 rrow: 81)
   leaf: 0x1800233 25166387 (36: nrow: 81 rrow: 81)
   leaf: 0x1800237 25166391 (37: nrow: 134 rrow: 134)
   leaf: 0x1800215 25166357 (38: nrow: 1 rrow: 0)
   leaf: 0x180026e 25166446 (39: nrow: 4 rrow: 0)
   leaf: 0x180021b 25166363 (40: nrow: 1 rrow: 0)
   leaf: 0x180024b 25166411 (41: nrow: 2 rrow: 0)
   leaf: 0x1800276 25166454 (42: nrow: 2 rrow: 0)
   leaf: 0x180024f 25166415 (43: nrow: 0 rrow: 0)
   leaf: 0x180027e 25166462 (44: nrow: 4 rrow: 0)
   leaf: 0x1800221 25166369 (45: nrow: 0 rrow: 0)
   leaf: 0x180027a 25166458 (46: nrow: 0 rrow: 0)
   leaf: 0x1800218 25166360 (47: nrow: 0 rrow: 0)
   leaf: 0x180021c 25166364 (48: nrow: 152 rrow: 0)
   leaf: 0x1800220 25166368 (49: nrow: 152 rrow: 0)
   leaf: 0x1800224 25166372 (50: nrow: 152 rrow: 0)
   leaf: 0x1800228 25166376 (51: nrow: 152 rrow: 72)
   leaf: 0x180022c 25166380 (52: nrow: 152 rrow: 152)
   leaf: 0x1800234 25166388 (53: nrow: 152 rrow: 152)
   leaf: 0x1800253 25166419 (54: nrow: 152 rrow: 152)
   leaf: 0x1800257 25166423 (55: nrow: 152 rrow: 152)
   leaf: 0x180025b 25166427 (56: nrow: 152 rrow: 152)
   leaf: 0x180025f 25166431 (57: nrow: 152 rrow: 152)
   leaf: 0x1800263 25166435 (58: nrow: 1 rrow: 1)
----- end tree dump
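Counts like “60 leaf blocks” or “several empty leaves” are easier to check mechanically than by eyeballing the dump. A small sketch, with the line format taken from the treedump above:

```python
import re

# Matches treedump lines like:
#    leaf: 0x180020e 25166350 (-1: nrow: 22 rrow: 22)
LEAF = re.compile(r"leaf: 0x[0-9a-f]+ \d+ \((-?\d+): nrow: (\d+) rrow: (\d+)\)")

def summarise_treedump(dump):
    """Return (leaf_count, total_nrow, total_rrow, empty_leaf_count)."""
    rows = [(int(n), int(r)) for _, n, r in LEAF.findall(dump)]
    leaf_count = len(rows)
    total_nrow = sum(n for n, _ in rows)
    total_rrow = sum(r for _, r in rows)
    empty = sum(1 for _, r in rows if r == 0)
    return leaf_count, total_nrow, total_rrow, empty

sample = """
   leaf: 0x180020e 25166350 (-1: nrow: 22 rrow: 22)
   leaf: 0x1800226 25166374 (12: nrow: 0 rrow: 0)
   leaf: 0x180021c 25166364 (48: nrow: 152 rrow: 0)
"""
print(summarise_treedump(sample))   # (3, 174, 22, 2)
```

Run against the full dump this reports 60 leaves, matching the root block’s “nrow: 60”.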


The number of leaf blocks has dropped from 72 to 60 – I didn’t think that this could happen without an index coalesce or rebuild, but maybe it’s a special feature of LOBINDEXes or maybe it’s a new feature of B-trees in general that I hadn’t noticed. Some of the “known empty” leaf blocks seem to have been taken out of the structure.

We still see the half full / full split between the leaf blocks for the first 3 freepools when compared to the top freepool.

There are still some empty leaf blocks (rrow = 0), but apart from the top freepool no more than one per freepool for the other sections that are indexing LOBs.

The section of index that is the freepool 2 section for “reusable” chunks shows an interesting anomaly. There are some leaf blocks that are now empty (rrow=0) but were only holding a few index entries (nrow=1-4 rather than the 75 – 140 entries that we saw in the previous installment) at the moment they were last updated; this suggests a certain level of contention with problems of read-consistency, cleanout, and locking between processes trying to reclaim reusable blocks.

It’s just slightly surprising that the top freepool shows several empty leaf blocks – is this just a temporary coincidence, or a boundary case that means the blocks will never be cleaned and re-used ? If it’s a fluke, will a similar fluke also reappear (eventually) on the other freepools ? Is it something to do with the fact that freepool 2 happened to be the freepool that got the first lot of reusable chunks ? Clearly we need to run a few more cycles of deletes and inserts to see what happens.

We have one important conclusion to make but before we make it let’s look at the partial key “col 0” values in the row directory of the root block just to confirm that the breaks I’ve listed above do correspond to each of the separate freepool sections:

 0:     col 0; len 10; (10):  00 00 00 01 00 00 09 db 09 8f
 1:     col 0; len ..; (..):  00 00 00 01 00 00 09 db 0b
 2:     col 0; len 10; (10):  00 00 00 01 00 00 09 db 0b bc
 3:     col 0; len ..; (..):  00 00 00 01 00 00 09 db 0d
 4:     col 0; len 10; (10):  00 00 00 01 00 00 09 db 0d 51
 5:     col 0; len 10; (10):  00 00 00 01 00 00 09 db bf f4
 6:     col 0; len 10; (10):  00 00 00 01 00 00 09 db c0 77
 7:     col 0; len 10; (10):  00 00 00 01 00 00 09 db c1 90
 8:     col 0; len 10; (10):  00 00 00 01 00 00 09 db c2 77
 9:     col 0; len 10; (10):  00 00 00 01 00 00 09 db c2 fa
10:     col 0; len 10; (10):  00 00 00 01 00 00 09 db c4 45
11:     col 0; len ..; (..):  00 00 00 01 00 00 09 db c5

12:     col 0; len 10; (10):  00 02 00 01 00 00 09 da fb 74
13:     col 0; len 10; (10):  00 02 00 01 00 00 09 db 08 d9
14:     col 0; len 10; (10):  00 02 00 01 00 00 09 db 09 c0
15:     col 0; len ..; (..):  00 02 00 01 00 00 09 db 0b
16:     col 0; len 10; (10):  00 02 00 01 00 00 09 db 0b ee
17:     col 0; len 10; (10):  00 02 00 01 00 00 09 db bf 8b
18:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c0 a4
19:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c2 21
20:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c3 6c
21:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c4 21
22:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c5 9e
23:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c6 53
24:     col 0; len 10; (10):  00 02 00 01 00 00 09 db c6 d6

25:     col 0; len 10; (10):  00 04 00 01 00 00 09 da fd fb
26:     col 0; len 10; (10):  00 04 00 01 00 00 09 db 08 38
27:     col 0; len 10; (10):  00 04 00 01 00 00 09 db 0a 19
28:     col 0; len ..; (..):  00 04 00 01 00 00 09 db 0b
29:     col 0; len 10; (10):  00 04 00 01 00 00 09 db 0c 7d
30:     col 0; len 10; (10):  00 04 00 01 00 00 09 db bc 64
31:     col 0; len 10; (10):  00 04 00 01 00 00 09 db bc b5
32:     col 0; len ..; (..):  00 04 00 01 00 00 09 db bd
33:     col 0; len 10; (10):  00 04 00 01 00 00 09 db bd 51
34:     col 0; len 10; (10):  00 04 00 01 00 00 09 db bd a2
35:     col 0; len 10; (10):  00 04 00 01 00 00 09 db bd f3
36:     col 0; len 10; (10):  00 04 00 01 00 00 09 db be 44
37:     col 0; len 10; (10):  00 04 00 01 00 00 09 db be 95

38:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
39:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
40:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
41:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
42:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
43:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
44:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
45:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
46:     col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00

47:     col 0; len 10; (10):  00 06 00 01 00 00 09 da fe d4
48:     col 0; len 10; (10):  00 06 00 01 00 00 09 db 00 ca
49:     col 0; len 10; (10):  00 06 00 01 00 00 09 db 03 24
50:     col 0; len 10; (10):  00 06 00 01 00 00 09 db 05 4c
51:     col 0; len 10; (10):  00 06 00 01 00 00 09 db 07 a6
52:     col 0; len ..; (..):  00 06 00 01 00 00 09 db 0a
53:     col 0; len 10; (10):  00 06 00 01 00 00 09 db 0c 5a
54:     col 0; len 10; (10):  00 06 00 01 00 00 09 db bf da
55:     col 0; len 10; (10):  00 06 00 01 00 00 09 db c1 6c
56:     col 0; len 10; (10):  00 06 00 01 00 00 09 db c2 cc
57:     col 0; len 10; (10):  00 06 00 01 00 00 09 db c4 90
58:     col 0; len 10; (10):  00 06 00 01 00 00 09 db c6 22

I’ve broken the list and numbered the entries to match the treedump above, so it’s easy to check that leaf blocks 38 to 46 are the now empty blocks for the reusable chunks. We started the reload with 3,001 entries for reusable chunks all in one freepool; we’ve ended it with none. Something has “stolen” the reusable chunks from freepool 2 so that they could be used for creating new LOBs that were distributed across all the freepools.
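The grouping can also be confirmed programmatically. In these dumps the second byte of the partial key appears to be 2 × freepool for ordinary LOB entries and 2 × freepool + 1 for the “reusable chunks” section (so 00 05 is the reusable section of freepool 2) – a pattern inferred from this series of dumps, not documented behaviour:

```python
def classify_key(col0_hex):
    """Classify a LOBINDEX partial key by its first two bytes.

    Assumes (inferred from the dumps in this series, not from any
    documentation) that the second byte is 2*freepool for LOB entries
    and 2*freepool + 1 for reusable-chunk entries.
    """
    key = bytes.fromhex(col0_hex.replace(" ", ""))
    n = key[1]
    if n % 2 == 0:
        return ("lob", n // 2)            # normal LOB entries
    return ("reusable", (n - 1) // 2)     # reusable-chunk entries

print(classify_key("00 02 00 01 00 00 09 da fb 74"))  # ('lob', 1)
print(classify_key("00 05 57 b4 d3 7d 00 00 00 00"))  # ('reusable', 2)
```

As a side note, the four bytes after the 00 05 marker (57 b4 d3 7d) decode as what looks like a Unix timestamp from mid-August 2016 (0x57b4d37d = 1471468413), which would fit the retention-based reuse mechanism – again an inference from the data, not a documented format.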

Oracle has been very efficient about re-using the index space, with a little bit of wastage creeping in, perhaps caused by coincidences in timing, perhaps by some code that avoids waiting too long when trying to lock index entries. We have a contention point because of the single threaded delete – but it doesn’t appear to be a disaster for space utilisation. Of course we need to look at the level of contention, and repeat the cycle a few times, changing the freepool used for deletion fairly randomly to see if we just got lucky or if the first few deletes are special cases. We can also ask questions about how the “stealing” takes place – does a process steal one index entry at a time, or does it take several consecutive index entries from the same block while it’s got the leaf block locked – but perhaps we don’t really need to know the fine details; the amount of time spent in contention (typically TX waits of some sort) could tell us whether or not we had a significant problem.

Contention and Resources

For each of the processes running the inserts I took a couple of snapshots – session stats and wait events – to see if anything interesting showed up. Naturally, the closer you look the more strange things you find. Here are a few sets of numbers from v$session_event and v$sesstat (in my snapshot format – with the four sessions always reported in the same order);

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
enq: HW - contention                                985           0          93.15        .095           1
enq: HW - contention                                 10           0           5.46        .546           1
enq: HW - contention                              1,001           0         102.27        .102           1
enq: HW - contention                              1,010           0         106.12        .105           1

db file sequential read                           1,038           0          40.75        .039           2
db file sequential read                              39           0           3.21        .082           1
db file sequential read                           1,038           0          28.33        .027           1
db file sequential read                           1,046           0          34.89        .033           1

Name                                                                     Value
----                                                                     -----
physical reads                                                           1,038
physical reads direct                                                      979

physical reads                                                              39
physical reads direct                                                       19

physical reads                                                           1,038
physical reads direct                                                      998

physical reads                                                           1,046
physical reads direct                                                    1,005

session logical reads                                                  114,060
session logical reads                                                   22,950
session logical reads                                                  104,555
session logical reads                                                   93,173

data blocks consistent reads - undo records applied                      2,165
data blocks consistent reads - undo records applied                        119
data blocks consistent reads - undo records applied                      1,222
data blocks consistent reads - undo records applied                        193

My first thought when looking at the wait events was to get an idea of where most of the time went, and I had expected the HW enqueue to be the most likely contender: this enqueue is held not only when the high water mark for a segment is moved, it’s also held when a process is doing any space management for inserting a LOB. So my first surprise was that one session was hardly waiting at all compared to the other sessions.

Then I noticed that this one session was also suffering a very small number of “db file sequential read” waits compared to every other session – but why were ANY sessions doing lots of db file sequential reads? The LOB was declared as nocache so any reads ought to be direct path reads, and although Oracle doesn’t always have to wait for EVERY direct path read we should have read (and rewritten) 1,500 “reusable” LOB chunks by direct path reads in each session – I refuse to believe we never waited for ANY of them. So take a look at the session stats, which show us that the “db file sequential read” waits match exactly with the “physical reads” count but most of the “physical reads” are recorded as “physical reads direct” – Oracle is recording the wrong wait event while reading the “reusable” chunks.

Okay, so our direct path read waits are being recorded incorrectly: but one session does hardly any physical reads anyway – so what does that mean ? It means the process ISN’T reusing the chunks – you can’t be reusing chunks if you haven’t read them. But the dumps from the index tell us that all the reusable chunks have been reused – so how do we resolve that contradiction ? Something is reading the index to identify some reusable chunks, wiping the reference from the index, then not using the chunks, so (a) we’ve got some reusable chunks “going missing” and (b) we must be allocating some new chunks from somewhere – maybe bumping the high water mark of the segment, maybe allocating new extents.

Fortunately I had used the dbms_space package to check what the lob segment looked like after I had loaded it. It was 8192 blocks long, with 66 blocks shown as unused and 8,000 (that’s exactly 2 blocks/chunks per LOB) marked as full. After the delete/insert cycle it was 8,576 blocks long, with 8,000 blocks marked as full and 444 marked as unused. We had added three extents of 1MB each that we didn’t need, and one session seems to have avoided some contention by using the new extents for (most of) its LOBs rather than competing for the reusable space with the other LOBs.
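The “three extents of 1MB” figure follows directly from the block counts, assuming the 8KB block size used throughout this series:

```python
block_size_kb = 8            # assumed block size for this test
before, after = 8192, 8576   # segment size in blocks, from dbms_space

growth_blocks = after - before
growth_mb = growth_blocks * block_size_kb // 1024
# 384 extra blocks at 8KB each = 3MB, i.e. three 1MB extents
print(growth_blocks, growth_mb)   # 384 3
```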

Was this a one-off, or a repeatable event ? How bad could it get ?



Is there a way of discovering from SQL (perhaps with a low-cost PL/SQL function) the freepool for a LOB when it’s defined as Basicfile ? You can get the LOBid for a Securefiles LOB using the dbms_lobutil package and the LOBid includes the critical first two bytes – but the package is only relevant to Securefiles. I rather fancy the idea of a process knowing which freepool it is associated with and only deleting LOBs that come out of that freepool.


Generate Admin Channels to improve Weblogic Admin Console performance (and of FMW-Control)

Darwin IT - Wed, 2016-08-24 11:02
At one of my customers we have quite an impressive domain configuration. It's a FMW domain with SOA, OSB, BAM, WSM, MFT in clusters of 4 nodes. The thing is that when having started all the servers, the console becomes slooooooowwwww. Not to speak of FMW Control (em).

One suggestion is to set the 'Invocation Timeout Seconds' under MyDomain->Configuration->General->Advanced to a value like 2. And 'Management Operation Timeout' under Preferences->Shared Preferences to a value like 5:

This surely makes the console responsive again. But it actually means that the console gives up right away when querying for the (health) state of the servers. So instead of a health of 'OK', you get a 'server unreachable' message.

When you have a lot of servers in the domain, they all share the same default admin channel, and this seems to get flooded. The AdminServer does not get the responses in time. Sometimes a new request leads to a proper response, but it takes a lot of time.

To reduce the load on the default channel, we created an admin channel per managed server. Since there are a lot of servers, and we need to do this on several environments, I created a WLST script for it.
The script createAdminChannels.py:
# Create AdminChannels for WebLogic Servers
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.1, 2016-08-24
# Modify these values as necessary
import sys, traceback
scriptName = sys.argv[0]
lineSeperator = '____________________________________________________________'
#
def usage():
  print 'Call script as: '
  print 'Windows: wlst.cmd '+scriptName+' -loadProperties localhost.properties'
  print 'Linux: wlst.sh '+scriptName+' -loadProperties environment.properties'
  print 'Property file should contain the following properties: '
  print "adminUrl=localhost:7001"
  print "adminUser=weblogic"
  print "adminPwd=welcome1"
#
def connectToadminServer(adminUrl, adminServerName):
  print('Try to connect to the AdminServer')
  try:
    connect(userConfigFile=usrCfgFile, userKeyFile=usrKeyFile, url=adminUrl)
  except NameError, e:
    print('Apparently user config properties usrCfgFile and usrKeyFile not set.')
    print('Try to connect to the AdminServer with adminUser and adminPwd properties')
    try:
      connect(adminUser, adminPwd, adminUrl)
    except WLSTException:
      message='Apparently AdminServer not Started!'
      print (message)
      raise Exception(message)
#
def createAdminChannel(serverName, adminListenPort):
  channelName = serverName+'-AdminChannel'
  try:
    # If we can navigate to the channel it already exists.
    cd('/Servers/'+serverName+'/NetworkAccessPoints/'+channelName)
    print('Channel '+channelName+' for '+serverName+' already exists.')
  except WLSTException:
    try:
      print('Create Admin Channel for server: '+serverName+', with port: '+adminListenPort)
      cd('/Servers/'+serverName)
      channel=create(channelName,'NetworkAccessPoint')
      channel.setProtocol('admin')
      channel.setListenPort(int(adminListenPort))
      channel.setEnabled(true)
      print ('Succesfully created channel: '+channelName)
    except WLSTException:
      apply(traceback.print_exception, sys.exc_info())
      message='Failed to create channel '+channelName+'!'
      print (message)
      raise Exception(message)
#
def main():
  try:
    print (lineSeperator)
    print ('Create Admin Channels')
    print (lineSeperator)
    print('\nConnect to AdminServer ')
    connectToadminServer(adminUrl, adminServerName)
    print('Start Edit Session')
    edit()
    startEdit()
    # Create Admin Channels, pairing each server with its admin listen port
    # from the comma separated serverNames and serverAdminPorts properties.
    print('\nCreate Admin Channels')
    serverNameList = serverNames.split(',')
    adminPortList = serverAdminPorts.split(',')
    idx = 0
    for serverName in serverNameList:
      createAdminChannel(serverName, adminPortList[idx])
      idx = idx + 1
    # Activate changes
    print('Activate Changes')
    save()
    activate(block='true')
    print('\nExiting...')
    exit()
  except NameError, e:
    print('Apparently properties not set.')
    print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
    usage()
  except:
    apply(traceback.print_exception, sys.exc_info())
#call main()
main()

The shell script to call it, createAdminChannels.sh:
#!/bin/bash
# Create AdminChannels
# @author Martien van den Akker, Darwin-IT Professionals
# @version 2.1, 2016-08-24
PROPERTY_FILE=$1
. fmw12c_env.sh
echo Create Admin Channels
wlst.sh ./createAdminChannels.py -loadProperties $PROPERTY_FILE

And the example property file, darlin-vce-db.properties:
# Properties for creating the SOADomain
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-15
#
# Properties for AdminServer
adminUrl=darlin-vce-db:7001
adminServerName=AdminServer
# AdminUser
adminUser=weblogic
adminPwd=welcome1
# Servers and their Admin Listen Ports, comma separated, in the same
# order; start with the AdminServer. Example values, adjust to your domain.
serverNames=AdminServer,SoaServer1,SoaServer2
serverAdminPorts=7002,7102,7202

Call the script as $> createAdminChannels.sh darlin-vce-db.properties

In the property file you'll need to name every server in the property serverNames. And for each server the particular Admin Listen Port in serverAdminPorts, in the exact same order. Start with the AdminServer.

At the end of the script the changes are activated and then the AdminServer listens over https on the changed port.

Important: the servers need to be down, except for the AdminServer.

Unfortunately the infrastructure database was apparently down. So I haven't been able to start SOA, BAM, etc. to see if it is performant now. But I have high hopes...

when I am querying V$Datafile, I am seeing an entry with unrecoverable_change#

Tom Kyte - Wed, 2016-08-24 10:26
Hi team, when I am querying V$Datafile, I am seeing an entry with unrecoverable_change# as below we dont have any Primary standby configuration and this is Standalone DB. 1. how this Column updated (UNRECOVERABLE_CHANGE#) ? FILE# CREATION...
Categories: DBA Blogs

Memory_target and Memory_max_target

Tom Kyte - Wed, 2016-08-24 10:26
Hello Tom, I have very silly question for an experience person in oracle. but i am totally confused. please help As per my knowledge, memory_target is the parameter which oracle use to tune sga and pga components. and Memory_max_target is the p...
Categories: DBA Blogs

Clob EXTRACTVALUE over 4000 char limit

Tom Kyte - Wed, 2016-08-24 10:26
I am trying to extract a piece of text from a clob attribute that has descriptions in it. The problem I am encountering is when the description is over a certain number of characters, I believe 255 char, it fails and throws the error: <i>ORA-...
Categories: DBA Blogs

Transaction Isolation level - Serialization

Tom Kyte - Wed, 2016-08-24 10:26
Hi, I know its not recommended to use transaction isolation level as Serialization but for now we have to stick with it. Please pardon my knowledge as I'm new to this. I currently use Sybase ASA (SQLAnywhere) where we have the isolation level ...
Categories: DBA Blogs

Is it possible to use occi client of oracle11 to fetch/write extended data types that were added in oracle 12c?

Tom Kyte - Wed, 2016-08-24 10:26
Hi, I use oracle11 client library (libocci.so.11.1) to work with extended data types, like RAW(32000), VARCHAR(32000), etc. This is a new feature of Oracle 12c. When I try to fetch RAW data, I get exception: ORA-01461: can bind a LONG value on...
Categories: DBA Blogs

Partition pruning/elimination -reg..

Tom Kyte - Wed, 2016-08-24 10:26
Dear Tom, Sorry for the delay. 1 CREATE TABLE emp (no number, name VARCHAR2(10) , 2 PRIMARY KEY (no, name)) 3 partition by hash(no) 4 ( 5 PARTITION PART1 TABLESPACE "TS1" , 6 PARTITION PART2 TABLESPACE "TS2" 7* ) ...
Categories: DBA Blogs

hiding a server in the cloud

Pat Shuff - Wed, 2016-08-24 09:00
There was a question the other day about securing a server and not letting anyone see it from the public internet. Yesterday we talked about enabling a web server to talk on the private network and not be visible from the public internet. The crux of the question was can we hide the console and shell access and only access the system from another machine in the Oracle Cloud.

To review, we can configure ports into and out of a server by defining a Security Rule and Security List. The options that we have are to allow ports to communicate between the public-internet, sites, or instances. You can find out more about Security Lists from the online documentation. You must have the Compute_Operations role to be able to define a new Security List. With a Security List you can drop inbound packets without acknowledgement or reject packets with acknowledgement. The recommended configuration is to Drop with no reply. The outbound policy allows you to permit, drop without acknowledgement or reject the packet with acknowledgement. The outbound policy allows you to have your program communicate with the outside world or lock down the instance. By default everything is configured to allow outbound and deny inbound.

Once you have a Security List defined, you create exceptions to the list through Security Rules. You can find out more about Security Rules from the online documentation. You must have the Compute_Operations role to manage Security Rules. With rules you create a name for the rule and either enable or disable communications on a specific port. For example the defaultPublicSSHAccess rule is set up to allow communications on port 22 with traffic from the public-internet to your instance. This is mapped to the default Security List, which allows console and command line login to Linux instances. For our discussion today we are going to create a new Security List that allows local instances to communicate via ssh and disables public access. We will create a Security Rule that creates the routing locally on port 22. We define a port by selecting a Security Application. In this example we want to allow ssh, which corresponds to port 22. We additionally need to define the source and destination. We have the choice of connecting to a Security List or to a Security IP List. The Security IP List is either to or from an instance, the public internet, or a site. We can add other options using the Security IP List tab on the left side of the screen.

If we look at the default definitions we see that instance is mapped to the instances that have been allocated into this administrative domain (AD). In our example this maps to,, and – the three private IP address ranges that can be provisioned into our AD. The public internet is mapped to The site is mapped to the same base addresses with different netmasks. Note that the netmask is the key difference between the site and instance definitions.

Our exercise for today is to take our WebServer1 (or Instance 1 in the diagram below) and disable ssh access from the public internet. We also want to enable ssh from WebServer2 (or Instance 2) so that we can access the console and shell on this computer. We effectively want to hide WebServer1 from all public internet access and only allow proxy login to this server from WebServer2. The network topology will look like

Step 1: Go through the configuration steps (all 9 of them) from two days ago and configure one compute instance with an httpd server, ports open, and firewall disabled. We will call this instance WebServer1 and go with the default Security List that allows ssh from the public internet.

Step 2: Repeat step 1 and call this instance WebServer2 and go with the default Security List that allows ssh from the public internet.

Step 3: The first thing that we need to do is define a new Security List. For this security list we want to allow ssh on the private network and not on the public network. We will call this list privateSSH.

Step 4: Now we need to define a Security Rule for port 22 and allow communication from the instance to our privateSSH Security List that we just created. We are allowing ssh on port 22 on the 10.x.x.x network but not the public network.

Step 5: We now need to update the instance network options for WebServer1 and add the privateSSH Security List item and remove the default Security List. Before we make this change we have to set up a couple of things. We first copy the ssh keys from our desktop to the ~opc/.ssh directory to allow WebServer2 to ssh into WebServer1. We then test the ssh by logging into WebServer2 then ssh from WebServer2 to WebServer1. We currently can ssh into WebServer1 from our desktop. We can do this as opc just to test connectivity.

Step 6:We add the privateSSH, remove default, and verify the Security List is configured properly for WebServer1.

Step 7:Verify that we can still ssh from WebServer2 to WebServer1 but can not access WebServer1 from our desktop across the public internet. In this example we connect to WebServer1 as opc from our desktop. We then execute step 6 and try to connect again. We expect the second connection to fail.
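With the public rule removed, the only path to WebServer1 is through WebServer2. One way to make that hop transparent from the desktop is an OpenSSH client config entry; the sketch below uses placeholder addresses and assumes the opc keys from Step 5 are in place.

```
# ~/.ssh/config on the desktop (addresses are placeholders)
Host webserver2
    HostName <webserver2-public-ip>
    User opc

Host webserver1
    HostName <webserver1-private-10.x-ip>
    User opc
    # Tunnel through WebServer2 to reach the hidden instance
    ProxyCommand ssh -W %h:%p webserver2
```

With this in place, `ssh webserver1` tunnels through WebServer2 on port 22 without WebServer1 ever being exposed to the public internet.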

In summary, we have taken two web servers and hidden one from the public internet. We can log into a shell from the other web server but not from the public internet. We used web servers for this example because they are easy to test and play with. We could do something more complex like deploy PeopleSoft, JDE, E-Business Suite, or Primavera. Removing ssh access is the same and we can open up more ports for database or identity communication between the hidden and exposed services.

The answer to the question "can we hide a server from public internet access?" is yes. We can easily hide a server with Security Lists, Security Rules, Security IP Lists, and Security Applications. We could script these in Orchestrations or CLI scripts if we wanted to. In this blog we went through how to do this from the compute console and provided links to additional documentation to learn more about customizing this for different applications.

Becky's BI Apps Corner: OBIA Back-to-Beginnings - Naming Conventions and Jargon

Rittman Mead Consulting - Wed, 2016-08-24 08:30

It's easy to talk about a technology using only jargon. It's much harder to talk about a technology without using jargon. I have seen many meetings between business and IT break down because of this communication barrier. I find it more discouraging when I see this communication breakdown happen between advanced IT staff and new IT staff. For those of us in any technological field, it's easy to forget how long it took to learn all of the ins and outs, the terminology and jargon.

During a recent project, I had another consultant shadowing me to get experience with OBIA. (Hi, Julia!) I was 'lettering' a lot so I decided it was time to diagram my jargon. My scribbles on a whiteboard gave me the idea that it might be helpful to do a bit of connecting the dots between OBIA and data warehousing jargon and naming conventions used in OBIA.

BI Applications Load Plan phases: SDE - Source Dependent Extract

SDE is the first phase in the ETL process that loads source data into the staging area. SDE tasks are source database specific. SDE mappings that run in the load plan will load staging tables. These tables end with _DS and _FS among others.

SIL - Source Independent Load

SIL is the second phase in the ETL process that takes the staged data from the staging tables and loads or transforms them into the target tables. SILOS mappings that run in the load plan will load dimension and fact tables. These tables end with _D and _F among others.

PLP - Post Load Process

This third and final phase in the ETL process occurs after the target tables have been loaded and is commonly used for loading aggregate fact tables. PLP mappings that run in the load plan will load aggregate tables ending with _A. Aggregate tables are often fact table data that has been summed up by a common dimension. For example, a common report might look at finance data by the month. Using the aggregate tables by fiscal period would help improve reporting response time.

For further information about any of the other table types, be sure to read Table Types for Oracle Business Analytics Warehouse. Additionally, this page has probably the best explanation for staging tables and incremental loads.

Source System Acronyms

Since the SDE tasks are source-database specific, each SDE mapping's name also includes an acronym for the source system. Below are the supported source systems, the acronym used in mapping names, and an example for each.

  • Oracle E-Business Suite - ORA

    • SDE_ORA_DomainGeneral_Currency
  • Oracle Siebel - SBL

  • JD Edwards Enterprise One - JDEE

    • SDE_JDE_DomainGeneral_Currency
  • PeopleSoft - PSFT

    • SDE_PSFT_DomainGeneral_Currency_FINSCM
  • Oracle Fusion Applications - FUSION

    • SDE_FUSION_DomainGeneral_Currency
  • Taleo - TLO

    • SDE_TLO_DomainGeneral_Country
  • Oracle Service Cloud - RNCX

    • SDE_RNCX_DomainGeneral
  • Universal - Universal

    • SDE_Universal_DomainGeneral
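Putting the phase and source-system conventions together, a mapping name can be decoded mechanically. The helper below is purely illustrative (it is not part of OBIA); it splits a mapping name on underscores, reads the first token as the load-plan phase, and, for SDE mappings, the second token as the source acronym.

```python
# Illustrative only -- not part of OBIA. Decode an OBIA mapping name
# into its load-plan phase and (for SDE mappings) source-system acronym.
PHASES = {"SDE", "SIL", "PLP"}

def parse_mapping_name(name):
    parts = name.split("_")
    phase = parts[0] if parts and parts[0] in PHASES else None
    # Only SDE mappings embed a source acronym as the second token;
    # SIL and PLP mappings are source independent.
    source = parts[1] if phase == "SDE" and len(parts) > 1 else None
    return phase, source

print(parse_mapping_name("SDE_PSFT_DomainGeneral_Currency_FINSCM"))
# → ('SDE', 'PSFT')
```

So SDE_PSFT_DomainGeneral_Currency_FINSCM parses as an extract from PeopleSoft, while a SILOS mapping like SIL_DomainGeneral carries no source acronym at all.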

This wraps up our quick "Back-to-Beginnings" refresher on naming conventions and the jargon used in relation to ETL and mappings. Let me know in the comments below if there are other topics you would like me to cover in my "Back-to-Beginnings" series. As always, be sure to check out our available training, which now includes remote training options, and our On Demand Training Beta Program. For my next post I'll be covering two new features in OBIA, Health Check and ETL Diagnostics, which are the missing pieces you didn't know you've been waiting for.

Categories: BI & Warehousing

Columns Affected by Extended Data Type

Michael Dinh - Wed, 2016-08-24 08:02

I am not going to post how to convert to extended data type since there are many blogs on that already.

Just a reminder: there's no going back, so take a backup and, if possible, minimize changes during testing so that you can restore (ideal, though it may not be feasible).

Before reverting to MAX_STRING_SIZE=STANDARD, the columns affected by the extended data type need to be identified.

From Oracle documentation, MAX_STRING_SIZE controls the maximum size of VARCHAR2, NVARCHAR2, and RAW data types in SQL.

STANDARD means that the length limits for Oracle Database releases prior to Oracle Database 12c apply
(for example, 4000 bytes for VARCHAR2 and NVARCHAR2, and 2000 bytes for RAW).

EXTENDED means that the 32767 byte limit introduced in Oracle Database 12c applies.
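The listing below comes from the author's max_string_size.sql, which is not shown; a query along these lines against DBA_TAB_COLUMNS (a sketch, with the STANDARD limits hard-coded) would produce a similar result.

```sql
-- Sketch: flag columns whose declared length exceeds the STANDARD
-- limits (4000 bytes for VARCHAR2/NVARCHAR2, 2000 bytes for RAW).
SELECT owner, table_name, column_name, data_type,
       data_length, char_length, char_used
FROM   dba_tab_columns
WHERE  (data_type IN ('VARCHAR2', 'NVARCHAR2') AND data_length > 4000)
   OR  (data_type = 'RAW' AND data_length > 2000)
ORDER  BY owner, table_name, column_name;
```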

Test case:
$ sysdba @max_string_size.sql

SQL*Plus: Release Production on Wed Aug 24 05:41:22 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OWNER                TABLE_NAME                               COLUMN_NAME               DATA_TYPE  DATA_LENGTH CHAR_LENGTH C
-------------------- ---------------------------------------- ------------------------- ---------- ----------- ----------- -
MDINH                T                                        NAME                      VARCHAR2          5000        5000 B
MDINH                T2                                       T2                        RAW               2555           0
MDINH                T3                                       ID                        VARCHAR2         24000        6000 C
SYS                  DBA_ADDM_FINDINGS                        FINDING_NAME              VARCHAR2         32767       32767 B
SYS                  DBA_ADDM_FINDINGS                        IMPACT_TYPE               VARCHAR2         32767       32767 B
SYS                  DBA_ADDM_FINDINGS                        MESSAGE                   VARCHAR2         32767       32767 B
SYS                  DBA_ADDM_FINDINGS                        MORE_INFO                 VARCHAR2         32767       32767 B
SYS                  DBA_ADDM_TASKS                           ERROR_MESSAGE             VARCHAR2         32767       32767 B
SYS                  DBA_ADDM_TASKS                           STATUS_MESSAGE            VARCHAR2         32767       32767 B
SYS                  DBA_ADDM_TASK_DIRECTIVES                 DESCRIPTION               VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_ACTIONS                      MESSAGE                   VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_DEF_PARAMETERS               DESCRIPTION               VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_EXECUTIONS                   ERROR_MESSAGE             VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_EXECUTIONS                   STATUS_MESSAGE            VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_EXEC_PARAMETERS              DESCRIPTION               VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_FINDINGS                     FINDING_NAME              VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_FINDINGS                     IMPACT_TYPE               VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_FINDINGS                     MESSAGE                   VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_FINDINGS                     MORE_INFO                 VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_FINDING_NAMES                FINDING_NAME              VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_PARAMETERS                   DESCRIPTION               VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_RATIONALE                    IMPACT_TYPE               VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_RATIONALE                    MESSAGE                   VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_RECOMMENDATIONS              BENEFIT_TYPE              VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_TASKS                        ERROR_MESSAGE             VARCHAR2         32767       32767 B
SYS                  DBA_ADVISOR_TASKS                        STATUS_MESSAGE            VARCHAR2         32767       32767 B
SYS                  DBA_REGISTRY                             OTHER_SCHEMAS             VARCHAR2         32767       32767 B
SYS                  DBA_SCHEDULER_CHAIN_RULES                ACTION                    VARCHAR2         32767       32767 B
SYS                  DBA_SCHEDULER_CHAIN_RULES                CONDITION                 VARCHAR2         32767       32767 B
SYS                  DBA_SCHEDULER_JOBS                       PROGRAM_NAME              VARCHAR2         16000       16000 B
SYS                  DBA_SCHEDULER_JOBS                       PROGRAM_OWNER             VARCHAR2         16000       16000 B
SYS                  DBA_SCHEDULER_JOBS                       RAISE_EVENTS              VARCHAR2         32767       32767 B
SYS                  DBA_SCHEDULER_JOBS                       SCHEDULE_NAME             VARCHAR2         16000       16000 B
SYS                  DBA_SCHEDULER_JOBS                       SCHEDULE_OWNER            VARCHAR2         16000       16000 B
SYS                  DBA_SCHEDULER_JOB_RUN_DETAILS            ERRORS                    VARCHAR2         32767       32767 B
SYS                  DBA_SCHEDULER_JOB_RUN_DETAILS            OUTPUT                    VARCHAR2         32767       32767 B
SYS                  DBA_SCHEDULER_WINDOWS                    SCHEDULE_NAME             VARCHAR2         16000       16000 B
SYS                  DBA_SCHEDULER_WINDOWS                    SCHEDULE_OWNER            VARCHAR2         16000       16000 B
SYS                  DBA_VIEWS                                TEXT_VC                   VARCHAR2         32767       32767 B
SYS                  INT$DBA_VIEWS                            TEXT_VC                   VARCHAR2         32767       32767 B

40 rows selected.

test:(SYS@test):PRIMARY> show parameter max_string

NAME                                 TYPE                           VALUE
------------------------------------ ------------------------------ ------------------------------
max_string_size                      string                         EXTENDED

test:(SYS@test):PRIMARY> desc mdinh.t
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 ID                                                             VARCHAR2(1000)
 NAME                                                           VARCHAR2(5000)

test:(SYS@test):PRIMARY> desc mdinh.t2
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 T2                                                             RAW(2555)

test:(SYS@test):PRIMARY> desc mdinh.t3
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 ID                                                             VARCHAR2(6000 CHAR)

test:(SYS@test):PRIMARY> @nls.sql

PARAMETER                      SESSION                        DATABASE                       INSTANCE
------------------------------ ------------------------------ ------------------------------ ------------------------------
NLS_COMP                       BINARY                         BINARY                         BINARY
NLS_SORT                       BINARY                         BINARY
NLS_CALENDAR                   GREGORIAN                      GREGORIAN
NLS_CURRENCY                   $                              $
NLS_LANGUAGE                   AMERICAN                       AMERICAN                       AMERICAN
NLS_TERRITORY                  AMERICA                        AMERICA                        AMERICA
NLS_DATE_FORMAT                YYYY-MM-DD HH24:MI:SS          DD-MON-RR
NLS_TIME_FORMAT                HH.MI.SSXFF AM                 HH.MI.SSXFF AM
NLS_CHARACTERSET                                              AL32UTF8
NLS_ISO_CURRENCY               AMERICA                        AMERICA
NLS_DATE_LANGUAGE              AMERICAN                       AMERICAN
NLS_DUAL_CURRENCY              $                              $
NLS_NCHAR_CONV_EXCP            FALSE                          FALSE                          FALSE
NLS_LENGTH_SEMANTICS           CHAR                           BYTE                           BYTE
NLS_NCHAR_CHARACTERSET                                        AL16UTF16
NLS_NUMERIC_CHARACTERS         .,                             .,

20 rows selected.


Now it’s evident why there is no going back, since SYS objects appear to be modified too.

That’s the easy part. Next is to create a new database with identical components installed and hope that a full export/import will work.

Some useful information if you are thinking about migrating to the extended data type:
12c Indexing Extended Data Types Part I (A Big Hurt)

Unable to login with a SQL Authenticator

Darwin IT - Wed, 2016-08-24 06:25
For a project, we are migrating Forms to ADF.
There are also a number of reports which are not to be migrated yet.
Therefore, we need to keep the users in the database.
As we do not want to maintain two user stores, we thought it a good idea to create an authenticator in WebLogic that authenticates against the database.
There are loads of blog posts and support notes on how to configure a SQL Authenticator, so I won't repeat the procedure here.
Take a look at Oracle Support Document 1342157.1 or this post from Edwin Biemond.

However, we noticed a flaw in these (old) references.

In WebLogic 12c, when we create a SQL Authenticator, there is a field named Identity Domain.
From the Oracle documentation, we learn:
All Authentication providers included in WebLogic Server support identity domains. If the identity domain attribute is set on an Authentication provider, that Authentication provider can authenticate only users who are defined in that identity domain.
An identity domain is a logical namespace for users and groups, typically representing a discrete set of users and groups in the physical datastore. Identity domains are used to identify the users associated with particular partitions.

As we do not use partitions in our domain, there is no use for an Identity domain.
But we did not know that when we set up the authenticator (and who reads the entire manual, right?).

So following the previously mentioned resources, we created the SQL authenticator and entered the domain name in the Identity Domain field.
This resulted in a non-working authenticator.
  • When a user tried to log in using database credentials, the authentication always failed
  • No error message in the log
  • No activity in the database (no query was executed to check the credentials)
To further analyse the issue, we added some Java options to the Admin server, using the setDomainEnv script:
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.kernel.debug=true -Dweblogic.log.StdoutSeverity=Debug -Dweblogic.log.LogSeverity=Debug -Dweblogic.StdoutDebugEnabled=true -Dweblogic.log.LoggerSeverity=Debug -Dweblogic.debug.DebugSecurityAtn=true" 

This gave us more insight into the issue. The log file now revealed:
####<23-aug-2016 15:13:02 uur CEST> <Debug> <SecurityAtn> <BSCC6112> <DefaultServer> <[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <d489addb-c960-41f1-babb-f912aec329bc-00000036> <1471957982959> <[severity-value: 128] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <weblogic.security.providers.authentication.shared.DBMSAtnLoginModuleImpl.login exception: 
java.security.PrivilegedActionException: javax.security.auth.login.LoginException: javax.security.auth.callback.UnsupportedCallbackException: Unrecognized Callback: class weblogic.security.auth.callback.IdentityDomainUserCallback
at java.security.AccessController.doPrivileged(Native Method)
at com.bea.common.security.internal.service.LoginModuleWrapper.login(LoginModuleWrapper.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)

Unfortunately, we still had no clue.
As you have read from the beginning of the post, you might think that setting the Identity Domain field to DOMAIN might solve the issue (the log states partition-name: DOMAIN) but no, that's no solution either.

The trick is to leave the Identity Domain field blank. After that, the authenticator worked like a charm.

Why taking good holidays is good practice

Steve Jones - Wed, 2016-08-24 02:22
Back when I was a fairly recent graduate I received one of the best pieces of advice I've ever received.  The project was having some delivery pressures and I was seen as crucial to one of the key parts.  As a result my manager was putting pressure on me to cancel my holiday (two weeks of Windsurfing bliss in the Med with friends) with a promise that the company would cover the costs.  I was
Categories: Fusion Middleware

Links for 2016-08-23 [del.icio.us]

Categories: DBA Blogs

Social Successes Shine at #OOW16

Linda Fishman Hoyle - Tue, 2016-08-23 17:50

 A Guest Post by Angela Wells, Oracle Senior Director, Outbound Product Management, Social 

Once again, Oracle will shine a spotlight on social customer successes and the latest Oracle Social Cloud innovations. This year’s content will focus on how social touches every area of the business from customer service to marketing to business intelligence. And some of the world’s biggest brands—like General Motors, Dr Pepper Snapple Group, and Mack Trucks—will join us on stage to bring these social innovations to life. Also, back by popular demand, is our Social Intelligence Center, showcasing our interactive data visualization technology across several HD screens. Think of it as the interactive social media hub for OpenWorld.

Oracle OpenWorld 2016 kicks off on Sunday, September 18, with great User Group content and an opening keynote by Larry Ellison. The event presents an excellent opportunity for digital and social professionals to get engaged, stay informed, and to learn how to adapt to massive change and innovation happening across every single industry worldwide.

Follow Us Via Social

Whether attending or not, be sure to follow Oracle Social Cloud on our social channels as we’ll be covering #OOW16 from start to finish: Facebook, Twitter, LinkedIn, Google+, Instagram, and Oracle Social Spotlight.

Here are some of the highlights:

Social Cloud and Customer Presentations

There’s no better advertising than showcasing your customer’s successes. And that’s what all our sessions will do, while giving attendees real, take-home execution tactics and strategies. See below some highlights: 

  • “Social Data Driving Business Intelligence” with General Motors, Polaris, Dr Pepper Snapple Group and Oracle Social Cloud [CON7342]
  • “Is Social the New Customer Call Center?” with General Motors, Cummins, D&M Holdings and Oracle Social Cloud [CON7340]
  • “Tackling The Content Conundrum” with T.H. March, FamilyShare Network and Oracle Social Cloud [CON7341]
  • “Influencers - Business's Not So Silent Partner” with Mack Trucks, M&C Saatchi, Hornblower and Oracle Social Cloud [CON7344]
  • “How Social Service Guides the Customer Experience” A Ted-style session with Oracle’s Angela Wells and General Motors’ Dr. Rebecca Harris Burns [CON7339]
  • “A Sky-High Overview of the Oracle Social Cloud” with Oracle Social executives Faz Assadi and Christie Sultemeier [CON7343]

Social Intelligence Center

Part of CX Central, Oracle Social Cloud’s Social Intelligence Center is a command-center style data visualization experience. Our SRM intakes incredible volumes of social data across the globe and displays it in engaging visuals so users can quickly understand and spot trends and insights. We’ll have the SRM searching for everything related to #OOW16 so users can see what’s trending at OpenWorld from top influencers to geo-location data. The area will have plenty of seating and charging stations. Additionally, Oracle Social Cloud executives will be on-hand to give demos of the SRM, allowing attendees to experiment with our social tool and understand the power social can have for your business.

The Social Intelligence Center is part of CX Central, which is located in Moscone Center, West Level 2 lobby area. Hours of operation will be Monday (10:15am – 5:30pm), Tuesday (10:15am – 5:15pm, and networking reception 6:00pm – 7:30pm), and Wednesday (10:15am – 4:15 pm).

Inspiring and Innovative Keynotes  

Always exciting and inspirational, keynotes by top Oracle executives—Larry Ellison, Mark Hurd, Thomas Kurian, and Chris Leone—provide insight into the state of business, technology, innovation, and the future.

Celebration and Networking

The networking fun starts on Monday night and events go on throughout the week across all of San Francisco, with a spectacular community appreciation event on Wednesday night along the city waterfront at AT&T Park, featuring a live performance by Billy Joel, and complimentary food and beverages. Bring your phone and let’s get social!

Encourage your prospects and customers to come celebrate the Oracle Community by networking with digital and business leaders, cloud customers, and subject matter experts. It’s a can’t-miss event—and in one of the most beautiful, exciting cities in the world—San Francisco.

The Madness of the Modern Human

FeuerThoughts - Tue, 2016-08-23 17:13
"I eat using Uber-Eats.I push a button, the food is made, the driver delivers it to me. But when it's fully autonomous, how does the food actually get to my door? There's a tech stack that can get the car through the physical world to my doorstep, but then what? Does some robot get out of my car and deliver my food? That's hard. I don't know if that's two decades out, but the point is the physical world is getting wired up fast."
As you might guess, that is a quote from Uber's Mad CEO, Something-or-Other Kalanick. A unicorn billionaire who first wants to push taxi cab drivers to poverty, replacing them with "gig" contractors, who will then (in not too many years) be replaced by driverless cars.

Seriously, what is wrong with us? With the oceans crashing against our coastal cities, the planet warming, the poles melting, the forests being razed, the Sixth Extinction well in progress, can we still be so madly obsessed with using technology in the most absurd, energy-consuming, convenience-crazed ways?

A driverless car brings the food to my building "but then what?"

BUT THEN WHAT? How about getting your big fat ever spreading butt out of your Lazy-Boy and answering the damn door yourself, maybe even walking outside to a neighborhood restaurant and partaking in a meal around others?

Silly, sad humans.
Categories: Development

Update using subquery with group by

Tom Kyte - Tue, 2016-08-23 16:06
Hi Tom I have a table with 18 million rows. I need to do an once-off update to fix a data issue. CREATE TABLE TESTLARGE(CODE number, STATE varchar2(5), SDATE date, flag char(1)); This table does not have any primary key enforced. I have t...
Categories: DBA Blogs

Problem with a few characters in Java/JDBC that can't be converted from a source 9i Solaris DB to a 12c Linux system

Tom Kyte - Tue, 2016-08-23 16:06
Hello Tom, 1.what is the right superset for WE8DEC?? 2. what OS charset (say I would it set in my java program) should by set if I want to read varchar fields and write them in a new varchar field? Apart from this they received "ORA-29345:...
Categories: DBA Blogs

DBMS_STATS.GATHER_TABLE_STATS Gives wrong row count (NUM_ROWS column in user_tables)

Tom Kyte - Tue, 2016-08-23 16:06
Tom, I am a big fan of yours and you are awesome. Here is something I observed today. I always thought analyzing table will populate number of rows in user_tables.num_rows column. I have a table with 204,913 records. When I do a select count(1) fr...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator