Feed aggregator

Insert All Record Count for different tables

Tom Kyte - Sat, 2017-05-20 02:46
Hi Tom, I am using a multi-insert statement as below in a SQL script. There are around 50 insert all statements like this in my script for different tables. INSERT ALL WHEN FLAG='Y' THEN INTO table1() values() WHEN ...
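
The question (truncated above) boils down to counting how many rows a multi-table INSERT ALL puts into each target table. A minimal sketch with hypothetical tables (src, tgt_y, tgt_n are placeholders, not the OP's script), showing that SQL%ROWCOUNT only reports the total across all targets, so per-table counts have to be obtained separately:

-- hypothetical tables and data
create table src   (flag varchar2(1), val number);
create table tgt_y (val number);
create table tgt_n (val number);

insert into src values ('Y', 1);
insert into src values ('Y', 2);
insert into src values ('N', 3);
commit;

begin
	insert all
		when flag = 'Y' then into tgt_y (val) values (val)
		when flag = 'N' then into tgt_n (val) values (val)
	select flag, val from src;

	-- SQL%ROWCOUNT is the total across all targets (3 here);
	-- per-table counts have to be queried afterwards
	dbms_output.put_line('Total inserted: ' || sql%rowcount);
end;
/

select	(select count(*) from tgt_y) y_rows,
	(select count(*) from tgt_n) n_rows
from	dual;
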
Categories: DBA Blogs

Extent Sizing for Very Large Segment

Tom Kyte - Sat, 2017-05-20 02:46
We are in the process of reorganizing a large table: 6+ billion rows, almost 500 gigs of space, 126k extents. This table is an audit record so it would rarely --if ever-- get updated or deleted. My question has to do with extent sizing: What would...
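
A hedged sketch (not from the original answer) of the kind of query you might run first, to see how the existing extents are distributed before settling on an extent-sizing policy; the owner and segment name are placeholders:

select	bytes/1024/1024 extent_mb, count(*) extents
from	dba_extents
where	owner        = 'AUDIT_OWNER'	-- placeholder
and	segment_name = 'AUDIT_TABLE'	-- placeholder
group by bytes
order by bytes;
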
Categories: DBA Blogs

255 columns

Jonathan Lewis - Fri, 2017-05-19 11:49

This is one of my “black hole” articles – I drafted it six months ago, but forgot to publish it.

A recent post on OTN highlighted some of the interesting oddities that appear when you create tables with more than 255 columns. In fact this was a more subtle case than usual because it reminded us that it’s possible to have a partitioned table which appears to have fewer than the critical 255 columns while actually having more than 255 columns, thanks to the anomaly of how Oracle handles dropping columns in a partitioned table. (For a useful insight see this note from Dominic Brooks – and for a nice thought about preparing simple tables for an exchange with such a partitioned table, look at the 12.2 feature of “create table for exchange” in Maria Colgan’s article.)

The thread took me down the path of trying to recreate some notes I wrote a long time ago and can no longer find. As it turned out, the OP’s problem wasn’t the basic one I had assumed, but I thought I’d publish a bit of the work I had done so that you can see another of the funny effects that appear when your table definition has too many columns (and you use them).

The OP told us about a table with more than 350 columns, so here’s a little script I wrote to generate a table with 365 columns and some data. (It turned out that the OP had more than 390 columns in the table, but 30+ had been “dropped”.)


rem
rem	Script:		wide_table_2.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Nov 2016
rem	Purpose:	
rem
rem	Last tested 
rem		11.2.0.4
rem

create sequence s1;

declare
	m_statement_1	varchar2(32767) := 
		'create table t1(col0001 varchar2(10),';
	m_statement_2	varchar2(32767) := 
		'insert into t1 values(lpad(s1.nextval,10),';
begin
	for i in 2 .. 365 loop
		m_statement_1 := m_statement_1 ||
			'col' || to_char(i,'FM0000') || ' varchar2(100),'
		;
		m_statement_2 := m_statement_2 ||
			case when i in (2,3,4) 
			-- case when i in (122,123,124) 
			-- case when i in (262,263,264) 
				then 'dbms_random.string(''U'',ceil(dbms_random.value(0,100))),'
			     when i = 365
				then 'lpad(s1.currval,7))'
				else '''COL' || to_char(i,'FM0000') || ''','
			end
		;
	end loop;
	
	m_statement_1 := substr(m_statement_1, 1, length(m_statement_1) - 1);
	m_statement_1 := m_statement_1 || ') pctfree 25';

	execute immediate m_statement_1;

	for i in 1..10000 loop
		execute immediate m_statement_2;
	end loop;

end;
/


I’ve taken a fairly simple approach to building a string that creates a table – and it’s easy to adjust the number of columns – and a string to insert some values into that table. The insert statement will insert a row number into the first and last columns of the table and generate a random length string for a few of the columns. I’ve picked three possible sets of three columns for the random length string; one set is definitely going to be in the first row piece, one set is definitely going to be in the last row piece, and (since the row will split 110/255) one will be somewhere inside whichever is the larger row piece.

If I wanted to do something more sophisticated I’d probably have to switch to a PL/SQL array for the two statements strings – 32,767 characters doesn’t give me much freedom to play if I wanted to test a table with 1,000 columns.
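
As a rough illustration of that switch (a sketch, not part of the original tests): dbms_sql will accept the statement as a collection of pieces, so the 32,767-byte limit applies per piece rather than to the whole statement. The table name t2 and the 1,000-column loop below are just placeholders:

declare
	m_cursor	integer := dbms_sql.open_cursor;
	m_pieces	dbms_sql.varchar2a;	-- associative array of varchar2(32767)
begin
	m_pieces(1) := 'create table t2(col0001 varchar2(10)';
	for i in 2 .. 1000 loop
		-- each element may be up to 32,767 bytes; the limit no longer
		-- applies to the statement as a whole
		m_pieces(i) := ',col' || to_char(i,'FM0000') || ' varchar2(100)';
	end loop;
	m_pieces(1001) := ') pctfree 25';

	-- parse(cursor, statement pieces, lower bound, upper bound, lfflg, language flag)
	dbms_sql.parse(m_cursor, m_pieces, m_pieces.first, m_pieces.last, false, dbms_sql.native);
	-- DDL is executed by the parse call itself; dbms_sql.execute would be
	-- needed if the pieces formed a DML statement instead
	dbms_sql.close_cursor(m_cursor);
end;
/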

Having created and populated my table, these are the three tests I ran against it:


analyze table t1 compute statistics;

prompt	====
prompt	CTAS
prompt	====

create table t1a pctfree 25 as select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

prompt	======
prompt	Insert
prompt	======

truncate table t1a;
insert into t1a select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

prompt	=============
prompt	Insert append
prompt	=============

truncate table t1a;
insert /*+ append */ into t1a select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

The first test creates a copy of the original table (t1a, at pctfree 25 to match the original) with a simple “create as select”.

The second test truncates the table and does a basic “insert as select” to repopulate the copy table.

The third test truncates the table again and does an “insert as select” with the /*+ append */ hint to repopulate the copy table.

In all three cases (and with three variations of where the longer random strings went) I used the analyze command to gather stats on the tables so that I could get a count of the number of chained rows; and I dumped a couple of blocks from the tables to see what the inserted rows looked like.

Here’s a summary of the results when the random-length columns are near the start of the row (the position didn’t really affect the outcome and the results for 12.1.0.2 were very similar):

====
CTAS
====

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       9504        237

======
Insert
======

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       6676       3287

=============
Insert append
=============

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       9504        237

As you can see we get two significantly different results: the CTAS and the “insert append” produce tables with 9,504 blocks and 237 chained rows reported, while the original table (single row inserts) and the regular “insert as select” produce tables with 6,676 blocks and more than 3,200 chained rows (3,313 and 3,287 respectively). It seems that the CTAS and direct path insert have minimised the number of chained rows at the cost of a dramatically increased number of blocks.

We know, of course, that every row in this table will consist of two row pieces, one of 110 columns and one of 255 columns; so every row is in some respects chained due to the potential for intra-block chaining of those two pieces, but the analyze command reports only those rows which start in one block and end in another block as chained rows – intra-block chaining doesn’t count (in this version of Oracle).
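
As a side note (a quick sketch, not one of the original tests), you can get a feel for whether the chaining actually costs anything for a particular query by checking the session statistic ‘table fetch continued row’ before and after fetching a column that lives in the tail-end row piece:

select	sn.name, ms.value
from	v$statname sn, v$mystat ms
where	ms.statistic# = sn.statistic#
and	sn.name = 'table fetch continued row'
;

-- col0365 is in the trailing row piece, so a full scan that has to return it
-- will have to visit the second row piece for every row
select /*+ full(t1) */ count(col0365) from t1;

-- re-run the statistic query; the delta is the number of times Oracle had to
-- move to another row piece to find the requested column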

There are two questions to address in these results: the first is “What’s happening?”, the second, which we ask when we get the answer to the first, is “How come the direct path method still gives us some chained rows?”

I believe the answer to the first question is that the direct path method tries to avoid splitting a row across two blocks: even if it means leaving a huge amount of empty space in a block, Oracle starts a new row in a new block if there isn’t enough space for both of the anticipated row pieces to fit in the current block. I think this may be a feature to help Exadata and direct path reads, where even a relatively small number of chained rows (whose continuation pieces might be outside the current Exadata storage unit) could have a catastrophic impact on performance, because the system would have to do a single block read to pick up each extra piece – devastating for an Exadata smart scan.

So why do some rows still see chaining under this strategy? I think it’s because there’s a small error in the arithmetic somewhere (possibly visible only in ASSM tablespaces, perhaps related to row-piece headers): Oracle thinks there’s enough space for both row pieces when there isn’t quite, so it inserts both pieces and then has to chain the second one. (I’ll back this up with some analysis of block dumps in a moment.)

So you have a choice – lots of wasted space and a little row-chaining, or maximum packing of data and (potentially) lots of row-chaining. But there’s more: I’ve said we get one row piece of 110 columns and one of 255 columns for each row, but the point at which the split occurs and the order in which the pieces are inserted depend on the method used.

  • Single row inserts (initial table creation): The split occurred at column 111 – so the leading 110 columns are in one row piece and the trailing 255 columns are in the other; and the row piece with the tail-end columns is inserted first.
  • Array inserts (normal): Exactly the same as the single row inserts.
  • Direct path inserts / CTAS: The split occurred at column 256, with the leading row-piece inserted first and the trailing 110 column row-piece inserted second.

I’m not sure that this really matters very much when you consider the dramatic difference in size that can appear in the comparison between direct path and normal inserts, but maybe there’s someone who will notice a performance (or even space) side effect because of this inconsistency. (And I haven’t even thought about the effects of basic or oltp compression yet!)

I said I’d come back to the row-chaining anomaly. One of the little details that I didn’t include in my code listing was the call to “analyze table … list chained rows” that I did (after executing $ORACLE_HOME/rdbms/admin/utlchain.sql) to list the head rowids of the chained rows into the chained_rows table. After doing this I ran a simple PL/SQL loop to dump all the relevant blocks to the trace file:

begin
	for r in (
		select 
			dbms_rowid.rowid_relative_fno(head_rowid) file#, 
			dbms_rowid.rowid_block_number(head_rowid) block# 
			from	chained_rows
		) loop
			execute immediate 'alter system dump datafile ' || r.file# || ' block ' || r.block#;
	end loop;
end;
/

Here’s a little extract from the resulting trace file showing you what the start of a row piece looks like when dumped:

tab 0, row 0, @0x1765
tl: 2075 fb: --H-F--- lb: 0x0  cc: 255
nrid:  0x01401bc4.1
col  0: [10]  20 20 20 20 20 20 20 33 36 32
col  1: [19]  41 52 43 4a 4a 42 4e 55 46 4b 48 4c 45 47 4c 58 4c 4e 56
col  2: [ 8]  59 4b 51 46 4a 50 53 55
col  3: [17]  53 58 59 4e 4a 49 54 4a 41 5a 5a 51 44 44 4b 58 4d
col  4: [ 7]  43 4f 4c 30 30 30 35
col  5: [ 7]  43 4f 4c 30 30 30 36

A convenient thing to check is the cc: entry (end of 2nd line). You can see that this row piece has 255 columns, and if you look at the first six columns dumped you can see that it’s the row numbered 362, then there are three columns of different length strings, then two columns with the values ‘COL0005’ and ‘COL0006’ respectively. It’s the “cc:” entry that’s useful though. I’m going to do a bit of simple unix hackery:

grep " cc: " test_ora_24398.trc | sed "s/^.*cc: //"  | sort | uniq -c | sort -n
      1 1
      5 2
    112 108
    125 109
    237 110
    474 255

In my 237 blocks with chained rows I had 474 row pieces of 255 columns and 237 row pieces of 110 columns; then I had 125 row pieces that had lost (and therefore chained) one column and 112 row pieces that had lost (and therefore chained) two columns. I also had a couple of small “tail-end” pieces from earlier blocks scattered in these blocks. These figures suggest that there’s a small error (actually no more than about 20 bytes) in the calculation Oracle does to decide whether it can fit a whole row into the current block or whether it has to go on to the next empty block.

Conclusions

When copying a table defined with more than 255 columns there’s the potential for a huge variation in the space usage and chain count depending on whether you do a CTAS (or insert /*+ append */) or a simple insert. You have to decide which option is the biggest threat to your available resources.

There is a little anomaly with the way in which rows are split that is also dependent on the method used for copying – this may also have some effect, though perhaps small enough to be ignored when compared with the space/chaining difference.

Even though CTAS/direct path insert can eliminate a lot of row chaining it is still possible to find some row chaining in the resulting data. This may be the result of a calculation error (or possibly a deliberate space saving compromise).

Lagniappe

Oracle 12.2 may do things very differently.

 


ROWNUM Bugs

Tom Kyte - Fri, 2017-05-19 08:26
Hi, i read this below question https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:948366252775 in latest follow up a reader posted like this <code>( 5 years later... select * from X where rownum in (select 1 from dual) sti...
Categories: DBA Blogs

NOLOGGING Bad For Recovery

Michael Dinh - Thu, 2017-05-18 22:41

What doesn’t kill you makes you stronger.
I have gone through NOLOGGING hell and I am not sure I am out of it yet.
A disk was lost from the server and the database was recovered.

Oracle Standard Edition (SE2) for Microsoft Windows x64 (64-bit).

RMAN> restore database validate preview;

  List of Archived Logs in backup set 4098
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    15209   134553557  26-APR-17 134574622  26-APR-17
  1    15218   134681063  26-APR-17 134690751  27-APR-17
RMAN-05119: recovery can not be done to a consistent state.
Media recovery start SCN is 133224369
Recovery must be done beyond SCN 133235710 to clear datafile fuzziness

RMAN> spool log to recover.log
RMAN> recover database until sequence 15209;
RMAN> alter database open resetlogs;
RMAN> exit 

Bug 20315311 – RMAN-5119: recovery can not be done to a consistent state (Doc ID 20315311.8)
The content was last updated on: 17-FEB-2017

RMAN gives the following notification in the 'restore database preview' output even though a successful backup has been performed:
  RMAN-05119: recovery can not be done to a consistent state
 
Workaround
 None

SQL> select * from mytable;

select * from mytable
                      *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 915)
ORA-26040: Data block was loaded using the NOLOGGING option

RMAN> validate tablespace USERS;

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5    OK     334            263346       848640          134981624

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
8    OK     295            1225517      3932160         134981561

Finished validate at 18-MAY-17

RMAN> list failure;

Database Role: PRIMARY

no failures found that match specification

RMAN> advise failure;

Database Role: PRIMARY

no failures found that match specification

RMAN> exit

RMAN> backup validate check logical database;

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5    OK     334            263346       848640          134982083
  File Name: 

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
8    OK     295            1225517      3932160         134982061
  File Name: 

Finished validate at 18-MAY-17  

Basically, NOLOGGING operations occurred after the backup that was used for the resetlogs recovery,
and there was no datafile/database backup taken after those NOLOGGING operations,
since database backups are performed only once a day.

The Gains and Pains of Nologging Operations (Doc ID 290161.1)

However, NOLOGGING is intended for configurations in which media recovery 
or the recovery of the corresponding object is not important. 
Thus, if the disk or tape or storage media fails, 
you will not be able to recover your changes from the redo because the changes were never logged.  
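
If the business decision is that these objects must be recoverable, the usual prevention (a hedged aside, not something the note above prescribes for this particular system) is to override NOLOGGING at the database or tablespace level and then take a fresh backup after any remaining NOLOGGING loads:

SQL> select force_logging from v$database;

SQL> -- override NOLOGGING attributes for the whole database ...
SQL> alter database force logging;

SQL> -- ... or only for a specific tablespace
SQL> alter tablespace users force logging;
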
SQL> select max(NONLOGGED_END_CHANGE#), max(NONLOGGED_END_TIME), max(RESETLOGS_CHANGE#), max(RESETLOGS_TIME) from v$nonlogged_block;
MAX(NONLOGGED_END_CHANGE#) MAX(NONLOGGED_END_TI MAX(RESETLOGS_CHANGE#) MAX(RESETLOGS_TIME)
-------------------------- -------------------- ---------------------- --------------------
                 134534115

SQL> select count(*) from v$nonlogged_block;
  COUNT(*)
----------
        64

-- NOTICE: this matched what RMAN reported.
SQL> select file#, count(*), sum(blocks) from v$nonlogged_block group by file#;
     FILE#   COUNT(*) SUM(BLOCKS)
---------- ---------- -----------
         5         54         334
         8         10         295

SQL> select resetlogs_change#, resetlogs_time, current_scn from v$database;
RESETLOGS_CHANGE# RESETLOGS_TIME       CURRENT_SCN
----------------- -------------------- -----------
        134553558 15-MAY-2017 18:28:53   135266743

SQL> select file#, block#, NONLOGGED_START_CHANGE# from v$nonlogged_block
  2  where NONLOGGED_START_CHANGE# > (select RESETLOGS_CHANGE# from v$database)
  3  ;

no rows selected

SQL>

Truncating the table and reloading the data resolves the SELECT issue, but the block corruption (Free Block) still exists.
You can read more in Playing with Oracle Free Block Corruption

SQL> truncate table mytable;

Table truncated.

SQL> select * from mytable;

no rows selected

SQL> r
  1  SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
  2  , greatest(e.block_id, c.block#) corr_start_block#
  3  , least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
  4  , least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
  5  - greatest(e.block_id, c.block#) + 1 blocks_corrupted
  6  , null description
  7  FROM dba_extents e, v$nonlogged_block c
  8  WHERE e.file_id = c.file#
  9  AND e.block_id <= c.block# + c.blocks - 1
 10  AND e.block_id + e.blocks - 1 >= c.block#
 11  UNION
 12  SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
 13  , greatest(f.block_id, c.block#) corr_start_block#
 14  , least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
 15  , least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
 16  - greatest(f.block_id, c.block#) + 1 blocks_corrupted
 17  , 'Free Block' description
 18  FROM dba_free_space f, v$nonlogged_block c
 19  WHERE f.file_id = c.file#
 20  AND f.block_id <= c.block# + c.blocks - 1
 21  AND f.block_id + f.blocks - 1 >= c.block#
 22  order by file#, corr_start_block#
 23*

                                                                                      START     END
OWNER      SEGMENT_TYPE         SEGMENT_NAME         PARTITION_NAME        FILE#     BLOCK#  BLOCK# BLOCKS_CORRUPTED DESCRIPTION
---------- -------------------- -------------------- -------------------- ------ ---------- ------- ---------------- ------------------------------
SCHEMA    TABLE                ########                                       5        219     219                1
SCHEMA    TABLE                ########                                       5        227     227                1
SCHEMA    TABLE                ########                                       5        235     235                1
SCHEMA    TABLE                ##################                             5        251     251                1
SCHEMA    TABLE                ##########                                     5        275     275                1
SCHEMA    TABLE                ############                                   5        283     283                1
SCHEMA    TABLE                #############                                  5        459     459                1
SCHEMA    TABLE                ##########                                     5        467     467                1
SCHEMA    TABLE                ################                               5        491     491                1
SCHEMA    TABLE                #############                                  5        499     499                1
SCHEMA    TABLE                ##################                             5        851     851                1
SCHEMA    TABLE                ########                                       5        875     875                1
SCHEMA    TABLE                #############                                  5        883     883                1
SCHEMA    TABLE                ################                               5        891     891                1
SCHEMA    TABLE                ##################                             5        899     903                5
SCHEMA    TABLE                ################                               5        907     907                1
SCHEMA    TABLE                ############                                   5        915     919                5
SCHEMA    TABLE                ############                                   5       1091    1091                1
SCHEMA    TABLE                ##########                                     5       1107    1107                1
SCHEMA    TABLE                ##############                                 5       1323    1327                5
SCHEMA    TABLE                ############                                   5       1331    1331                1
SCHEMA    TABLE                ##############                                 5       1347    1351                5
SCHEMA    TABLE                ############                                   5       1355    1355                1
SCHEMA    TABLE                ###########                                    5       1643    1643                1
SCHEMA    TABLE                ##########                                     5       1651    1651                1
SCHEMA    TABLE                ###########                                    5       2099    2103                5
SCHEMA    TABLE                ###############                                5       2531    2531                1
SCHEMA    TABLE                ##############                                 5       2539    2543                5
SCHEMA    TABLE                ##############                                 5       2547    2547                1
SCHEMA    TABLE                ###############                                5       2707    2707                1
SCHEMA    TABLE                ###########                                    5       2715    2715                1
SCHEMA    TABLE                ###############                                5       2875    2875                1
SCHEMA    TABLE                ##############                                 5       4211    4215                5
                                                                               5       4216    4218                3 Free Block
SCHEMA    TABLE                ###########                                    5       4611    4611                1
SCHEMA    TABLE                ########                                       5       4627    4627                1
                                                                               5       4872    4877                6 Free Block
                                                                               5       4880    4886                7 Free Block
                                                                               5       5064    5071                8 Free Block
                                                                               5       5073    5087               15 Free Block
                                                                               5       5089    5103               15 Free Block
                                                                               5       5105    5119               15 Free Block
                                                                               5       5121    5135               15 Free Block
                                                                               5       5137    5159               23 Free Block
                                                                               5       5161    5175               15 Free Block
                                                                               5       5177    5191               15 Free Block
                                                                               5       5193    5207               15 Free Block
                                                                               5       5209    5223               15 Free Block
                                                                               5       5225    5239               15 Free Block
                                                                               5       5241    5255               15 Free Block
                                                                               5       5257    5263                7 Free Block
                                                                               5       5328    5335                8 Free Block
                                                                               5       5337    5343                7 Free Block
                                                                               5     229506  229550               45 Free Block
                                                                               5     608697  608703                7 Free Block
                                                                               8    1120824 #######                5 Free Block
                                                                               8    1120872 #######                8 Free Block
                                                                               8    1274888 #######                8 Free Block
                                                                               8    1536681 #######               14 Free Block
                                                                               8    1536697 #######               11 Free Block
                                                                               8    1536713 #######                6 Free Block
                                                                               8    1791056 #######                8 Free Block
                                                                               8    2804571 #######               37 Free Block
                                                                               8    3213819 #######                5 Free Block
                                                                               8    3854212 #######              193 Free Block

65 rows selected.

Elapsed: 00:09:22.13
SQL>

How to identify all the Corrupted Objects in the Database with RMAN (Doc ID 472231.1)
In 12c, the NOLOGGING blocks identified by RMAN validate are recorded in the new view v$nonlogged_block

set lines 200 pages 1000 tab off trimsp on timing on
col owner for a10
col segment_type for a20
col segment_name for a20
col partition_name for a20
col partition_name for a20
col description for a30
col file# for 99999
col corr_start_block# heading 'CORRUPT|START|BLOCK#' for 999999999
col corr_end_block# heading 'CORRUPT|END|BLOCK#' for 999999999
SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
, greatest(e.block_id, c.block#) corr_start_block#
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
- greatest(e.block_id, c.block#) + 1 blocks_corrupted
, null description
FROM dba_extents e, v$nonlogged_block c
WHERE e.file_id = c.file#
AND e.block_id <= c.block# + c.blocks - 1 AND e.block_id + e.blocks - 1 >= c.block#
UNION
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
, greatest(f.block_id, c.block#) corr_start_block#
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
- greatest(f.block_id, c.block#) + 1 blocks_corrupted
, 'Free Block' description
FROM dba_free_space f, v$nonlogged_block c
WHERE f.file_id = c.file#
AND f.block_id <= c.block# + c.blocks - 1 AND f.block_id + f.blocks - 1 >= c.block#
order by file#, corr_start_block#
;
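
For comparison (a hedged aside, not from the original post), before 12c the blocks invalidated by NOLOGGING operations were reported in v$database_block_corruption with a corruption type of NOLOGGING rather than in v$nonlogged_block:

select file#, block#, blocks, corruption_change#, corruption_type
from   v$database_block_corruption
where  corruption_type = 'NOLOGGING'
order by file#, block#;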


A Sneak Peek at Oracle’s Chatbot Cloud Service and 5 Key Factors Necessary for Bot ROI

In early May, I flew out to Oracle HQ in San Francisco for an early look at their yet-to-be released Oracle Intelligent Bots Service.  The training left me ecstatic that the technology to quickly build great chatbots is finally here. However, the question remains, can chatbots provide real value for your business?

What is a chatbot?

A chatbot is a program that simulates a conversation partner over a messaging app. It can integrate with any kind of messaging client, such as Facebook, WeChat, WhatsApp, Slack, Skype, or you could even build your own client. If you’ve been following our blog, you may have already seen the chatbot (Atlas) we built as part of our annual hackathon.

Here is an example conversation I had with Atlas recently:

Chatbot Conversations

Chatbots use Natural Language Processing and Machine Learning algorithms to take what the user said and match it up against pre-defined conversations. Understanding how chatbots recognize phrases can help determine what conversations a user could have with a bot. Here is some chatbot terminology:

  • An intent is something the user wants, and the bot maps this to an action. For example, the user might want to say some form of “Hi” to the bot, and we would want the bot to respond with a random greeting. A chatbot generally has up to 2,000 intents.
  • Utterances are examples of different phrases that represent an intent. An intent might have 10-15 utterances. The bot will be able to match statements similar to those utterances to the intent, but what a user says doesn’t have to exactly match an utterance. This is where the language processing algorithms are used.
  • Entities are key variables the bot can parse from the intent.

Suppose we are building an HR chatbot that can help users reset passwords. The goal is for our bot to understand that the user needs a password reset link, and then send the correct link to the user. Our intent could be called Password Reset. Since the user could have accounts for different services, we would need to create an entity called AccountType for our bot to parse from what the user said. AccountType could map to “Gitlab”, “WebCenter”, or “OpenAir”.

As a rough design, we could start with:

  • Intent: Password Reset
  • Utterances:
    • I’d like to reset my password.
    • How do I change my password for Gitlab?
    • I forgot my WebCenter pw, can you help?
    • Please assist me in receiving a new password.
    • Forgot my passcode for OpenAir.
    • Give me another password.
  • Entity: AccountType (Gitlab, WebCenter, OpenAir)

Intents like this one will need to be set up for a bot to know what to do when a user says something. If a user asks the bot a question it doesn’t have an intent for, it won’t know what to do and the user will get frustrated. Our bot still won’t know how to order a pizza, but it could help with password resets.

Key Factor #1: Chatbots should have a purpose

A chatbot can only answer questions it is designed to answer. If I was building an HR Help chatbot, it probably would not be able to order a pizza, rent a car for you, or check the weather. It could, for example, reset passwords, report harassment, set up a new hire, and search for policies. Once the requirements are set, developers can build, design, and test to ensure the bot has those capabilities.

This makes it important to set expectations with users about what types of questions they can ask, without giving them a list of questions. Introducing a bot along with its purpose will help with this. For example, we could have the HR Help Bot, the Travel Planning bot, or the Sales Rep Info bot. If we introduced the Fishbowl Ask-Me-Anything bot, users would start asking it a lot of questions we didn’t plan for it to be able to answer.

Conversations can be more complicated than a simple back and forth, or question and answer. The capability is there (Oracle’s solution gives developers full control over a Conversational State Machine), but I have yet to explore it fully.

Once a purpose and a set of intents are identified, a chatbot could be a useful tool to engage customers or employees.

Key Factor #2: Design Architecture

Bots are great for interacting with different services. Oracle Intelligent Bot Service is designed to make it easy for developers to make REST API calls and database lookups in between parsing what the user says and returning a response.

Here are a few things to think about when designing a bot’s architecture:

  • Integrations: What services will the bot interact with?
  • Security: Are users typing their bank account number over Facebook chat?
  • Human interaction: How will the bot flip users over to a human to help when they get frustrated?
  • Infrastructure: What will be on premise and what will be in the cloud?
  • Performance: How to minimize network requests?

Key Factor #3: Analytics

Analytics can be used to improve the bot’s capability over time and understand the impact on the company. Some companies may already have metrics around help desk call volume or customer conversion rates, and it would be interesting to compare that data from before and after a bot’s release.

Beyond that, bot analytics will be able to show the performance of the bot. Analytics could show the top questions a bot is asked but can’t answer, how many questions it answers successfully each day, and what questions it mistook for something else. Oracle’s chatbot solution will have some capabilities built in, and the platform is so flexible it will be possible to gather any data about a bot.

Key Factor #4: Bot Building Best Practices

There is a lot to do when it comes to building the bot: setting up the infrastructure, connecting all the services, and filling out all the utterances. There are some best practices to keep in mind as well.

The bot should sound like a human. Personality can play a big role in giving users a better interaction.

As users become more familiar with chatbots, there will also be a set of questions they expect every bot to be able to answer. This list might start with:

  • Hi.
  • What do you do?
  • Are you human?
  • Help!
  • Tell me a joke.
  • How are you?

When the bot is going to run a query or API that may take a while, it is important to warn the user in advance and echo that the bot understood what the user wanted. Some apps will also support “is typing” statuses, which is another great way to show the bot is thinking.

Key Factor #5: Testing

Users have high expectations for the intelligence level of a chatbot. They expect the Machine Learning algorithms to work well, and the bot to seem smart. If the bot doesn’t meet their expectations on the first try, they are unlikely to use the bot in the future.

Testing and tuning utterances can make the difference in making a bot seem smart. The bot should be able to accurately map what a user says to the correct intent. Oracle’s chatbot solution has some nice testing capabilities around utterances and intents, and making sure what the user says maps correctly.

A chatbot is still another piece of software, so it is important to do performance and user testing on it as well.

Conclusion

Chatbots are a great way to tie a single user interface into a large variety of services, or to automate repetitive conversations. There are plenty of business use cases that would benefit from a chatbot, but the ROI depends on thorough requirements gathering and using analytics to optimize the bot. That being said, companies that have already started down the path – like this Accounting Firm in Minneapolis – are seeing benefits from bots automating manual processes, reducing operating costs by 25 to 40%. Savings like this will vary across use case and industry, but overall the automation gains from a bot are there regardless of what the bot is being used for. We would love to discuss your ideas on how a chatbot could help your business. Leave a comment or contact us with any questions.

The post A Sneak Peek at Oracle’s Chatbot Cloud Service and 5 Key Factors Necessary for Bot ROI appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Rownum

Tom Kyte - Thu, 2017-05-18 14:06
Hi Tom I have 2 questions 1. I have a table called a where I have more than one row. When I put a query "SELECT * FROM A WHERE ROWNUM=1" it gives me the first row. But if I put a query specifying any number other than 1, e.g. "SELECT * FRO...
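
A quick sketch (not part of the AskTom answer) of the behaviour the question describes: ROWNUM is assigned as candidate rows are returned, so a predicate such as ROWNUM = 2 can never be satisfied; to pick a specific row you order the data and alias ROWNUM in an inline view. The column some_column below is a placeholder:

-- works: the first candidate row is assigned rownum 1 and passes the filter
select * from a where rownum = 1;

-- returns no rows: the first candidate gets rownum 1, fails "rownum = 2",
-- is discarded, and the next candidate is again assigned rownum 1
select * from a where rownum = 2;

-- to get "the second row" you need an order and an aliased rownum
select *
from  (
	select a.*, rownum rn
	from  (select * from a order by some_column) a
      )
where rn = 2;
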
Categories: DBA Blogs

Making a generic search sql Query

Tom Kyte - Thu, 2017-05-18 14:06
Hi Tom, I have a web application connecting to a Oracle DB. I have a search screen where user can enter any search criterion and submit the information. The information is collected and passed to an Oracle Stored Proc. where a dynamic query is f...
Categories: DBA Blogs

how to create column alias name from the value of another column

Tom Kyte - Thu, 2017-05-18 14:06
Hi Tom, Table structure: empno empname sal balance i had select stmt which will accept date(18-May-2017) as parameter and display the data for date given but column name for balance should be balance_0518 could you please help me in the q...
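
A hedged sketch (not the AskTom answer) of the usual workaround: an ordinary SQL statement cannot take a column alias from data, so the statement is built dynamically and opened as a ref cursor. The table name emp_table and the date are placeholders:

variable rc refcursor

declare
	l_date	date := to_date('18-May-2017', 'DD-Mon-YYYY');
begin
	-- alias becomes balance_0518 for 18-May-2017
	open :rc for
		'select empno, empname, sal, balance as balance_' ||
		to_char(l_date, 'MMDD') ||
		' from emp_table';
end;
/

print rc
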
Categories: DBA Blogs

Oracle JET Hybrid - NavDrawer Template Menu/Header Structure

Andrejus Baranovski - Thu, 2017-05-18 13:31
Oracle JET provides a NavDrawer template for Web and for Hybrid. Read how to create a JET Hybrid application based on the template - Create a Hybrid Mobile Application. There is a significant difference in the NavDrawer template implementation when we compare the Web and Hybrid applications.

The Hybrid template draws the menu structure on top of the form, while the Web template pushes the form to the right when the menu is opened. The Web approach works fine in a browser, but in a hybrid app you would see a significant UI lag each time a menu item is selected. That is probably why the hybrid NavDrawer template draws the menu on top of the form - visually this gives better performance when switching between menu items. The menu is rendered on top of the form in the JET Hybrid NavDrawer template:


The form loads instantly when a menu item is selected. The header in the JET Hybrid NavDrawer template stays fixed and doesn't scroll, which makes it a good place for common actions:


The NavDrawer template in a JET Web application moves the form to the right when the menu is opened - that's the main visual difference compared with the NavDrawer Hybrid template:


The index page of the NavDrawer hybrid template is almost identical to the Web NavDrawer, except that it doesn't contain the header part. The header is implemented as a separate module:


I have customized the default header implementation with additional items - a logo and user preferences:


The header module is constructed in appController; this is how it is generated by default. If we want access to variables/functions from appController in the header, we need to create a mapping:


Every module must include a div with the fixed-top JET CSS class (that's why it doesn't scroll and stays on top), into which you copy the header code:


The header is bound to a module defined by the headerConfig variable (which must be present in each module) and initialized in appController:


That's all about the menu/header implementation.

Let's learn how to push an update to Google Play. Make sure to increase the application version in the Cordova config.xml file:


Go to Google Play and upload the new APK; it will be parsed and Google Play will automatically deactivate the previous version:


You can initiate roll-out to production:


This will push the new release to Google Play. Users will be automatically notified about the new version:


Version 2 of our JET Hybrid app is available on Google Play:

New OA Framework 12.2.4 Update 16 Now Available

Steven Chan - Thu, 2017-05-18 10:30
Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.4 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.4 users should apply this patch.  Future OAF patches for EBS Release 12.2.4 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.4 bundle patches.

This latest bundle patch includes fixes for the following bugs/issues:

  • Row validation exception when multiple attachments are added.
  • The Fieldset legend label element of advanced search panel radio buttons group is empty. 


Categories: APPS Blogs

Datascape Podcast Episode 9 – What’s Up with Oracle These Days?

Pythian Group - Thu, 2017-05-18 09:06

On today’s episode of the Datascape Podcast we are going to talk about the Oracle Database. With the hype around cloud and cloud first initiatives, automation, DevOps, and very specialized cloud-based databases, I wondered what was going on with one of the datascape’s old faithful technologies, Oracle Database.

Joining us is Simon Pane, who is an Oracle consultant and expert. Simon is very much immersed in the community and truly has his feet on the ground. He’s been working in the Oracle DBA space as a consultant for almost his entire career, with a focus around Oracle’s core database technologies. Simon often speaks at various conferences around North America and Europe. He was recently inducted as a board member to Independent Oracle User Group (IOUG), one of the main Oracle user groups.

You can find it on iTunes, Google Play, Sound Cloud, and just about anywhere that shares podcasts.

Key Points From This Episode:

  • Learn more about which types of companies and organizations use the Oracle Database.
  • Understand the different barriers of adoption for using the Oracle Database.
  • Simon shares more about the average size implementation that they see at Oracle.
  • Discover why the main use case for Oracle XE is for teaching and training purposes.
  • Learn more about the current version of Oracle, and the new exciting features.
  • Understand why it is complicated to upgrade an Oracle database, and preparation needed.
  • Simon shares the top features and benefits of the Oracle database.
  • Find out what the ODA and Exadata Appliances are, and the differences between them.
  • Learn more about the Oracle Cloud and how they are working to become a major player.
  • Simon shares his thoughts on Oracle as a product, and whether or not it is dying/dead.
  • Hear about the project that Simon is most proud of, the best tool he uses, and much more!

Links Mentioned in Today’s Episode:

Oracle Database
Oracle Database Appliance
Oracle Exadata
DBA Course
Simon on Twitter
More about Simon
ODTUG
ODTUG Kscope17 Conference

Categories: DBA Blogs

Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises

Amis Blog - Thu, 2017-05-18 04:38

One of the challenges with programming in JavaScript (ECMA Script) in general, and Node.JS in particular, is having to deal with asynchronous operations. Whenever a call is made to a function that will handle the request asynchronously, care has to be taken to be prepared to receive the result from the function in an asynchronous fashion. Additionally, we have to ensure that the program flow does not continue prematurely – only those steps that can be performed without the result from the function call can proceed. Orchestrating multiple asynchronous calls – some of them sequential or chained and others possibly in parallel – and gathering the results from those calls in the proper way is not trivial.

Traditionally, we used callback functions to program the asynchronous interaction: the caller passed a reference to a function to the asynchronous operation and when done with the asynchronous operation, the called function would invoke this callback function to hand it the outcome. The call(ed)back function would then take over and continue flow of the program. A simple example of a callback function is seen whenever an action is scheduled for execution using setTimeout():

setTimeout(function () {
  console.log("Now I am doing my thing ");
}, 1000);

or perhaps more explicitly:

function cb() {
  console.log("Now I am doing my thing ");
}

setTimeout(cb, 1000);
Chain of Asynchronous Actions

With multiple mutually dependent (chained) calls, using callback functions results in nested program logic that quickly becomes hard to read, debug and maintain. An example is shown here:


 

Function readElementFromJsonFile does what its name says: it reads the value of a specific element from the file specified in the input parameter. It does so asynchronously and it will call the callback function to return the result when it has been obtained. Using this function, we are after the final value. Starting with file step1.json, we read the value of the nextfile element, which indicates the next file to read, in this case step2.json. This file in turn indicates that nextStep.json should be inspected and so on. Clearly we have a case of a chain of asynchronous actions where each action’s output provides the input for the next action.

In classic callback oriented JavaScript, the code for the chain of calls looks like this – the nested structure we have come to expect from using callback functions to handle asynchronous situations:

// the classic approach with nested callbacks
var fs = require('fs');
var step1 = "/step1.json";

function readElementFromJsonFile(fileName, elementToRead, cb) {
    var elementToRetrieve = 'nextfile';
    if (elementToRead) {
        elementToRetrieve = elementToRead;
    }
    console.log('file to read from ' + fileName);
    fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
        var element = "";
        if (err) return cb(err);
        try {
            element = JSON.parse(data)[elementToRetrieve];
        } catch (e) {
            return cb(e);
        }
        console.log('value of element read = ' + element);
        cb(null, element);
    });
}//readElementFromJsonFile

readElementFromJsonFile(step1, null, function (err, data) {
    if (err) return err;
    readElementFromJsonFile(data, null, function (err, data) {
        if (err) return err;
        readElementFromJsonFile(data, null, function (err, data) {
            if (err) return err;
            readElementFromJsonFile(data, null, function (err, data) {
                if (err) return err;
                readElementFromJsonFile(data, 'actualValue', function (err, data) {
                    if (err) return err;
                    console.log("Final value = " + data);
                });
            });
        });
    });
});

The arrival of the Promise in ES6 – a native language mechanism that is therefore available in recent versions of Node.JS – makes things a little different and more organized, readable and maintainable. The function readElementFromJsonFile() will now return a Promise – a placeholder for the eventual result of the asynchronous operation. Even though the result will be provided through the Promise object at a later moment, we can program as if the Promise represents that result right now – and we can anticipate in our code what to do when the function delivers on its Promise (by calling the built-in function resolve inside the Promise).

The result of the resolution of a Promise is a value – in the case of function readElementFromJsonFile it is the value read from the file. The then() operation that is executed when the Promise is resolved calls the function that it was given as a parameter, passing the result (resolution outcome) of the Promise as input to that function. In the code sample below we see how readElementFromJsonFile(parameters).then(readElementFromJsonFile) is used. This means: when the Promise returned from the first call to the function is resolved, call the function again, this time using the outcome of the first call as input to the second call. With the fourth then this is a little more explicit: in the final call to readElementFromJsonFile we need to pass not just the outcome of the previous call as an input parameter but also the name of the element to read from the file. Therefore we use an anonymous function that takes the resolution result as input and makes the call with the additional parameter. Something similar happens with the final then – where the result from the previous call is simply printed to the output.

The code for our example of subsequently and asynchronously reading the files becomes:

var fs = require('fs');
var step1 = "step1.json";

function readElementFromJsonFile(fileName, elementToRead) {
    return new Promise((resolve, reject) => {
        var elementToRetrieve = 'nextfile';
        if (elementToRead) {
            elementToRetrieve = elementToRead;
        }
        console.log('file to read from ' + fileName);
        fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
            var element = "";
            if (err) return reject(err);
            try {
                element = JSON.parse(data)[elementToRetrieve];
            } catch (e) {
                reject(e);
            }
            console.log('element read = ' + element);
            resolve(element);
        });
    })// promise
}

readElementFromJsonFile(step1)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(function (filename) { return readElementFromJsonFile(filename, 'actualValue') })
    .then(function (value) { console.log('Value read after processing five files = ' + value); })
Scheduled Actions as Promise or how to Promisify setTimeout

The setTimeout() built-in expects a callback function. It does not currently return a Promise. Something like:

setTimeout(1000).then(myFunc)

would be nice but does not exist.

This entry on Stackoverflow has a nice solution for working with setTimeout Promise style:

function delay(t) {
   return new Promise(function(resolve) { 
       setTimeout(resolve, t)
   });
}

function myFunc() {
    console.log('At last I can work my magic!');
}

delay(1000).then(myFunc);

The post Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises appeared first on AMIS Oracle and Java Blog.

Write Better Code Faster on Oracle Systems (SPARC,x86) with Oracle Developer Studio 12.6

Oracle Developer Studio is the #1 development environment for building C, C++, Fortran and Java applications for Oracle Solaris and Linux operating systems and optimized for Oracle Systems...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Elasticsearch Versions Supported with PeopleSoft

PeopleSoft Technology Blog - Thu, 2017-05-18 02:00

PeopleSoft integration with Elasticsearch depends on the version of Elasticsearch that is distributed with PeopleTools.  PeopleSoft also provides PeopleSoft-specific customizations.  PeopleSoft currently provides a version of Elasticsearch 2.3.2 with PeopleSoft products.  This version of Elasticsearch is distributed to customers through our Deployment Packages (DPKs).  Customers have noted that Elastic, the company that provides Elasticsearch, has announced that version 2.3.2 will reach end of life on September 30, 2017.  It is important for customers to know that Oracle/PeopleSoft will continue to support that version of Elasticsearch, which we provide, beyond that date.   Customers will receive any fixes through our DPKs, and they should not attempt to get support or updates from Elastic.

In addition, we are currently working on certifying the latest version of Elasticsearch, and when that is complete, it will be available in our Elasticsearch Deployment Packages.  Our strategy is to support future versions of Elasticsearch when they are available after testing and certification.

Welcome to Our New Blog Platform

PeopleSoft Technology Blog - Thu, 2017-05-18 02:00

It took a bit longer than anticipated, but we are up and running on the new blog platform.  We hope you like the new format.  There are some good features, one of the most important being the responsiveness of the UI.  You can now read and interact with this blog across form factors, including phones.

We have a few kinks to work out and a few more things to get configured, but you will start seeing fresh posts shortly.  For those of you that have posted comments on the old platform, some of those MAY NOT have made it over in the migration.  We will try to get them carried over and address them, but if you don't see a response to one of your comments soon, please re-post it. Note that the URL is the same.  Our social media feeds are temporarily changed, but we are working on getting those updated to PeopleSoft-specific targets.  In the meantime, please visit peoplesoftinfo.com for lots of content, including links to our social sites.

Multiple Policy Support (4.3.0.4.0)

Anthony Shorten - Wed, 2017-05-17 23:28

One of the features of the latest Oracle Utilities Application Framework (V4.3.0.4.0) is the support for multiple WS-Policy compliant policies on Inbound Web Services. There are a number of ways to achieve this:

  • Annotations - It is now possible to specify multiple inline policies (standard ones and custom ones) with order of precedence also supported via a Sequence. It is also now possible to delegate security within Annotations to Oracle Web Services Manager. This means it is now possible to mix inline with external policies. For example:

Multiple Policies as Annotations

  • Oracle WebLogic - It is possible to attach the policies supported by Oracle WebLogic to the individually deployed Web Services on the container level. This supports multiple policies (order of precedence is designated by the order they appear in the Web Service) on the individual Web Service.
  • Oracle Web Services Manager - It is possible to attach additional policies using the container (Web Services Manager includes the Oracle WebLogic supported policies, additional advanced policies and access controls) and like Oracle WebLogic, the order of precedence for multiple policies is the order they are attached to the individual Web Service. For example:

OWSM Policy Example

Now, why have multiple policies in the first place? Well, you do not have to use multiple policies, but there are a few use cases where it makes sense:

  • Some WS-Policies are for transport security and some are for message security only. Using a combination allows you to specify both using different policies. I should point out that most WS-Policies contain a transport and message combination so it reduces the need for multiple policies in the container.
  • You can create WS-Policy compliant custom policies, as long as they are supported by Oracle WebLogic or Oracle Web Services Manager, and those can have separate transport or message security definitions.
  • You should reuse web services as much as possible. You can choose not to expose the WS-Policy in your service but then use different policies for different interface systems. This might sound illogical but you may have different levels of security depending on the source of the call. In this case you would tell your sources the different policies they must adhere to.

Multiple policies are an optional feature but can be used to support a wide range of different interface styles.

For Create Serial Number By Employee No. And Year

Tom Kyte - Wed, 2017-05-17 19:46
I Will Create Serial Number by Employee Number Concatenation Year YYYY,For Every Employee to When Create New Transaction Auto Generating Serial Number by Employee number and When Termination Year and begin new year 01-01-YYYY Create new Serial F...
Categories: DBA Blogs

Cannot insert null values to a column defined not null with default constraint

Tom Kyte - Wed, 2017-05-17 19:46
Hi Tom, I'm trying to insert null values to a column defined not null with a default constraint, but getting error ORA-01400 Cannot insert null values. Below is the table I created. create table tmp_test( column1 varchar2(10) default 1 not...
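
For reference, a hedged sketch (not from the original thread) of the behaviour being described: a plain DEFAULT is only applied when the column is omitted from the insert, whereas the 12c DEFAULT ON NULL clause also substitutes the default for an explicitly supplied NULL. Table names are hypothetical re-creations of the question:

create table tmp_test (
	column1 varchar2(10) default '1' not null
);

insert into tmp_test (column1) values (null);	-- ORA-01400: the default is not used
insert into tmp_test values (default);		-- OK: default applied

-- 12c alternative
create table tmp_test2 (
	column1 varchar2(10) default on null '1' not null
);

insert into tmp_test2 (column1) values (null);	-- OK: column1 = '1'
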
Categories: DBA Blogs

INSERT/DELETE on PARENT-CHILD-SUB CHILD table dependency

Tom Kyte - Wed, 2017-05-17 19:46
<code>Hi Chirs/Connor, I have written below procedure to move records from tables (from Primary DB to Arcihval DB). The logic is to merge records on Archival DB then DELETE those rows from Primary as well. We have already have same table struc...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator