
Feed aggregator

SQL Server 2014: First Service Pack (SP1) is available

Yann Neuhaus - Tue, 2015-05-19 01:48

On May 14th, Microsoft released the first Service Pack (SP1) for SQL Server 2014, more than thirteen months after the RTM version.
SQL Server 2014 Service Pack 1 includes all Cumulative Updates from CU1 to CU5.

Which issues are fixed in this SP1?

There are 29 hotfixes:

  • 19 for the Engine
  • 6 for SSRS
  • 3 for SSAS
  • 1 for SSIS

 

[Image: pie chart of the SP1 hotfixes per component]

Some improvements are:

  • Performance improvement of Column store with batch mode operators and a new Extended Event
  • Buffer pool extension improvement
  • New cardinality estimator to boost query performance

History of SQL Server 2014 builds

The build version of SQL Server 2014 SP1 is 12.0.4100.1.
Here is a quick overview of the SQL Server 2014 builds since CTP1:

Date            SQL Server 2014 version                  Build
June 2013       Community Technology Preview 1 (CTP1)    11.00.9120
October 2013    Community Technology Preview 2 (CTP2)    12.00.1524
April 2014      RTM                                      12.00.2000
April 2014      Cumulative Update 1 (CU1)                12.00.2342
June 2014       Cumulative Update 2 (CU2)                12.00.2370
August 2014     Cumulative Update 3 (CU3)                12.00.2402
October 2014    Cumulative Update 4 (CU4)                12.00.2430
December 2014   Cumulative Update 5 (CU5)                12.00.2456
May 2015        Service Pack 1 (SP1)                     12.00.4100
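
After installing the Service Pack, a quick way to verify which build an instance is actually running is a query along these lines (standard SERVERPROPERTY calls; the column aliases are mine):

SELECT SERVERPROPERTY('ProductVersion') AS product_version, -- 12.0.4100.1 once SP1 is applied
       SERVERPROPERTY('ProductLevel')   AS product_level,   -- RTM, SP1, ...
       SERVERPROPERTY('Edition')        AS edition;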

If you need more information about SQL Server 2014 SP1 or to download it, click here.

As a reminder, Service Packs are critical from a bug-fixing and product-upgrade point of view, so take care to install this one quickly ;-)
See you.

Change first day of week in APEX 5.0 Calendar

Dimitri Gielis - Tue, 2015-05-19 00:52
APEX 5.0 comes with a new calendar region, which is way nicer than the previous calendar in APEX. It has more features, looks better and is also responsive. Behind the scenes you'll see the calendar region is based on Full Calendar.


In Belgium we use Monday as first day of the week, whereas in the US they seem to use Sunday as start of the week in the calendar overview. I've been integrating Full Calendar before, so I knew that library had an option to set the first day of the week. You could either specify an option called firstDay and set that to 1, or you could change the language, and depending on the language it would adjust the start day of the week.

In APEX 5.0 I looked for that option, but there's no specific attribute to set the first day of the week; instead it's based on the language of your application. If you go to Shared Components > Globalization Attributes, by default it's set to en, which has Sunday as the start of the week. If you set it to en-gb, it will have Monday as the start of the week.


I searched for some time to find out how to do it, so hopefully this post will help others find it more easily. Thanks to Patrick for sharing the way it was implemented.

Categories: Development

SQL Saturday Lisbon: from Francesinha to Bacalhau

Yann Neuhaus - Mon, 2015-05-18 23:45

Last weekend, I was at SQL Saturday #369, held in Lisbon. If you take a look at the agenda, you'll probably see that there were a lot of interesting sessions with a lot of famous speakers. Unfortunately, I was not able to attend all the sessions, so I decided to focus only on those that have a direct correlation with my work.

First, 2 "headache" sessions given by Paul White (aka @SQL_Kiwi) about the query optimizer and some internal stuffs. The QO is definitely a very interesting topic and I'm always willing to discover more and more with guys like Paul to improve my skills.

Then, 2 sessions about In-Memory features in SQL Server 2016. In fact, I'm already aware of some potential new features of the next SQL Server version, but attending a session given by Niko Neugebauer about columnstore and discussing upcoming features always adds a certain value for sure. Thanks Niko and Murilo Miranda for your sessions!

Finally, another "headache" session to finish the day, about batch mode and CPU architectures, given by Chris Adkin. We got a very deep-dive explanation of batch mode and how it improves performance with CPU savings.

 

Moreover, it was also the opportunity to meet some of my SQL Server MVP friends like Jean-Pierre Riehl and Florian Eiden ...

 

[Image: the French team at SQL Saturday #369]

 

... and have a good dinner with the SQL Saturday staff and other speakers. A lot of countries represented here: Portugal, Germany, UK, New Zealand, France and probably others.

 

[Image: dinner with the SQL Saturday staff and speakers]

 

A beautiful city, a good weather, a lot of very good speakers and a very good staff ... maybe the secret sauce of a successful SQL Server event!

I'm pretty sure it will be the same at the next SQL Saturday in Paris, and I will be there (maybe as a speaker this time).

Indexing and Transparent Data Encryption Part I (The Secret Life of Arabia)

Richard Foote - Mon, 2015-05-18 23:42
Database security has been a really hot topic recently so I thought I might write a few posts in relation to indexing and Transparent Data Encryption (TDE) which is available as part of the Oracle Advanced Security option. To protect the database from unauthorized "back-door" accesses, the data within the actual database files can be encrypted. […]
Categories: DBA Blogs

First Impression for Evodesk Desktop Unboxing

Michael Dinh - Mon, 2015-05-18 18:53

Disclaimer: I am not being paid by anyone to write a positive or a negative review.

Opinions are my own based on my limited engineering background.

First, the packaging is somewhat poor and could be much better for a desk costing close to $1,000 ($886 for my configuration).

Tape coming off.

[Image: packaging tape coming off]

I hope my desktop is okay.

[Image: damaged packaging]

Taking a look inside. Is that a tiny scratch I see?

[Image: looking inside the package]

After opening the desktop, this is the torn location – not enough foam.

[Image: torn foam inside the package]

Look at how much love I give it.

The desktop should be shipped in bubble wrap to prevent damage and scratches.

Cable Pass Through is way too small for 30” x 72”.

[Image: cable pass through]

Most standing desks I was looking at are 1 inch thick.

By no means is this best in class, as the Evodesk desktop is 3/4 inch thin.

You won't find this information anywhere in the Evodesk technical specifications.

http://www.evodesk.com/media/desktop-diagrams.pdf

[Image: desktop edge thickness]

This is the programmer controller.

The ziplock bag was opened – was this a return that got repackaged?

[Image: programmer controller]

My picture does not look as good as Evodesk's – http://www.evodesk.com/standing-desks#posi-loc

I do like the Posi-Loc, and it was the final selling point.

Hope this is secure and does not spin.

[Image: Posi-Loc]

Update:

It looks like Evodesk has updated the information for the desktop. Either that, or I was blind as a bat the first go-round.

Renew™ Desktops
  • 100% reclaimed/recycled wood composite desktop
  • EvoGuard™ durable & stylish non-VOC seamless coating
  • Soft comfort edges eliminate nerve compression and pressure fatigue
  • Corners are slightly rounded for improved safety and style
  • Oversized 3” x 6” Cable Pass Through
  • Pre-drilled for quick and easy setup
  • Available sizes: 48″ (30” x 48″ x .75”), 60″ (30” x 60” x .75”), 72″ (30” x 72” x .75”)
  • Meets California Air Resources Board’s (CARB 2) stringent emission standard
  • Backed by a no-nonsense 2-year limited warranty

Migrated rows

Jonathan Lewis - Mon, 2015-05-18 11:43

I received an email recently describing a problem with a query which was running a full tablescan but: “almost all the waits are on ‘db file sequential read’ and the disk read is 10 times the table blocks”.  Some further information supplied was that the tablespace was using ASSM and 16KB block size; the table had 272 columns (ouch!) and the Oracle version was 11.2.0.4.

In his researches he had read my article on wide rows, and had picked out of one of the comments the line: “the very bad thing about chained rows and direct reads that is that finding the rest of row by ‘db file sequential read’ is never cached”, but he wasn’t sure that this was the problem he was seeing so, very sensibly, he had re-run the query with extended tracing available, and dumped (and formatted/edited) a couple of blocks from the table.

He then sent me the trace file and block dump. Generally this is a mistake – especially when the trace file is several megabytes – but he had prepared the ground well and had linked it back to one of my blog notes, and I thought there might be an opportunity for publishing a few more comments, so I took a look. Here’s a carefully edited subset of the block dump – showing all the pertinent information:


Start dump data blocks tsn: 99 file#:100 minblk 2513181 maxblk 2513181

Block header dump:  0x1926591d
 Object id on Block? Y
 seg/obj: 0x1652a7  csc: 0x53.891880b8  itc: 12  flg: E  typ: 1 - DATA
     brn: 1  bdba: 0x1965b70c ver: 0x01 opc: 0
     inc: 84  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0010.01d.0000dff9  0x2b442286.3469.09  C---    0  scn 0x0053.891880b1
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x04   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x05   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x06   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x07   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x08   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x09   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x0a   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x0b   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
0x0c   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.00000000
bdba: 0x1926591d

data_block_dump,data header at 0x11083f154
===============
tsiz: 0x1ea8
hsiz: 0x26
pbl: 0x11083f154
     76543210
flag=--------
ntab=1
nrow=10
frre=-1
fsbo=0x26
fseo=0x4c5
avsp=0x49f
tosp=0x49f
0xe:pti[0]	nrow=10	offs=0
0x12:pri[0]	offs=0x1c15
0x14:pri[1]	offs=0x197b
0x16:pri[2]	offs=0x16e1
0x18:pri[3]	offs=0x1448
0x1a:pri[4]	offs=0x11b8
0x1c:pri[5]	offs=0xf1f
0x1e:pri[6]	offs=0xc85
0x20:pri[7]	offs=0x9ec
0x22:pri[8]	offs=0x752
0x24:pri[9]	offs=0x4c5
block_row_dump:
tab 0, row 0, @0x1c15
tl: 659 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 1, @0x197b
tl: 666 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 2, @0x16e1
tl: 666 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 3, @0x1448
tl: 665 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 4, @0x11b8
tl: 656 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 5, @0xf1f
tl: 665 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 7, @0x9ec
tl: 665 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 8, @0x752
tl: 666 fb: -----L-- lb: 0x0  cc: 255
tab 0, row 9, @0x4c5
tl: 653 fb: -----L-- lb: 0x0  cc: 255

In the ITL you can see 10 entries with the flag set to “C—-” (committed) with no XID or SCN – that’s consistent with 10 rows migrating into the block in a single transaction. In the row directory you can see the block holds 10 rows, and in the body of the block you can see the header for each of those 10 rows with 255 columns (presumably the 2nd section of each row of 272 columns), and the flag bytes set to “—–L–” (the Last piece of a chained – as opposed to simply migrated – row).

So the block dump is consistent with the possibility of a direct path read of a block somewhere (holding 10 head pieces) being followed by 10 reads of this block shortly afterwards. Can we find further corroboration in the trace file? The block dump was for block 0x1926591d = 421943581 decimal.
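
(As a side note, the hex data block address can be decoded back into a relative file number and block number with DBMS_UTILITY; a quick sketch using the address above:)

select dbms_utility.data_block_address_file(to_number('1926591d','xxxxxxxx'))  rfile#,
       dbms_utility.data_block_address_block(to_number('1926591d','xxxxxxxx')) block#
from   dual;

-- RFILE#     BLOCK#
-- ------ ----------
--    100    2513181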


PARSE #4573135368:c=29,e=48,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4116693033,tim=79008343283418
EXEC #4573135368:c=53,e=93,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4116693033,tim=79008343283607
WAIT #4573135368: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=15477650 tim=79008343283636
WAIT #4573135368: nam='Disk file operations I/O' ela= 38 FileOperation=2 fileno=101 filetype=2 obj#=1462951 tim=79008343283973
WAIT #4573135368: nam='direct path read' ela= 8991 file number=100 first dba=947580 block cnt=13 obj#=1462951 tim=79008343293041

WAIT #4573135368: nam='db file sequential read' ela= 4934 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298032
WAIT #4573135368: nam='db file sequential read' ela= 155 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298216
WAIT #4573135368: nam='db file sequential read' ela= 127 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298378
WAIT #4573135368: nam='db file sequential read' ela= 125 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298526
WAIT #4573135368: nam='db file sequential read' ela= 128 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298677
WAIT #4573135368: nam='db file sequential read' ela= 123 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298826
WAIT #4573135368: nam='db file sequential read' ela= 134 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343298983
WAIT #4573135368: nam='db file sequential read' ela= 129 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343299135
WAIT #4573135368: nam='db file sequential read' ela= 180 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343299341
WAIT #4573135368: nam='db file sequential read' ela= 133 file#=100 block#=2513181 blocks=1 obj#=1462951 tim=79008343299497

WAIT #4573135368: nam='db file sequential read' ela= 11039 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343310565
WAIT #4573135368: nam='db file sequential read' ela= 133 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343310730
WAIT #4573135368: nam='db file sequential read' ela= 139 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343310895
WAIT #4573135368: nam='db file sequential read' ela= 124 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311045
WAIT #4573135368: nam='db file sequential read' ela= 122 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311190
WAIT #4573135368: nam='db file sequential read' ela= 127 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311339
WAIT #4573135368: nam='db file sequential read' ela= 125 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311490
WAIT #4573135368: nam='db file sequential read' ela= 134 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311647
WAIT #4573135368: nam='db file sequential read' ela= 128 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311797
WAIT #4573135368: nam='db file sequential read' ela= 124 file#=100 block#=2513245 blocks=1 obj#=1462951 tim=79008343311947

WAIT #4573135368: nam='db file sequential read' ela= 10592 file#=100 block#=2513309 blocks=1 obj#=1462951 tim=79008343322564
WAIT #4573135368: nam='db file sequential read' ela= 142 file#=100 block#=2513309 blocks=1 obj#=1462951 tim=79008343322740
WAIT #4573135368: nam='db file sequential read' ela= 126 file#=100 block#=2513309 blocks=1 obj#=1462951 tim=79008343322889

There are a couple of interesting details in this trace file.

First we note (as the OP said) there are very few direct path reads – but direct path reads can be asynchronous with several running concurrently, which means that we may report one direct path read while the data returned from others records no time. (You’ll have to take my word for the sparseness of direct path reads – there were 5 reading a total of 58 blocks from the object, compared to 50,000 db file sequential reads)

Then you can see that although each block that was subject to "db file sequential read" is reported 10 times, the first read is much slower than the subsequent ones – a fairly good indication that the later reads are coming from a cache somewhere. (The 50,000 reads consisted of roughly 5,300 blocks being read 10 times, 1,400 blocks being read 9 times, 460 blocks being read 8 times, and a few blocks being read 7 or fewer times.)

You might also notice the "coincidental" jump of 64 blocks between the sets of 10 reads – this appears fairly frequently, and it's the type of pattern you might expect to see when a serial process is allocating blocks for use in a clean ASSM tablespace after the extent sizes have become fairly large (possibly the 64MB size that eventually appears with system-managed extent sizes). There's a "pseudo-random" choice of block within the extent dictated by the process id, which spreads the work done by a single process steadily through the extent. Having filled 2513181, 2513245, 2513309 and so on for 16 steps, the trace file comes back to 2513182, 2513246, 2513309 and so on.

It’s interesting (and time-consuming) to check the patterns but what we really need next, and don’t have, to check the theory is the set of 13 blocks dictated by the first direct path read:

WAIT #4573135368: nam='direct path read' ela= 8991 file number=100 first dba=947580 block cnt=13 obj#=1462951 tim=79008343293041

It’s likely that somewhere in the 13 blocks in the range 947580 onwards we would find the 10 row head pieces pointing to block 2513181; then the 10 row head pieces pointing to block 2513245, and so on – and I’d hope that we might see a pattern of many consecutive (or near-consecutive) rows in each originating block pointing to the same “next block”. In fact, with a few blocks in the early range, we might even get some idea of how the application was loading and updating data and be able to make some suggestions for changing the strategy to avoid row chaining.
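
(For reference, if the OP could get onto the system, a symbolic dump of that range would be something along these lines, run from a suitably privileged session; the file number and block range come from the direct path read wait above:)

alter system dump datafile 100 block min 947580 block max 947592;

Each head piece in the resulting trace should carry an nrid pointing forward to one of the blocks that subsequently showed up in the 'db file sequential read' waits.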

Footnote

The OP also had a follow-up question which was: “One question for the block dump is why there is no hrid in it since the row pieces are the second row pieces and the flag bit is ‘—–L–‘?”  It would be nice to see this, of course – then we wouldn’t need to see the 947580-947592 range to see what had been happening to the data – but that’s not the way Oracle works, as I’ve pointed out above; but since the answer was in another posting of mine I simply emailed the relevant URL to the OP.


Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2015-05-18 11:17
Contributions by Angela Golla, Infogram Deputy Editor

Profit Magazine for May
The May issue of Profit Magazine is available. It has a special 'Digital Disruption' report that includes 'The 4 Steps to a Successful Big Data Project' and more.   

Can you restore from a full online backup ?

Laurent Schneider - Mon, 2015-05-18 11:07

The question is not HOW TO DO IT but WHETHER YOU CAN DO IT !

A typical backup script would contain something like BACKUP DATABASE PLUS ARCHIVELOG:


backup database format
'/u99/backup/DB01/20150518/full_0_%d_s%s_p%p'
plus archivelog format
'/u99/backup/DB01/20150518/arc_%d_s%s_p%p';

Starting backup at 2015-05-18_18:27:55
current log archived
input archived log thread=1 sequence=469
...
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s86_p1
Finished backup at 2015-05-18_18:27:58

Starting backup at 2015-05-18_18:27:58
input datafile file number=00002 name=
/u02/oradata/DB01/undots01.dbf
...
including current control file in backup set
including current SPFILE in backup set
piece handle=
/u99/backup/DB01/20150518/full_0_DB01_s88_p1
Finished backup at 2015-05-18_18:28:16

Starting backup at 2015-05-18_18:28:16
current log archived
input archived log thread=1 sequence=20
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s89_p1
Finished backup at 2015-05-18_18:28:17

This sounds pretty safe, but what happens if you keep this full backup forever? Do you have everything you need in it to restore?

It depends. Chances are that you can restore. To annoy the paranoiacs, here is a counterexample.

  1. You start your full backup at 6pm.
    It backs up sequences 21,22,23,24
    and then backs up the datafiles…

  2. At 7pm a cronjob issues backup archivelog all delete input;
    backing up and deleting sequences 21,22,23,24,25,26,27,28,29,30

  3. At 8pm your full backup is about to finish,
    backing up sequence 31

Well, where is my sequence 27 ?

Let’s try


rm /tmp/arch_DB01*
startup force mount;
crosscheck backup;restore database;


Starting restore at 2015-05-18_18:47:45
channel ORA_DISK_1: restore complete, elapsed time: 00:02:05
Finished restore at 2015-05-18_18:49:51

Ok, the restore was fine. Now what?


RMAN> recover database;

Starting recover at 2015-05-18_18:50:35
using channel ORA_DISK_1

starting media recovery

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of recover command at 05/18/2015 18:50:36
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 30
RMAN-06025: no backup of archived log for thread 1 with sequence 29
RMAN-06025: no backup of archived log for thread 1 with sequence 27

RMAN> alter database open;

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of alter db command at 05/18/2015 18:51:29
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u02/oradata/DB01/system01.dbf'

You have lost your database. Even if you did a full online backup plus archivelog, it did not contain all the archivelogs and RMAN did not tell you.

There is more than one way to reduce this annoyance:

1) don't back up archivelogs during the full backup
2) don't delete archivelog backups taken during a full backup
3) report any archivelog backups that ran during a full backup, either by looking in the RMAN dictionary or in your in-house logfiles (see the sketch below)
4) do offline backups for long-term archives
5) don't trust RMAN
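
For option 3, a minimal sketch of such a report against the RMAN dictionary (V$RMAN_BACKUP_JOB_DETAILS; the one-day window and the LIKE filters are assumptions to adapt to your own schedule):

select a.session_key, a.input_type, a.status, a.start_time, a.end_time
from   v$rman_backup_job_details a,
       ( select min(start_time) full_start, max(end_time) full_end
         from   v$rman_backup_job_details
         where  input_type like 'DB%'
         and    start_time > sysdate - 1 ) f
where  a.input_type like 'ARCH%'
and    a.start_time between f.full_start and f.full_end;

Any archivelog backup listed here ran while the full backup was in flight, so keep its pieces at least as long as you keep the full backup itself.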

And as well:
Test your backups 😀
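
And one cheap way to do that without actually restoring anything is to ask RMAN what it would use (a sketch; PREVIEW only reads the repository, VALIDATE reads the backup pieces but writes nothing):

restore database preview summary;
restore database validate;

The preview lists the backup sets and archived logs RMAN would ask for, which is a good place to spot a missing sequence before it matters.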

Variations on 1M rows insert (3): using SQL Server In-Memory features

Yann Neuhaus - Mon, 2015-05-18 11:00

Let's continue this series about inserting 1M rows and perform the same test with a new variation, by using the SQL Server In-Memory features. For this blog post, I will still use a minimal configuration that consists of only 1 virtual Hyper-V machine with 1 processor and 512MB of memory. In addition, my storage includes VHDX disks placed on 2 separate SSDs (one Intel SSDC2BW180A3L and one Samsung SSD 840 EVO). No special configuration has been performed on Hyper-V.

Let's begin with the creation script of my DEMO database:

 

CREATE DATABASE [demo]
ON PRIMARY
( NAME = N'demo_data', FILENAME = N'E:\SQLSERVER\demo_data.mdf',
  SIZE = 1048576KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
FILEGROUP [demo_hk_grp] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT
( NAME = N'demo_hk', FILENAME = N'E:\SQLSERVER\HK', MAXSIZE = UNLIMITED )
LOG ON
( NAME = N'demo_log', FILENAME = N'F:\SQLSERVER\demo_hk_log.ldf',
  SIZE = 1395776KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )
GO

ALTER DATABASE [demo] SET COMPATIBILITY_LEVEL = 120
GO

ALTER DATABASE [demo] SET RECOVERY SIMPLE;
GO

 

Next, the creation script for all the user objects, which includes:

  • 2 disk-based tables: DEMO_DB_PK (with a clustered primary key) and DEMO_DB_HP (a heap table)
  • 2 In-Memory optimized tables: DEMO_HK_SCH_DATA (data are persisted) and DEMO_HK_SCH (only the schema is persisted)

 

CREATE TABLE [dbo].[DEMO_DB_PK]
(
       [id] [int] NOT NULL primary key,
       [text] [varchar](15) COLLATE French_CI_AS NULL,
       [number] [int] NULL
)

CREATE TABLE [dbo].[DEMO_DB_HP]
(
       [id] [int] NOT NULL,
       [text] [varchar](15) COLLATE French_CI_AS NULL,
       [number] [int] NULL
)

CREATE TABLE [dbo].[DEMO_HK_SCH_DATA]
(
       [id] [int] NOT NULL,
       [text] [varchar](15) COLLATE French_CI_AS NULL,
       [number] [int] NULL,

       PRIMARY KEY NONCLUSTERED HASH ( [id] ) WITH ( BUCKET_COUNT = 2097152 )
) WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA )
GO

CREATE TABLE [dbo].[DEMO_HK_SCH]
(
       [id] [int] NOT NULL,
       [text] [varchar](15) COLLATE French_CI_AS NULL,
       [number] [int] NULL,

       PRIMARY KEY NONCLUSTERED HASH ( [id] ) WITH ( BUCKET_COUNT = 2097152 )
) WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY )
GO

 

Finally, the last script creates 7 stored procedures in order to test different cases:

  • sp_demo_insert_demo_db_hp: insert 1M rows into a disk-based heap table
  • sp_demo_insert_demo_db_pk: insert 1M rows into a disk-based clustered table
  • sp_demo_insert_demo_hk_sch_data: insert 1M rows into an In-Memory optimized table in INTEROP mode with data persisted on disk
  • sp_demo_insert_demo_hk_sch: insert 1M rows into an In-Memory optimized table in INTEROP mode with only the schema persisted on disk
  • sp_demo_insert_demo_hk_sch_data_cp: insert 1M rows into an In-Memory optimized table in NATIVE mode with data persisted on disk and full durability
  • sp_demo_insert_demo_hk_sch_data_cp_d: insert 1M rows into an In-Memory optimized table in NATIVE mode with data persisted on disk and delayed durability enabled
  • sp_demo_insert_demo_hk_sch_cp: insert 1M rows into an In-Memory optimized table in NATIVE mode with only the schema persisted on disk

Just as a reminder, INTEROP procedures allow using both disk-based and In-Memory optimized tables, whereas NATIVE (natively compiled) procedures don't support disk-based tables. However, using the latter is very interesting from a performance perspective because it drastically improves execution time.

 

CREATE PROCEDURE [dbo].[sp_demo_insert_demo_db_hp]
AS
  SET NOCOUNT ON;

  DECLARE @i INT = 1;

  WHILE @i <= 1000000
  BEGIN
        INSERT INTO dbo.DEMO_DB_HP
        VALUES (@i,
                CASE ROUND(RAND() * 10, 0)
                     WHEN 1 THEN 'Marc'
                     WHEN 2 THEN 'Bill'
                     WHEN 3 THEN 'George'
                     WHEN 4 THEN 'Eliot'
                     WHEN 5 THEN 'Matt'
                     WHEN 6 THEN 'Trey'
                     ELSE 'Tracy'
                END,
                RAND() * 10000);

        SET @i += 1;
  END
GO

 

CREATE PROCEDURE [dbo].[sp_demo_insert_demo_db_pk]
AS
  SET NOCOUNT ON;

  DECLARE @i INT = 1;

  WHILE @i <= 1000000
  BEGIN
        INSERT INTO dbo.DEMO_DB_PK
        VALUES (@i,
                CASE ROUND(RAND() * 10, 0)
                     WHEN 1 THEN 'Marc'
                     WHEN 2 THEN 'Bill'
                     WHEN 3 THEN 'George'
                     WHEN 4 THEN 'Eliot'
                     WHEN 5 THEN 'Matt'
                     WHEN 6 THEN 'Trey'
                     ELSE 'Tracy'
                END,
                RAND() * 10000);

        SET @i += 1;
  END
GO

 

create procedure [dbo].[sp_demo_insert_demo_hk_sch_data]
AS
  SET NOCOUNT ON;

  DECLARE @i INT = 1;

  WHILE @i <= 1000000
  BEGIN
        INSERT INTO dbo.DEMO_HK_SCH_DATA
        VALUES (@i,
                CASE ROUND(RAND() * 10, 0)
                     WHEN 1 THEN 'Marc'
                     WHEN 2 THEN 'Bill'
                     WHEN 3 THEN 'George'
                     WHEN 4 THEN 'Eliot'
                     WHEN 5 THEN 'Matt'
                     WHEN 6 THEN 'Trey'
                     ELSE 'Tracy'
                END,
                RAND() * 10000);

        SET @i += 1;
  END
GO

 

CREATE PROCEDURE [dbo].[sp_demo_insert_demo_hk_sch]
AS
  SET NOCOUNT ON;

  DECLARE @i INT = 1;

  WHILE @i <= 1000000
  BEGIN
        INSERT INTO dbo.DEMO_HK_SCH
        VALUES (@i,
                CASE ROUND(RAND() * 10, 0)
                     WHEN 1 THEN 'Marc'
                     WHEN 2 THEN 'Bill'
                     WHEN 3 THEN 'George'
                     WHEN 4 THEN 'Eliot'
                     WHEN 5 THEN 'Matt'
                     WHEN 6 THEN 'Trey'
                     ELSE 'Tracy'
                END,
                RAND() * 10000);

        SET @i += 1;
  END
GO

 

Note that for the following natively compiled stored procedures, I rewrote one portion of the code, because the CASE expression is not supported in natively compiled modules with SQL Server 2014.

 

create procedure [dbo].[sp_demo_insert_demo_hk_sch_data_cp]
with native_compilation, schemabinding, execute as owner
as
begin atomic with ( transaction isolation level = snapshot, language = N'us_english' )

       DECLARE @i INT = 1;
       DECLARE @test_case INT = RAND() * 10;
       DECLARE @name VARCHAR(20);

       IF @test_case = 1
             SET @name = 'Marc'
       ELSE IF @test_case = 2
             SET @name = 'Bill'
       ELSE IF @test_case = 3
             SET @name = 'George'
       ELSE IF @test_case = 4
             SET @name = 'Eliot'
       ELSE IF @test_case = 5
             SET @name = 'Matt'
       ELSE IF @test_case = 6
             SET @name = 'Trey'
       ELSE
             SET @name = 'Tracy';

       WHILE @i <= 1000000
       BEGIN
               INSERT INTO [dbo].[DEMO_HK_SCH_DATA] VALUES (@i, @name, RAND() * 10000);
               SET @i += 1;
       END
end
go

 

create procedure [dbo].[sp_demo_insert_demo_hk_sch_cp]
with native_compilation, schemabinding, execute as owner
as
begin atomic with ( transaction isolation level = snapshot, language = N'us_english' )

       DECLARE @i INT = 1;
       DECLARE @test_case INT = RAND() * 10;
       DECLARE @name VARCHAR(20);

       IF @test_case = 1
             SET @name = 'Marc'
       ELSE IF @test_case = 2
             SET @name = 'Bill'
       ELSE IF @test_case = 3
             SET @name = 'George'
       ELSE IF @test_case = 4
             SET @name = 'Eliot'
       ELSE IF @test_case = 5
             SET @name = 'Matt'
       ELSE IF @test_case = 6
             SET @name = 'Trey'
       ELSE
             SET @name = 'Tracy';

       WHILE @i <= 1000000
       BEGIN
               INSERT INTO [dbo].[DEMO_HK_SCH] VALUES (@i, @name, RAND() * 10000);
               SET @i += 1;
       END
end
go

 

create procedure [dbo].[sp_demo_insert_demo_hk_sch_data_cp_d]
with native_compilation, schemabinding, execute as owner
as
begin atomic with ( transaction isolation level = snapshot, language = N'us_english', delayed_durability = on )

       DECLARE @i INT = 1;
       DECLARE @test_case INT = RAND() * 10;
       DECLARE @name VARCHAR(20);

       IF @test_case = 1
             SET @name = 'Marc'
       ELSE IF @test_case = 2
             SET @name = 'Bill'
       ELSE IF @test_case = 3
             SET @name = 'George'
       ELSE IF @test_case = 4
             SET @name = 'Eliot'
       ELSE IF @test_case = 5
             SET @name = 'Matt'
       ELSE IF @test_case = 6
             SET @name = 'Trey'
       ELSE
             SET @name = 'Tracy';

       WHILE @i <= 1000000
       BEGIN
               INSERT INTO [dbo].[DEMO_HK_SCH_DATA] VALUES (@i, @name, RAND() * 10000);
               SET @i += 1;
       END
end
GO

 

OK, it's time to run the different test scenarios. Below are the results and their related wait statistics that I found on my environment:

 

[Image: test results]

 

[Image: wait statistics]
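
For reference, the exact measurement harness isn't shown here; a minimal sketch to capture comparable duration and wait figures around one run would be something like this (the procedure name is one of those created above, and sys.dm_os_wait_stats is instance-wide, so a quiet test machine is assumed):

select wait_type, waiting_tasks_count, wait_time_ms
into   #waits_before
from   sys.dm_os_wait_stats;

declare @t0 datetime2 = sysdatetime();
exec dbo.sp_demo_insert_demo_hk_sch_data_cp;   -- swap in the procedure under test
select datediff(millisecond, @t0, sysdatetime()) as duration_ms;

-- wait types touched during the run
select w.wait_type,
       w.waiting_tasks_count - b.waiting_tasks_count as waits,
       w.wait_time_ms        - b.wait_time_ms        as wait_ms
from   sys.dm_os_wait_stats w
join   #waits_before b on b.wait_type = w.wait_type
where  w.wait_time_ms > b.wait_time_ms
order  by wait_ms desc;

drop table #waits_before;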

 

First of all, let's notice that using In-Memory optimized tables in INTEROP mode doesn't seem to improve the overall performance in all cases. Indeed, we still get pretty much the same duration and the same CPU time, because we are still writing to the transaction log and checkpoint files for the In-Memory optimized tables with data persisted on disk (SCHEMA_AND_DATA).

Next, using In-Memory tables with only the schema persisted on disk (SCHEMA_ONLY) leads to better results (a 2x performance gain), as we might expect. Indeed, the WRITELOG wait type has completely disappeared because data is not persisted in this case, so we considerably reduce the amount of records written to the transaction log.

Finally, let's have a look at the tests concerning natively compiled stored procedures. If we compare with the first tests (either for disk-based tables or for INTEROP), we can see that natively compiled stored procedures drastically reduce the CPU consumption (by roughly 97% in the best case). So, inserting 1M rows is very fast in this case.

 

Moreover, if we focus only on the results for In-Memory optimized tables with the different durability options (SCHEMA_AND_DATA and SCHEMA_ONLY), we may notice that using delayed transaction durability may help. Once again, persisting data by writing to the transaction log and checkpoint files slows down the insert process.

Some wait types still remain, such as PREEMPTIVE_OS_CREATEFILE, PREEMPTIVE_OS_CLOSEHANDLE and PREEMPTIVE_OS_FILEOPS, and I will probably focus on them later. At this point, I would suspect misconfigured storage or maybe my Hyper-V settings, but I have to verify this. I already double-checked that I enabled instant file initialization according to the Microsoft documentation, and I also disabled some other features like 8.3 names, file indexing and last-modification-date tracking. I will come back soon when I have more information.

Anyway, for the moment we have a new reference time here: 2'59'' for disk-based tables against 778 ms for In-Memory optimized tables plus natively compiled stored procedures in the best scenario, which suggests that with In-Memory optimized tables we may get a huge performance improvement, depending on the scenario.

So stay connected and see you soon for the next story!

 

Design Time @ Run Time: Apple Watch Put Through Its Paces in Beijing

Oracle AppsLab - Mon, 2015-05-18 10:50

Observations on UX research and road-testing wearable tech in the wild. The vehicle for today’s message is Ultan O’Broin (@usableapps), taking advantage of Oracle Applications User Experience events and outreach to evaluate the fitness and health option on the Apple Watch—and to continue his Fitbit Surge exploration—this time in China.

Emirates Apple Watch app used during the OAUX Asia trip.

The Watch Ethnography (say what?)

All the warnings about running in Beijing proved wrong: that my clothes would turn black; my skin would turn grey; I’d need a facemask; I wouldn’t see any other runners; I’d attract the attention of security personnel with my blue hair.

None of this happened.

I shoulda guessed. Running is one of the most “unasked-for-advice” activities out there, usually from non-runners or “joggers.”

Instead, I saw lots of other runners in Beijing’s parks and streets, mostly locals, with a small number of “ex-pats.” At times there were so many runners—and power walkers—early in the morning that I had to weave hard to get by them. On the long, straight streets of Beijing, I saw hardcore runners in action, percentage-wise more than, say, in Dublin.

Running in Beijing. Scene from Temple of Sun Park.

I saw lots of runners sporting colorful running gear; more than I’ve seen in San Francisco, though the styling was far short of the effortless funky co-ordination of the lemons, oranges, and blacks of the Nordic scene. Yes, I’m a running fashion snob. It was kinda hard to tell what fitness devices the Beijing crowd was packing, but I did see some Garmins: a sure sign of serious runners.

I did one run to the Forbidden City and Tiananmen Square, a 10 miler; hauling myself around the Central Business District and diplomatic zones on other days. The eyes of Chinese security guards swiveled to follow me as I strode by, but generally they seemed nonplussed with my blue hair and obvious Apple Watch. I was kinda disappointed I didn’t end up on CNN.

Running to the Forbidden City. Alas, selfie sticks were not forbidden.

The best time to run in Beijing is clearly in the early morning. Public parks were open by 5:30 AM and full of runners and walkers by the time I arrived. There is very bad air pollution in Beijing, but growing up in pre-smokeless-coal-carbon-fuel-ban Dublin, it really didn’t seem that menacing. However, I did detect a markedly poorer air quality later in the day. Your mileage may vary on that one, I guess.

The Device Findings

These runs in Beijing were another opportunity to test out the Fitbit Surge but really to try out the newer Apple Watch in another location. There are other comparisons between these two devices.

Both performed flawlessly, though I preferred the superior build quality of the Apple Watch, which is outstanding, and its UX with configurable glances display and superior styling. Henry Ford’s “Any Color As Long As It’s Black” as applied to smartwatches and fitness bands is #fashtech #fail by this stage.

Again, I was particularly impressed with the rapid GPS acquisition and holding capability of the Surge. I’ve used it on three continents now, and I love its robustness and long life battery.

Fitbit Surge GPS recording from Tiananmen Square run (on iOS)

The Apple Watch’s built-in Workout app proved easy to use for my runs. It has indoor and outdoor options for other activities too, whether with target metrics, distance, time, or calories, or you can use it for an “open” hustle. I was a little disappointed that the watch app doesn’t enable wearers to recall more basic run details from the last activity but being able to see real-time progress was great. I also enjoyed using the Apple Watch built-in Activity app too. Its simple and colorful progress analytics for exercise, moving, and standing were fun to glance at throughout the day, though the data is not for any serious runners or QS fanbois out there.

Using both of these Apple Watch apps together provided a compelling health and fitness experience.

Apple Watch Activity App

Being able to use both devices without carrying a smartphone with me on a run was the UX joy. Being freed from dodgy Bluetooth pairing and GPS signal worries, and that tricky music selection procedure required by a smartphone, saved me 5 mins (about three quarters of a mile distance at my speeds) at the start of each run. Being able to see my performance in real time—on the go—without having to fish out a smartphone, was awesome.

That’s what a smartwatch glance UX is all about: being kept in the moment.

The battery life of the Apple Watch didn’t make it longer than 10 hours because of my runs, though without this kind of exertion, it seemed to last most of my waking day, which is reasonable.

What’s Next?

I normally carry a smartphone when running as my music platform, but increasingly to take Instagram images during my journey. The Strava app GPS integration with Instagram is a fave running experience. I did carry my Apple iPhone 5 in Beijing, to take pictures—no, I don’t really carry a selfie stick—and to try out the Strava app for comparison. The Instagram integration seemed to be DOA though.

So, my thoughts on wearable tech super watch evolution, and the emergence of the standalone wearable device as the way to go for smartwatches, were reinforced from my Beijing experience.

However, a super watch UX needs to be flexible and offer more capability. I'd like to see onboard music and image capture capability on the watches themselves somehow. Audio notifications for time, speed and distance and geographic points would also enhance the experience immensely. However, what such enhancements would mean for the bane of wearable tech UX right now—battery life—let alone device size, remains just another challenge to be solved. And it will be.

And what UX research methodology lessons might be gleaned from running in Beijing with wearable tech? Firstly, don’t assume anything about your ethnographic experience upfront. Try it yourself on a dry run first to iron out any possible kinks. Run at different times of the day, over different distances and routes, in varying weather conditions, and, of course, with different devices along the way. Most importantly, find real native runners to follow around, and record what they do from start to finish, what they do offline as well as online, and with what tools, on their runs.

Running, just like user experience, is about the complete journey, a total contextual experience, not just where your rubber meets the road.

Mismatch

Floyd Teter - Mon, 2015-05-18 09:44
In the world of enterprise software, we sometimes find ourselves with a mismatch between sellers and customers.  From my worm's-eye view, we seem to be wrestling with one of those mismatches right now.

My typical week is mostly spent in conversations with customers.  Sometimes it's more of a formal work scenario with higher education institutions as part of my role with Sierra Cedar.  More often than not, it's a bit more informal:  advising customers across a wide range of industries in my role as an Oracle ACE Director (and, before you ask, yes - it's a freebie.  Just part of my advocacy role as a member of the Oracle ACE Director program.)

In listening to those customers - mostly about Oracle SaaS - the conversation is based on a description of an expectation or hope of an outcome or end state:

  1. Reduce the capital (think money here) required for using and maintaining enterprise software
  2. Eliminate or repurpose hardware
  3. Reduce headcount (due to budget or competitive pressures) or redirect skilled efforts toward more unique and value added activities (the latter is actually more common these days)
  4. Better information for making better decisions: more timely, more contextual, and more useful for both evaluation of the past and prediction of the future.
Now, keep in mind that no customer ever comes out and explicitly lists any of these four items. Every customer is different, with unique wants and desires. But, as I dig to get a better understanding, it typically comes back to some variation on one or more of these four. So I'm shortcutting the process here to get to the real point: the customers are interested in outcomes. Will SaaS help us to enable/reduce/achieve something that delivers the outcome we desire? Now just hang onto that concept for a moment; the idea of expected or hoped-for outcomes.

Think back to the last time you sat through an enterprise software or service demonstration, either as a seller or a customer or a partner. Where was the emphasis? I'll bet my dollars to your donuts that the demo focused on the capabilities of the software or service. Lots of emphasis on an easy-to-use search engine, or great user experience, or cool features, or a well-integrated business process, or easy-to-build-and-deliver dashboards. All of which are very important. But it's an emphasis on products and/or services.

The mismatch is between customers desiring enterprise outcomes and vendors/partners selling products and/or services to achieve those outcomes.  It's akin to shopping for a home in a certain neighborhood with a specific lifestyle in mind, while the builders sell to you based on the quality of their tools, their craftsmen, and the different homes they've built in the past - regardless of lifestyle or location.

In the enterprise software world, organizations often act like individuals.  Each one has unique needs and desires...which lead to unique outcomes.  The mismatch comes when vendors and partners sell services and products in response to those desired outcomes.  Folks, that is the mismatch.

So, how do I plan to handle this myself?  By first working harder to understand the outcomes desired by the customers with whom I engage in one way or another...mastering the who, what, why, when, where and how.  And then by explaining how the Oracle products and the services provided by Sierra Cedar can be utilized to help achieve those desired outcomes...and that's where I'll burn my calories going forward as I explain possibilities, lay out roadmaps, design solutions, and develop products/services.

So now that your life has been enlightened by this pearl of wisdom, may I ask for a small contribution on your way out?  Comments please.

Free Video : Troubleshoot Fusion Application : FAAdmin User is unable to create new users

Online Apps DBA - Mon, 2015-05-18 08:44
Last week we launched our YouTube channel covering videos related to Oracle Apps, Fusion Middleware, Fusion Applications, and Database, and since then 70 new users have subscribed. Today we added a new video related to an Oracle Fusion Applications issue that we encountered for one of our customers (Contact Us if you need any help to install and manage Oracle Fusion […] The post Free Video : Troubleshoot Fusion Application : FAAdmin User is unable...

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Oracle at the Gartner Digital Workplace Summit

WebCenter Team - Mon, 2015-05-18 05:47

Gartner Theme: Workplace Reimagined: Agile, Engaged, Empowered

Prepare for the new era of employee engagement. By 2017, 25% of organizations will lose their market position due to "digital business incompetence." The reason: lack of a holistic response to how consumerization trends change how work is best accomplished.
Hot topics to be covered:
  • Increase Employee Engagement and Agility
  • Enable New Ways of Working
  • Consumerize Your Workforce's User Experience
  • Create World-Class Collaboration Environments
  • Re-architect for the Digital Workplace
Oracle is proud to be a Platinum sponsor of the Gartner Digital Workplace Summit, May 18 - 20, 2015 in Orlando, FL. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions.

Stop By The Oracle Booth And Chat With Product Experts To Learn:

  • How Oracle can help you content-enable your cloud apps and other custom apps 
  • Why you need to be delivering omni-channel & web experiences
  • How you can secure collaborations with remote employees, partners and suppliers
  • Why your organization needs a  Digital Business strategy
  • How your employees can securely share large files both inside and outside the firewall
  • Why you need to deliver a consistent, seamless and mobile customer experience 

Booth Hours:

  • Monday, May 18 – 11:45 a.m. to 2:00 p.m.
  • Monday, May 18 – 4:45 p.m. to 6:45 p.m.
  • Tuesday, May 19 – 12:00 p.m. to 2:00 p.m. 

Attend the Oracle Featured Solution Provider Session:
Date: Tuesday, May 19, 2015 | 10:30 a.m.-11:15 a.m.
Location: Osceola D
Title: Delivering Contextual Collaboration & Engagement in a Digital World

Abstract: Reaching audiences through a robust online experience across multiple channels and devices is critical in today’s ever-changing digital world. You need a digital experience platform that helps create, manage, simplify and integrate your processes, content, analytics, and social capabilities. Attend this session to learn how Oracle WebCenter provides content-enabled cloud processes to improve agility and performance and creates interactions that are personal, secure, and multichannel, delivering contextual collaboration and engagement to customers, employees and partners.

Speaker:  David Le Strat, Senior Director of Product Management, Oracle  

SQL Plan Baseline Manual Evolution

Dominic Brooks - Mon, 2015-05-18 04:39

I’ve never really spent time on evolution because most of the time I use baselines for emergency sql plan management and not with the added value of controlled evolution with which the feature was conceived.

But some observations on sql plan baseline evolution originating from the questions:

  • What does evolution actually do?
  • What information does it use?
  • What happens with DML?

Starting point – one baselined plan

1. FTS plan in memory from SQL which should do FTS and put in a baseline

drop table t1;
drop table t2;

create table t1 
(col1  number 
,col2  varchar2(50) 
,flag  varchar2(2));
 
insert into t1 
select rownum 
,      lpad('X',50,'X') 
,      case when rownum = 1 
            then 'Y1'
            when rownum = 2 
            then 'Y2' 
            when mod(rownum,2) = 0
            then 'N1'
            else 'N2' 
       end 
from   dual 
connect by rownum <= 100000; 

exec dbms_stats.gather_table_stats(USER,'T1',method_opt=>'FOR COLUMNS flag SIZE 5',no_invalidate=>FALSE); 
 
create index i1 on t1 (flag);

var n varchar2(2) 
exec :n := 'N1';
select /*+ domtest */ count(*), max(col2) from t1 where flag = :n;
select * from table(dbms_xplan.display_cursor);

declare
  l_op pls_integer;
begin
  l_op :=
  dbms_spm.load_plans_from_cursor_cache('45sygvgu8ccnz');
end;
/

select to_char(signature), sql_handle, sql_text, origin, last_verified, enabled, accepted from dba_sql_plan_baselines;

TO_CHAR(SIGNATURE)  SQL_HANDLE           SQL_TEXT                                                          ORIGIN       LAST_VERIFIED ENABLED ACCEPTED
------------------- -------------------- ----------------------------------------------------------------- ------------ ------------- ------- --------
8384366354684364262 SQL_745b40c6fdb2c5e6 select /*+ domtest */ count(*), max(col2) from t1 where flag = :n MANUAL-LOAD                YES     YES      

We have a full table scan baselined.

Step 2 – an alternative plan

Next, let’s execute a SQL statement which would rather do an index scan for a different set of binds.

When there is a baselined plan, the optimizer does this automatically at hard parse time.

If a lower-cost plan is generated, it is stored for future evaluation / evolution.

We need to flush or invalidate the existing cursor to get that hard parse though.

Then we get an AUTO_CAPTURE plan in DBA_SQL_PLAN_BASELINES, ACCEPTED = NO

comment on table t1 is 'flushme';
var n varchar2(2) 
exec :n := 'Y1';
select /*+ domtest */ count(*), max(col2) from t1 where flag = :n;
select * from table(dbms_xplan.display_cursor);

select to_char(signature), sql_handle, sql_text, origin, last_verified, enabled, accepted from dba_sql_plan_baselines;

TO_CHAR(SIGNATURE)  SQL_HANDLE           SQL_TEXT                                                          ORIGIN       LAST_VERIFIED ENABLED ACCEPTED
------------------- -------------------- ----------------------------------------------------------------- ------------ ------------- ------- --------
8384366354684364262 SQL_745b40c6fdb2c5e6 select /*+ domtest */ count(*), max(col2) from t1 where flag = :n MANUAL-LOAD                YES     YES      
8384366354684364262 SQL_745b40c6fdb2c5e6 select /*+ domtest */ count(*), max(col2) from t1 where flag = :n AUTO-CAPTURE               YES     NO   

Step 3 – Manual Evolution

Now let’s do an evolution.
And I’m going to set MODULE so we can track some sql.
(Perhaps we could/should do a sql trace but, having done it, it doesn’t really tell us much more in this case).

exec dbms_application_info.set_module('DOMTEST','SPM');

set serveroutput on
set long 10000
DECLARE
    r CLOB;
BEGIN
    r := 
    dbms_spm.evolve_sql_plan_baseline
    (sql_handle => 'SQL_745b40c6fdb2c5e6',verify=>'YES',commit=>'NO');
    dbms_output.put_line(r);
END;
/

Why would you do VERIFY => ‘NO’?

Only use case really is if you’ve already evaluated the plan yourself and want to just get it accepted.

The combination of VERIFY=> NO and COMMIT=> NO does nothing.

Our report looks like this:

------------------------------------------------------------------------------
                        Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
  SQL_HANDLE = SQL_745b40c6fdb2c5e6
  PLAN_NAME  = 
  TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
  VERIFY     = YES
  COMMIT     = NO

Plan: SQL_PLAN_78qu0svyv5jg68576eb1f
------------------------------------
  Plan was verified: Time used .1 seconds.
  Plan passed performance criterion: 314.79 times better than baseline plan.

                            Baseline Plan      Test Plan       Stats Ratio
                            -------------      ---------       -----------
  Execution Status:              COMPLETE       COMPLETE
  Rows Processed:                       1              1
  Elapsed Time(ms):                 3.708           .047             78.89
  CPU Time(ms):                     3.777              0
  Buffer Gets:                        944              3            314.67
  Physical Read Requests:               0              0
  Physical Write Requests:              0              0
  Physical Read Bytes:                  0              0
  Physical Write Bytes:                 0              0
  Executions:                           1              1

-------------------------------------------------------------------------------
                                 Report Summary
-------------------------------------------------------------------------------
Number of plans verified: 1
Number of plans accepted: 0

I’ve also run this on 12.1.0.2 and there doesn’t seem to be a massive difference in behaviour although there are a few extra columns and stats in the output.
(including the two plans which aren’t in the output of 11g because they’re not stored although they could be derived from the analysis sql).

What? Where? How?

What has been run behind the scenes to do this?

What information has been used to do the evaluation?

select * from v$sql where module = 'DOMTEST' order by first_load_time;

Statements of interest:

/* SQL Analyze(686,0) */ select /*+ domtest */ count(*), max(col2) from t1 where flag = :n

/* SQL Analyze(686,0) */ select /*+ domtest */ count(*), max(col2) from t1 where flag = :n

SELECT obj_type, plan_id, comp_data FROM sqlobj$data WHERE signature = :1 AND category = :2;

SELECT st.sql_text, sod.comp_data, sox.optimizer_env, sox.bind_data, sox.parsing_schema_name, sox.creator
FROM   sql$text st, sqlobj$data sod, sqlobj$auxdata sox
WHERE  sod.signature = st.signature 
AND    sod.signature = :1 
AND    sod.category  = :2 
AND    sod.obj_type  = :3 
AND    sod.plan_id   = :4 
AND    sod.signature = sox.signature 
AND    sod.category  = sox.category
AND    sod.obj_type  = sox.obj_type
AND    sod.plan_id   = sox.plan_id;

What was run to evaluate the performance of the relative plans?

See the first two /* SQL Analyze */ statements.

We can look more closely at v$sql for these:

select s.sql_text, s.child_number, s.plan_hash_value phv
,      (select extractvalue(xmltype(other_xml),'other_xml/info[@type="plan_hash_2"]')
        from   v$sql_plan p
        where  p.sql_id          = s.sql_id
        and    p.plan_hash_value = s.plan_hash_value
        and    p.other_xml is not null) plan_hash_2
, s.executions, s.elapsed_time/1000/s.executions elapsed_ms, s.cpu_time/1000/s.executions cpu_ms, s.buffer_gets/s.executions, b.value_string
from   v$sql s, table(dbms_sqltune.extract_binds(s.bind_data)) b
where  s.module = 'DOMTEST' 
and    s.sql_id = '8rnh80j2b09kt';

SQL_TEXT                                                                                    CHILD_NUMBER PHV        PLAN_HASH_2 EXECUTIONS ELAPSED_MS CPU_MS S.BUFFER_GETS/S.EXECUTIONS VALUE_STRING
------------------------------------------------------------------------------------------- ------------ ---------- ----------- ---------- ---------- ------ -------------------------- ------------
/* SQL Analyze(686,0) */ select /*+ domtest */ count(*), max(col2) from t1 where flag = :n  0            1634389831 1634389831  10         4.0923     4.0994 944                        Y1
/* SQL Analyze(686,0) */ select /*+ domtest */ count(*), max(col2) from t1 where flag = :n  1            2239163167 2239163167  10         1.1728     0.1999 3                          Y1

We can see where some of the stats from the report have come from – they look to be an average over a number of executions in this example. The timings aren’t an exact match.

Note that in this case PHV = PHV2, but PHV2 is the plan_id in the underlying baselined tables as I have discussed previously elsewhere.

Where did the SQL statement come from?

Where did the bind data come from?

That’s the last statement in my “statements of interest” above.

SELECT st.sql_text, sod.comp_data, sox.optimizer_env, sox.bind_data, sox.parsing_schema_name, sox.creator
FROM   sql$text st, sqlobj$data sod, sqlobj$auxdata sox
WHERE  sod.signature = st.signature 
AND    sod.signature = :1 
AND    sod.category  = :2 
AND    sod.obj_type  = :3 
AND    sod.plan_id   = :4 
AND    sod.signature = sox.signature 
AND    sod.category  = sox.category
AND    sod.obj_type  = sox.obj_type
AND    sod.plan_id   = sox.plan_id;

The basic sql plan baseline mechanism is by signature (V$SQL.EXACT_MATCHING_SIGNATURE) but SQL$TEXT stores the actual sql text.

How else could it evaluate the baselined plans during evolution? The SQL might not be in memory or in AWR!

The bind data used to execute the statement comes from the hard parse which generated the lower-cost plan of interest, and it is stored in SYS.SQLOBJ$AUXDATA.BIND_DATA.

Makes sense, right?
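
A quick way to see that signature link for the test statement (a sketch; the sql_id is the one loaded at the start of this post):

select s.sql_id,
       to_char(s.exact_matching_signature) signature,
       b.sql_handle,
       b.plan_name,
       b.origin,
       b.accepted
from   v$sql s
       join dba_sql_plan_baselines b
         on b.signature = s.exact_matching_signature
where  s.sql_id = '45sygvgu8ccnz';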

What happens with DML?

There's nothing I can find which spells it out in black and white, but only the SELECT part of the DML is executed during verification, be it INSERT ... SELECT, UPDATE or MERGE.

Even a sql trace doesn’t make this crystal clear.

But using a similar method to above:

create table t2
(cnt   number 
,col2  varchar2(50) 
,flag  varchar2(2));

var n varchar2(2) 
exec :n := 'N1';
insert into t2
select /*+ domtest */ count(*), max(col2), :n from t1 where flag = :n;
select * from table(dbms_xplan.display_cursor);

select sql_id, child_number, is_bind_aware, is_bind_sensitive, is_shareable, to_char(exact_matching_signature) sig
,      executions, plan_hash_value
from   v$sql 
where  sql_id = '08ravsws1s6bn';

declare
  l_op pls_integer;
begin
  l_op :=
  dbms_spm.load_plans_from_cursor_cache('08ravsws1s6bn');
end;
/

select to_char(signature), sql_handle, sql_text, origin, last_verified, enabled, accepted from dba_sql_plan_baselines;

-- the DDL comment invalidates the existing cursor so that the next execution is hard parsed
comment on table t2 is 'flushme';

var n varchar2(2) 
exec :n := 'Y1';
insert into t2
select /*+ domtest */ count(*), max(col2), :n from t1 where flag = :n;
select * from table(dbms_xplan.display_cursor);

select to_char(signature), sql_handle, sql_text, origin, last_verified, enabled, accepted from dba_sql_plan_baselines;

-- tag the session so that the SQL Analyze statements run during evolution can be found by module
exec dbms_application_info.set_module('DOMTEST2','SPM');
select * from v$sql where module = 'DOMTEST2' order by first_load_time;

set serveroutput on
set long 10000
DECLARE
    r CLOB;
BEGIN
    r := 
    dbms_spm.evolve_sql_plan_baseline
    (sql_handle => 'SQL_4ed45b79c88f3392',verify=>'YES',commit=>'NO');
    dbms_output.put_line(r);
END;
/

We get a similar report and similar evaluation sql.
We can see from ROWS_PROCESSED that no rows were actually inserted.

select s.sql_text, s.child_number, s.plan_hash_value phv
,      (select extractvalue(xmltype(other_xml),'other_xml/info[@type="plan_hash_2"]')
        from   v$sql_plan p
        where  p.sql_id          = s.sql_id
        and    p.plan_hash_value = s.plan_hash_value
        and    p.other_xml is not null) plan_hash_2
, s.rows_processed,s.executions, s.elapsed_time/1000/s.executions elapsed_ms, s.cpu_time/1000/s.executions cpu_ms, s.buffer_gets/s.executions
from   v$sql s
where  s.module = 'DOMTEST2' 
and    s.sql_id = 'b04dpwja6smx7';

SQL_TEXT                                                                                                       CHILD_NUMBER PHV        PLAN_HASH_2 ROWS_PROCESSED EXECUTIONS ELAPSED_MS CPU_MS S.BUFFER_GETS/S.EXECUTIONS
-------------------------------------------------------------------------------------------                    ------------ ---------- ----------- -------------- ---------- ---------- ------ --------------------------
/* SQL Analyze(771,0) */ insert into t2 select /*+ domtest */ count(*), max(col2), :n from t1 where flag = :n  0            1634389831 1634389831  0              10         4.1668     4.1993 944                       
/* SQL Analyze(771,0) */ insert into t2 select /*+ domtest */ count(*), max(col2), :n from t1 where flag = :n  1            2239163167 2239163167  0              10         0.1512     0.1  3                         

I also tried with the addition of a PK, just to double check that rows really couldn't be inserted during evolution. Nothing to see.
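Something along these lines (a sketch of the idea, with an illustrative constraint name):

-- with a primary key in place, any row genuinely inserted by the evolution
-- executions would either show up here or fail with ORA-00001
alter table t2 add constraint t2_pk primary key (flag);

select flag, count(*) from t2 group by flag;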

Finally…

One note about evolution behaviour and the adjustment of the threshold for plan verification, i.e. the bit which results in this:

Plan was verified: Time used .1 seconds.
Plan passed performance criterion: 314.79 times better than baseline plan.

You can use parameter _plan_verify_improvement_margin, e.g.

ALTER SYSTEM SET "_plan_verify_improvement_margin"=120;

Unfortunately the valid range for the parameter is between 100 and 10000, so not high enough for me to show its influence on the results above (> 300x improvement).
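For reference, the current value and default status of the hidden parameter can be checked with the usual X$ lookup (a sketch; it needs SYS access to the X$ views):

select i.ksppinm  parameter_name
,      v.ksppstvl current_value
,      v.ksppstdf is_default
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_plan_verify_improvement_margin';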


Sharepoint and Nintex Workflow: automate your Business Processes

Yann Neuhaus - Mon, 2015-05-18 03:58
 


 

What is a WORKFLOW?

A workflow is the representation of a sequence of operations, declared as the work of a person, a group, an organization of staff, or one or more simple or complex mechanisms. These operations can be automated using specific tools.

In other words, you could compare a workflow to a recipe.
Imagine you have to cook something sweet: first you choose the kind of dessert, then you open the book and follow the steps one by one, each with its own validation, until you finally get a beautiful apple pie. Business processes work the same way: step by step, each validation counts, in order to get a final "product".

Nintex Workflow

Nintex is an independent software vendor and workflow company with customers in over 90 countries. Founded in 2004, the company's products include Nintex Workflow (recognized by Forrester analysts as the most utilized third-party tool for Microsoft SharePoint), Nintex Forms, Nintex Workflow for Project Server, Nintex Live and the Nintex Mobile apps.

SharePoint and Nintex Workflow are the best combination for advanced workflows

SharePoint already has an adequate function integrated that enables you to use workflows.
As soon as users start to use it, they become familiar with the tool and, as usual, they want more and more... and this is where the advanced needs begin.


Nintex is the recommended tool for this. It can be integrated into the SharePoint user interface, so it does not require the use of Visual Studio or SharePoint Designer. Nintex Workflow provides an intuitive visual interface and allows users and IT pros to create and maintain workflows more easily. Nintex Workflow 2007 and 2010 add a 'drag and drop' workflow designer, connectivity and advanced workflow features to the Microsoft SharePoint 2010 platform.

Why automation?

Workflow automation ensures smooth interaction between documents, users and management applications.


The advantages of Nintex Workflow

  • It is intuitive and easy to use.
  • You can view the real-time status of workflows.
  • It improves your business performance.
  • It offers reusable workflows.
  • It is easy to use and manage; no client software is needed.
CONCLUSION

Benefits of workflow automation

  • Optimum productivity and competitiveness through continuous processing of workflows.
  • Shorter response times and better customer service through faster processing and full availability of information.
  • Optimal cash flow thanks to a better customer invoice processing cycle.
  • Management of financial and legal risks through auditable processes and data retention over several years.
  • Reliability and consistency of processes, ensuring compliance with procedures.
  • Transparent workflows with continuous tracking of documents, identifying bottlenecks and speeding up decision making.
  • A complete history of all changes made by users, available at any time.
  • Quick adaptability to changes in the environment or in customers' business needs, providing a decisive competitive advantage.

 

Using workflow automation with Nintex on SharePoint makes life easier.

Amazon Echo Official SDK

Oracle AppsLab - Sun, 2015-05-17 16:03

Image from wired.com

Back in February I was invited to participate in a pre-beta release of the Amazon Echo SDK. I was under NDA, so I couldn't share any of my findings here. But now that the NDA has expired, I can share some of the integrations I did with this interesting device.

First of all, I want to comment on the fact that none of the OS-level voice assistants on the market quite get it right when it comes to interacting with third-party integrations. Let me explain: neither Google Now, Siri nor Amazon Echo will let you interact with a voice "app" unless you "open" or "start" that app first. For example, to start an app in any of the OSes mentioned above I have to do the following:

“[Ok Google], [Hey Siri], or [Alexa] open [name of application]”…”close” or “exit” [name of application]

Then I can start interacting with that application. This interaction paradigm belongs to a desktop model where you are used to open and close programs. And furthermore these actions are not even part of the mobile experience.

My proposed solution to fix this problem would be for the systems to create an "intent" model where a user could decide what to do with certain defined utterances. For example:

“[Ok Google], [Hey Siri], or [Alexa] do I have any new mail?”

In this case, the user should have the option to decide which will be the default application to handle “mail” through settings or through a first program run.

When you install an app for the first time, the system should ask:

“Would you like to use this app to handle your voice command for mail?”

Voice as the next user interface

Voice recognition and natural language processing (NLP) algorithms have advanced exponentially. These systems are getting truly ready for primetime. The use cases are only limited by our futuristic view of interacting with our systems with just our voice.

This is where the Amazon Echo shines. The idea of picking up my phone and commanding it with my voice feels unnatural to me. The Amazon Echo just sits there on my desk and is always ready for my commands. One could argue that Google Now and Siri could do the same, but they lack the rich sound presence and visual cues (the RGB ring around the top) that give the Echo the better experience.

Demos

Without further ado, here are two demos of service integration I did with the Echo. I used the Temboo libraries for the Facebook, Twitter and Uber integrations; for IMAP mail, iCal and Philips Hue I created my own. All of this, of course, was done in Java.

Office Automation

Internet of Things demo

So would you get an Amazon Echo?

OBIEE 11.1.1.9 Now Supports HiveServer2 and Cloudera Impala

Rittman Mead Consulting - Sun, 2015-05-17 15:52

As you all probably know I’m a big fan of Oracle’s BI and Big Data products, but something I’ve been critical of is OBIEE11g’s lack of support for HiveServer2 connections to Hadoop clusters. OBIEE 11.1.1.7 supported Hive connections using the older HiveServer1 protocol, but recent versions of Cloudera CDH4 and CDH5 use the HiveServer2 protocol by default and OBIEE 11.1.1.7 wouldn’t connect to them; not unless you switched to the Windows version of OBIEE and used the Cloudera ODBC drivers instead, which worked but weren’t supported by Oracle.

OBIEE 11.1.1.9 addresses this issue by shipping more recent DataDirect ODBC drivers for Hive that are compatible with the HiveServer2 protocol used by CDH4 and CDH5 (check out this other article by Robin on general new features in 11.1.1.9). Oracle only really support Hive connectivity for Linux installs of OBIEE, and the Linux version of OBIEE 11.1.1.9 comes with the DataDirect ODBC drivers already installed and configured for use; all you have to do then is set up the ODBC connection in the odbc.ini file on Linux and install the Cloudera Hive ODBC drivers on your Windows workstation for the Admin tool (the Hive ODBC drivers that Oracle supply on MOS still look like the old HiveServer1 version, though I could be wrong). To check that it all worked on this new 11.1.1.9 version of OBIEE11g I therefore downloaded and installed the Windows Cloudera Hive ODBC drivers and set up the System DSN like this:

[Screenshot: Windows System DSN using the Cloudera ODBC Driver for Apache Hive]

and set up a corresponding entry in the Linux OBIEE 11.1.1.9’s odbc.ini file, like this:

[Screenshot: matching Hive entry in the Linux server's odbc.ini file]

with the key thing being to make sure you have matching DSN names on both the Windows workstation (for the Admin tool initial datasource setup and table metadata import) and the Linux server (for the actual online connection to Hive from the BI Server, and subsequent data retrieval). One thing I did notice was that whilst I could connect to the Hive database server and set up the connection in the Admin tool, I couldn’t view any Hive tables and had to manually create them myself in the RPD Physical Layer – this could just be a quirk on my workstation install though so I wouldn’t read too much into it. Checking connectivity in the Admin tool then showed it connecting properly and retrieving data from Hive on the Hadoop cluster. I didn’t test Kerberos-authentication connections but I’m assuming it’d work, as the previous version of OBIEE 11.1.1.7 on Linux just failed at this point anyway. The docs are here if you’d like to look into any more details, or check the full set of setup steps.

[Screenshot: Admin tool connected to Hive and retrieving data]

For Cloudera Impala connections, you’re directed in the docs to download the Windows Cloudera Impala ODBC drivers as Oracle don’t even ship them on MOS, but again the Linux install of OBIEE 11.1.1.9 comes with DataDirect Impala drivers that are already setup and ready for use (note that if you upgrade from 11.1.1.7 to 11.1.1.9 rather than do the fresh install that I did for testing purposes, you’ll need to edit the opmn.xml file to register these updated DataDirect drivers). Then it’s a case of setting the Windows System DSN up for the initial metadata import, like this:

[Screenshot: Windows System DSN using the Cloudera ODBC Driver for Impala]

then creating a corresponding entry in the Linux server’s odbc.ini file, like this:

[Screenshot: matching Impala entry in the Linux server's odbc.ini file]

Note that the docs do mention the issue with earlier versions of Impala where the Impala server expects a LIMIT clause when ORDER BY is used in Impala SQL queries, and they give a couple of workarounds to fix the issue and stop Impala expecting this clause; for more recent (CDH5+) versions of Impala this requirement is in fact lifted and you can connect to and use Impala without needing to make the configuration change mentioned in the doc (or use the workaround I mentioned in this earlier blog post). Checking connectivity in the Admin tool then shows the connection is making its way through OK, from the Windows environment to the Linux server's ODBC connection:

[Screenshot: Admin tool connectivity check through the Impala ODBC connection]

and creating a quick report shows data returned as expected, and considerably quicker than with Hive.

[Screenshot: analysis returning data from Impala]

As I said, I’ve not really tested either of these two connections using Kerberos or any edge-case setups, but connectivity seems to be working and we’re now in a position where OBIEE11g can properly connect to both Hive, and Impala, on recent CDH installs and of course the Oracle Big Data Appliance. Good stuff, now what about Spark SQL or ElasticSearch..?

Categories: BI & Warehousing

Variations on 1M rows insert (6): CPU Flame Graph

Yann Neuhaus - Sun, 2015-05-17 12:21

If you followed the 1M rows insert variations, you have seen how something simple drew me into an investigation with the latest tools available to see where the time is spent. When the time is CPU time, there is no wait event to give the detail of DB Time. I used perf events in the previous post and I'll now visualize them with a CPU Flame Graph. My goal is to understand why my test case is 2x longer in 11.2.0.4 than in 11.2.0.3.

The idea to use Flame Graphs came from Luca Canali.

With PL/SQL and LONGs (and PRODUCT_USER_PROFILE)

Gary Myers - Sun, 2015-05-17 00:45
One use for the 12.1.0.2 addition of PL/SQL functions in the WITH clause is to get the HIGH_VALUE of a partition in a usable column format.

with
 FUNCTION char2000(i_tab in varchar2, i_part in varchar2) 
 RETURN VARCHAR2 IS
   v_char varchar2(2000);
 BEGIN
   select high_value into v_char
   from user_tab_partitions a
   where a.table_name = i_tab
   and a.partition_name = i_part;
   --
   if v_char like 
     'TO_DATE(''%'', ''SYYYY-MM-DD HH24:MI:SS'', ''NLS_CALENDAR=GREGORIAN'')'
   then
      v_char := regexp_substr(v_char,q'{'[^']+'}');
   end if;
   --
   RETURN v_char;
 END;
select table_name, partition_name, 
       char2000(table_name, partition_name) high_val,
       partition_position, tablespace_name, 
       segment_created, num_rows, last_analyzed, 
       global_stats, user_stats
from user_tab_partitions ut
where segment_created='YES'
order by table_name, high_val;
/

Oracle have spent well over a decade telling us that LONG is deprecated, but still persist in using it in their data dictionary. PL/SQL is the only practical way of getting the values into a more usable data type.
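If you are stuck on a version before 12.1.0.2, the same LONG-to-VARCHAR2 conversion works as a standalone stored function; a minimal sketch based on the query above, without the date tidy-up:

create or replace function partition_high_value
  (i_tab in varchar2, i_part in varchar2) return varchar2 is
  v_char varchar2(2000);
begin
  -- the implicit LONG to VARCHAR2 conversion only happens inside PL/SQL
  select high_value into v_char
  from   user_tab_partitions
  where  table_name     = i_tab
  and    partition_name = i_part;
  return v_char;
end;
/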

You will want the latest version of the SQL*Plus client. For SQL, sqlplus treats the semi-colon as a "go off and execute this". PL/SQL has traditionally needed a period on an otherwise empty line to switch from the statement editor to the command prompt.

For example, having PL/SQL embedded in the SQL statement confuses the older clients, and we get a bout of premature execution.


In the 12.1.0.2 client, a WITH statement is treated as a PL/SQL statement if it contains PL/SQL (ie needing the period statement terminator). If it doesn't contain PL/SQL then it doesn't (so there's no change required for older scripts). That said, I'd recommend consistently using the period terminator for PL/SQL and SQL.  

The SQLcl client (still beta/early adopter) currently manages the straight select okay, but fails if it is part of a CREATE VIEW. 

Tim Hall has already noted that the WITH PL/SQL doesn't currently work when embedded in a PL/SQL block (such as a procedure), but that is expected in a future release. 
Oh, and while it isn't documented in the manual, WITH is its own statement for the purposes of PRODUCT_USER_PROFILE. I can't imagine anyone on the planet is still using PRODUCT_USER_PROFILE for security. If they are, they need to rethink in light of WITH statements and result sets being returned by PL/SQL.
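For the record, if anyone really is relying on it, disabling the new statement appears to follow the same PRODUCT_USER_PROFILE pattern as disabling any other command; a sketch only, with an illustrative username:

insert into system.product_user_profile (product, userid, attribute, char_value)
values ('SQL*Plus', 'SCOTT', 'WITH', 'DISABLED');
commit;

After reconnecting, SQL*Plus should then refuse to run WITH statements for that user.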


Presentation Slides and Photos from the Rittman Mead BI Forum 2015, Brighton and Atlanta

Rittman Mead Consulting - Sat, 2015-05-16 13:41

It’s now the Saturday after the two Rittman Mead BI Forum 2015 events, last week in Atlanta, GA and the week before in Brighton, UK. Both events were a great success and I’d like to say thanks to the speakers, attendees, our friends at Oracle and my colleagues within Rittman Mead for making the two events so much fun. If you’re interested in taking a look at some photos from the two events, I’ve put together two Flickr photosets that you can access using the links below:

NewImage

We’ve also uploaded the presentation slides from the two events (where we’ve been given permission to share them) to our website, and you can download them including the Delivering the Oracle Information Management and Big Data Reference Architecture masterclass using the links below:

Delivering the Oracle Information Management & Big Data Reference Architecture (Mark Rittman & Jordan Meyer, Rittman Mead)

Brighton, May 7th and 8th 2015

Atlanta, May 14th and 15th 2015

Congratulations also to Emiel van Bockel and Robin Moffatt who jointly-won Best Speaker award at the Brighton event, and to Andy Rocha and Pete Tamsin who won Best Speaker in Atlanta for their joint session. It’s time for a well-earned rest now and then back to work, and hopefully we’ll see some of you at KScope’15, Oracle Openworld 2015 or the UKOUG Tech and Apps 2015 conferences later in 2015.

Categories: BI & Warehousing