Feed aggregator

Partition By and Group By

Tom Kyte - Fri, 2016-07-01 07:06
Hi Tom, I know the purpose of Partition by is to divide the data into groups and Group By is to retrieve result of some aggregate functions. What is the main difference between partition by and group by?
Categories: DBA Blogs

Understanding CPU on AIX Power SMT Systems

Jeremy Schneider - Fri, 2016-07-01 03:30

This month I worked with a Chicagoland company to improve performance for eBusiness Suite on AIX. I’ve worked with databases running on AIX a number of times over the years. Nevertheless, I got thrown for a loop this week.

TLDR: In the end, it came down to a fundamental change in resource accounting that IBM introduced with the POWER7 processor in 2010. The bottom line is twofold:

  1. If SMT is enabled, then the meaning of CPU utilization numbers changes: the CPU utilization numbers for individual processes mean something completely new.
  2. The Oracle database is not aware of this change (I haven’t tested newer versions, but they might also be affected). As a result, all CPU time values captured in AWR reports and extended SQL traces are wrong and misleading if the database is running on AIX/POWER7/SMT. (I haven’t tested CPU time values at other places in the database, but they might also be wrong.)

On other unix operating systems (for example Linux with Intel Hyper-Threading), the CPU numbers for an individual process reflect the time that the process spent on the CPU. It’s pretty straightforward: 100% means that the process is spending 100% of its time on the logical CPU (a.k.a. thread – each hardware thread context on a hyper-threaded core appears as a CPU in Linux). However AIX with SMT is different. On AIX, when you look at an individual process, the CPU utilization numbers reflect IBM’s opinion about what percentage of physical capacity is being used.

Why did IBM do this? I think that their overall goal was to help us in system-wide monitoring and capacity planning – however it came at the expense of tuning individual processes. They are trying to address real shortcomings inherent to SMT – but as someone who does a lot of performance optimization, I find that their changes made my life a lot more difficult!


Ls Cheng started a conversation in November 2012 on the Oracle-L mailing list about his database on AIX with SMT enabled, where the CPU numbers in the AWR report didn’t even come close to adding up correctly. Jonathan Lewis argued that double-counting was the simplest explanation while Karl Arao made the case for time in the CPU run queue. A final resolution was never posted to the list – but in hindsight it was almost certainly the same problem I’m investigating in this article. It fooled all of us. CPU-intensive Oracle workloads on AIX/POWER7/SMT frequently mislead performance experts into thinking there is a CPU run-queue problem at the OS level. In fact, after researching this article I went back and looked at my own final report from a consulting engagement with an AIX/SMT client back in August 2011 and realized that I had made this mistake myself!

As far as I’m aware, Marcin Przepiorowski was the first person to really “crack” the case when he researched and published a detailed explanation back in February 2013 with his article Oracle on AIX – where’s my cpu time?. Marcin was tipped off by Steve Pittman’s detailed explanation published in a December 2012 article Understanding Processor Utilization on Power Systems – AIX. Karl Arao was also researching it back in 2013 and published a lot of information on his tricky cpu aix stuff tiddlywiki page. Finally, Graham Wood was digging into it at the same time and contributed to several conversations amongst Oak Table members. Just to be clear: I’m not posting any kind of new discovery! :)

However – despite the fact that it’s been public for a few years – most people don’t understand just how significant this is, or even understand exactly what the problem is in technical terms. So this is where I think I can make a contribution: by giving a few simple demonstrations of the behavior which Steve, Marcin and Karl have documented.

CPU and Multitasking

I recently spent a few years leading database operations for a cloud/SaaS company. Perhaps one of the most striking aspects of this job was that I had crossed over… from being one of the “young guys” to being one of the “old guys”! I certainly wasn’t the oldest guy at the company, but more than half my co-workers were younger than me!

Well, my generation might be the last one to remember owning personal computers that didn’t multitask. Ok… I know that I’m still working alongside plenty of folks who learned to program on punch cards. But at the other end of the spectrum, I think that DOS was already obsolete when many of my younger coworkers started using technology! Some of you younger devs started with Windows 95. You’ve always had computers that could run two programs in different windows at the same time.

Sometimes you take a little more notice of tech advancements you personally experience and appreciate. I remember it being a big deal when my family got our first computer that could do more than one thing at a time! Multitasking (or time sharing) is not a complicated concept. But it’s important and foundational.


[Figure: CPU time on a single-CPU multitasking system – colored segments show the time each program (e.g. P1) spends on the CPU]

So obviously (I hope), if there are multiple processes and only a single CPU then the processes will take turns running. There are some nuances around if, when and how the operating system might force a process to get off the CPU but the most important thing to understand is just the timeline pictured above. Because for the rest of this blog post we will be talking about performance and time.

Here is a concrete example of the illustration above: one core in my laptop CPU can copy 13GB of data through memory in about 4-5 seconds:

$ time -p taskset 2 dd if=/dev/zero of=/dev/null bs=64k count=200k
204800+0 records in
204800+0 records out
13421772800 bytes (13 GB) copied, 4.73811 s, 2.8 GB/s
real 4.74
user 0.13
sys 4.54

The “taskset” command on Linux pins a command to specific CPUs – the affinity mask 2 (binary 10) selects logical CPU 1 – so “dd” is only allowed to execute on that one CPU. This way, my example runs exactly like the illustration above, with just a single CPU.

What happens if we run two jobs at the same time on that CPU?

$ time -p taskset 2 dd if=/dev/zero of=/dev/null bs=64k count=200k &
[1] 18740

$ time -p taskset 2 dd if=/dev/zero of=/dev/null bs=64k count=200k &
[2] 18742

204800+0 records in
204800+0 records out
13421772800 bytes (13 GB) copied, 9.25034 s, 1.5 GB/s
real 9.25
user 0.09
sys 4.57
204800+0 records in
204800+0 records out
13421772800 bytes (13 GB) copied, 9.22493 s, 1.5 GB/s
real 9.24
user 0.12
sys 4.54

[1]-  Done                    time -p taskset 2 dd if=/dev/zero of=/dev/null bs=64k count=200k
[2]+  Done                    time -p taskset 2 dd if=/dev/zero of=/dev/null bs=64k count=200k

Naturally, it takes twice as long – 9-10 seconds. I ran these commands on my linux laptop but the same results could be observed on any platform. By the way, notice that the “sys” number was still 4-5 seconds. This means that each process was actually executing on the CPU for 4-5 seconds even though it took 9-10 seconds of wall clock time.

The “time” command above provides a summary of how much real (wall-clock) time has elapsed and how much time the process was executing on the CPU in both user and system modes. This time is tracked and accounted for by the operating system kernel. On Linux, the time command uses the wait4() system call to retrieve the CPU accounting information. This can be verified with strace:

$ strace -t time -p dd if=/dev/zero of=/dev/null bs=64k count=200k
10:07:06 execve("/usr/bin/time", ["time", "-p", "dd", "if=/dev/zero", "of=/dev/null", \
        "bs=64k", "count=200k"], [/* 48 vars */]) = 0
10:07:06 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, \
        child_tidptr=0x7f8f841589d0) = 12851
10:07:06 rt_sigaction(SIGINT, {SIG_IGN, [INT], SA_RESTORER|SA_RESTART, 0x7f8f83be90e0}, \
        {SIG_DFL, [], 0}, 8) = 0
10:07:06 rt_sigaction(SIGQUIT, {SIG_IGN, [QUIT], SA_RESTORER|SA_RESTART, 0x7f8f83be90e0}, \
        {SIG_IGN, [], 0}, 8) = 0
10:07:06 wait4(-1, 

204800+0 records in
204800+0 records out
13421772800 bytes (13 GB) copied, 4.66168 s, 2.9 GB/s

[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, {ru_utime={0, 108000}, \
        ru_stime={4, 524000}, ...}) = 12851
10:07:11 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=12851, si_uid=1000, \
        si_status=0, si_utime=10, si_stime=454} ---
10:07:11 rt_sigaction(SIGINT, {SIG_DFL, [INT], SA_RESTORER|SA_RESTART, 0x7f8f83be90e0}, \
        {SIG_IGN, [INT], SA_RESTORER|SA_RESTART, 0x7f8f83be90e0}, 8) = 0
10:07:11 rt_sigaction(SIGQUIT, {SIG_IGN, [QUIT], SA_RESTORER|SA_RESTART, 0x7f8f83be90e0}, \
        {SIG_IGN, [QUIT], SA_RESTORER|SA_RESTART, 0x7f8f83be90e0}, 8) = 0
10:07:11 write(2, "r", 1)                        = 1
10:07:11 ...

Two notes about this. First, you’ll see from the timestamps that there’s a 5-second pause during the wait4() syscall, and the output from “dd” interrupts the strace output. Clearly this is when “dd” is running. Second, you’ll see that the wait4() call returns two values called ru_utime and ru_stime. The man page for wait4() clarifies that this return parameter is the rusage struct which is defined in the POSIX spec. The structure is declared in sys/resource.h and is the same structure returned by getrusage(); the times() call exposes the same underlying accounting in clock ticks. This is how the operating system kernel returns the timing information to “time” for display in the output.

CPU Utilization on Linux with Intel SMT (Hyper-Threading)

Since many people are familiar with Linux, it will be helpful to provide a side-by-side comparison of Linux/Intel/Hyper-Threading with AIX/Power7/SMT. This will also help clarify exactly what AIX is doing that’s so unusual.

For this comparison, we will switch to Amos Waterland’s useful stress utility for CPU load generation. This program is readily available for all major unix flavors and cleanly loads a CPU by spinning on the sqrt() function. I found a copy at perzl.org already ported and packaged for AIX on POWER.

For our comparison, we will load a single idle CPU for 100 seconds of wall-clock time. We know that the process will spin on the CPU for all 100 seconds, but let’s see how the operating system kernel reports it.

First, let’s verify that we have SMT (Hyper-Threading):

user@debian:~$ lscpu | egrep '(per|name)'
Thread(s) per core:    2
Core(s) per socket:    2
Model name:            Intel(R) Core(TM) i3-4005U CPU @ 1.70GHz

Next, let’s run our stress test (pinned to a single CPU) and see what the kernel reports for CPU usage:

user@debian:~$ time -p taskset 2 stress -c 1 -t 100
stress: info: [20875] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
stress: info: [20875] successful run completed in 100s
real 100.00
user 100.03
sys 0.00

Just what we would expect – the system is idle, and the process was on the CPU for all 100 seconds.

Now let’s use mpstat in a second window to look at the utilization of the pinned CPU (logical CPU 1):

user@debian:~$ mpstat -P 1 10 12
Linux 3.16.0-4-amd64 (debian) 	06/30/2016 	_x86_64_	(4 CPU)

01:58:07 AM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
01:58:17 AM    1    0.00    0.00    0.10    0.00    0.00    0.00    0.00    0.00    0.00   99.90
01:58:27 AM    1   17.44    0.00    0.10    0.00    0.00    0.00    0.00    0.00    0.00   82.45
01:58:37 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:58:47 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:58:57 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:59:07 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:59:17 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:59:27 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:59:37 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:59:47 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
01:59:57 AM    1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
02:00:07 AM    1   82.88    0.00    0.10    0.00    0.00    0.00    0.00    0.00    0.00   17.02
Average:       1   83.52    0.00    0.03    0.00    0.00    0.00    0.00    0.00    0.00   16.45

Again, no surprises here. We see that the CPU was running at 100% for the duration of our stress test.

Next let’s check the system-wide view. On Linux, most people use the top command when they want to see what’s happening system-wide. Top shows a list of processes, and for each process it estimates how much time is spent on the CPU. top uses the same kernel-tracked POSIX timing data that the time command returns, then divides by the wall-clock time to express that timing data as a percentage. If two processes are sharing one CPU, then each process will report 50% CPU utilization.

We will run top in a third window while the stress and mpstat programs are running to get the system-wide view:


Linux top (in Irix mode) reports that the “stress” program is using 100% of a single CPU and that 26.3% of my total CPU capacity is used by the system.

This is wrong. Did you spot it? If you have any linux servers with hyper-threading enabled then I really hope you understand the problem with this!

The problem is with the second statement – that 26% of my total CPU capacity is used. In reality, a “hardware thread” is nothing like a “real core”. (For more details about Hyper-Threading and CPU Capacity, Karl Arao might be one of the best sources of information.) Linux kernel developers represent each hardware thread as a logical CPU. As a result (and this is counter-intuitive) it’s very misleading to look at that “total CPU utilization” number as something related to total CPU capacity.

What does this mean for you? You must set your CPU monitoring thresholds on Linux/Hyper-Threading very low. You might consider setting the critical threshold at 70%. Personally, I like to keep utilization on transactional systems under 50%. If your hyper-threaded Linux system seems to have 70% CPU utilization, then in reality you may be almost out of CPU capacity!

Why is this important? This is exactly the problem that IBM’s AIX team aimed to solve with SMT on POWER. But there is a catch: the source data used by standard tools to calculate system-level CPU usage is the POSIX-defined “rusage” process accounting information. IBM tweaked the meaning of rusage to fix our system-level CPU reporting problem – and in doing so they introduced a new problem at the individual process level. Let’s take a look.

CPU Utilization on AIX with Power SMT

First, as we did on Linux, let’s verify that we have SMT (Hyper-Threading):

# prtconf|grep Processor
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat
Number Of Processors: 4
Processor Clock Speed: 3000 MHz
  Model Implementation: Multiple Processor, PCI bus
+ proc0                                                                         Processor
+ proc4                                                                         Processor
+ proc8                                                                         Processor
+ proc12                                                                        Processor

# lparstat -i|egrep '(Type|Capacity  )'
Type                                       : Shared-SMT-4
Entitled Capacity                          : 2.00
Minimum Capacity                           : 2.00
Maximum Capacity                           : 4.00

So you can see that we’re working with 2 to 4 POWER7 processors in SMT4 mode, which will appear as 8 to 16 logical processors.

Now let’s run the exact same stress test, again pinned to a single CPU.

# ps -o THREAD
    USER      PID     PPID       TID ST  CP PRI SC    WCHAN        F     TT BND COMMAND
jschneid 13238466 28704946         - A    0  60  1        -   240001  pts/0   - -ksh
jschneid  9044322 13238466         - A    3  61  1        -   200001  pts/0   - ps -o THREAD

# bindprocessor 13238466 4

# /usr/bin/time -p ./stress -c 1 -t 100
stress: info: [19398818] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
stress: info: [19398818] successful run completed in 100s

Real   100.00
User   65.01
System 0.00

Wait… where did my CPU time go?! (This is one of the first things Marcin noticed too!) The AIX kernel reported that my process ran for 100 seconds of wall-clock time, but it was only running on the CPU for 65 seconds of that time!

On unix flavors such as Linux, this means that the operating system was not trying to put the process on the CPU during the missing time. Maybe the process was waiting for a disk operation or a signal from another process. But our stress test only executes the sqrt() function – so we know that it was not waiting for anything.

When you know the process was not waiting, there is only one other reason the operating system wouldn’t put the process on the CPU. Look again at our very first demo in this article: two (or more) processes needed to share the CPU. And notice that the user+system time was lower than the wall-clock time, exactly like our output here on AIX!

So let’s take a look at the system-wide view with the “nmon” utility in a second window. (topas reports CPU usage Solaris-style while nmon reports Irix-style, so nmon is more suitable for this test. They are actually the same binary anyway, just invoked differently.)


Wait… this doesn’t seem right! Our “stress” process is the only process running on the system, and we know that it is just spinning CPU with the sqrt() call. The “nmon” tool seems to verify the output of the time command – that the process is only on the CPU for 65% of the time! Why isn’t AIX letting my process run on the CPU?!

Let’s take a look at the output of the mpstat command, which we are running in our third window:

# mpstat 10 12|egrep '(cpu|^  4)'
System configuration: lcpu=16 ent=2.0 mode=Uncapped 
cpu  min  maj  mpc  int   cs  ics   rq  mig lpa sysc us sy wa id   pc  %ec  lcs
  4    0    0    0    2    0    0    0    1 100    0  0 49  0 51 0.00  0.0    1
  4   19    0   40  143    7    7    1    1 100   19 100  0  0  0 0.61 30.7    7
  4    0    0    0  117    2    2    1    1 100    0 100  0  0  0 0.65 32.6    4
  4    0    0    0   99    1    1    1    1 100    0 100  0  0  0 0.65 32.6    3
  4    0    0    0  107    3    3    1    3 100    0 100  0  0  0 0.65 32.6    6
  4    0    0    0  145    5    5    1    3 100    0 100  0  0  0 0.65 32.6    9
  4    0    0    0  113    2    2    1    1 100    0 100  0  0  0 0.65 32.6    3
  4    0    0    0  115    1    1    1    1 100    0 100  0  0  0 0.65 32.6    7
  4    0    0    0  106    1    1    1    1 100    0 100  0  0  0 0.65 32.6    2
  4    0    0    0  113    1    1    1    1 100    0 100  0  0  0 0.65 32.6    5
  4    0    0   41  152    2    2    1    1 100    0 100  0  0  0 0.65 32.6    3
  4    5    0    0    6    0    0    0    1 100    4 100  0  0  0 0.04  1.8    1

Processor 4 is running at 100%. Right away you should realize something is wrong with how we are interpreting the nmon output – because our “stress” process is the only thing running on this processor. The mpstat utility is not using the kernel’s rusage process accounting data and it shows that our process is running on the CPU for the full time.

So… what in the world did IBM do? The answer – which Steve and Marcin published a few years ago – starts with the little mpstat column called “pc”. This stands for “physical consumption”. (It’s called “physc” in sar -P output and in topas/nmon.) This leads us to the heart of IBM’s solution to the system-wide CPU reporting problem.

IBM is thinking about everything in terms of capacity rather than time. The pc number is a fraction that scales down utilization numbers to reflect utilization of the core (physical CPU) rather than the hardware thread (logical CPU). And in doing this, they don’t just divide by four on an SMT-4 chip. The fraction is dynamically computed by the POWER processor hardware in real time and exposed through a new register called PURR. IBM did a lot of testing and then – starting with POWER7 – they built the intelligence into the POWER processor hardware.

In our example, we are using one SMT hardware thread at 100% in SMT-4 mode. The POWER processor reports through the PURR register that this represents 65% of the processor’s capacity, exposed to us through the pc scale-down factor of 0.65 in mpstat. My POWER7 processor claims it is only 65% busy when one of its four threads is running at 100%.

I also ran the test using two SMT hardware threads at 100% on the same processor in SMT-4 mode. The processor scaled both threads down to 45% so that when you add them together, the processor is claiming that it’s 90% busy – though nmon & topas will show each of the two processes running at only 45% of a CPU! When all four threads are being used at 100% in SMT-4 mode then of course the processor will scale all four processes down to 25% – and the processor will finally show that it is 100% busy.

On a side note, the %ec column is showing the physical consumption as a percentage of entitled capacity (2 processors). My supposed 65% utilization of a processor equates to 32.6% of my system-wide entitled capacity. Not coincidentally, topas shows the “stress” process running at 32.6% (like I said, Solaris-style).

So AIX is factoring in the PURR ratio when it populates the POSIX rusage process accounting structure. What is the benefit? Topas and other monitoring tools calculate system load by adding up the processor and/or process utilization numbers. By changing the meaning from time to capacity at such a low level, it helps us to very easily get an accurate view of total system utilization – taking into account the real life performance characteristics of SMT.

The big win for us is that on AIX, we can use our normal alerting thresholds and we have better visibility into how utilized our system is.

The Big Problem With AIX/POWER7/SMT CPU Accounting Changes

But there is also a big problem. Even if it’s not a formal standard, it has been a widely accepted convention for decades that the POSIX rusage process accounting numbers represent time. Even on AIX with POWER7/SMT, the “time” command baked into both ksh and bash still uses the old default output format:

# time ./stress -c 1 -t 66  
stress: info: [34537674] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
stress: info: [34537674] successful run completed in 66s

real    1m6.00s
user    0m41.14s
sys     0m0.00s

It’s obvious from the output here that everybody expects the rusage information to describe time. And the real problem is that many software packages use the rusage information based on this assumption. By changing how resource accounting works, IBM has essentially made itself incompatible with all of that code.

Of course, the specific software that’s most relevant to me is the Oracle database.

I did do a basic truss on a dedicated server process; truss logged a call to appgetrusage(), which I couldn’t identify, but I think it’s most likely calling getrusage() under the hood.

# truss -p 15728860
kread(0, 0x09001000A035EAC8, 1152921504606799456) (sleeping...)
kread(0, 0x09001000A035EAC8, 1152921504606799456) = 207
kwrite(6, "\n * * *   2 0 1 6 - 0 6".., 29)     = 29
lseek(6, 0, 1)                                  = 100316
kwrite(6, " C L O S E   # 4 5 7 3 8".., 59)     = 59
kwrite(6, "\n", 1)                              = 1
appgetrusage(0, 0x0FFFFFFFFFFF89C8)             = 0
kwrite(6, " = = = = = = = = = = = =".., 21)     = 21
kwrite(6, "\n", 1)                              = 1

For what it’s worth, I also checked the /usr/bin/time command on AIX – it is using the times() system call, which draws on the same kernel CPU accounting as getrusage().

# truss time sleep 5
execve("/usr/bin/time", 0x2FF22C48, 0x200130A8)  argc: 3
sbrk(0x00000000)                                = 0x20001C34
vmgetinfo(0x2FF21E10, 7, 16)                    = 0
kwaitpid(0x2FF22B70, -1, 4, 0x00000000, 0x00000000) (sleeping...)
kwaitpid(0x2FF22B70, -1, 4, 0x00000000, 0x00000000) = 26017858
times(0x2FF22B78)                               = 600548912
kwrite(2, "\n", 1)                              = 1
kopen("/usr/lib/nls/msg/en_US/time.cat", O_RDONLY) = 3

Problems For Oracle Databases

The fundamental problem for Oracle databases is that the database relies on getrusage() for nearly all of its CPU metrics. DB Time and DB CPU in the AWR report… V$SQLSTATS.CPU_TIME… extended SQL trace execution statistics… as far as I know, all of these rely on the assumption that the POSIX rusage data represents time – and none of them are aware of the physc scaling factor on AIX/POWER7/SMT.

To quickly give an example, here is what I saw in one extended SQL trace file:

FETCH #4578129832:c=13561,e=37669,p=2,cr=527,...

I can’t list all the WAIT lines from this trace file – but the CPU time reported here is significantly lower than the elapsed time minus all the wait time. Typically this would mean we need to check whether the CPU is oversaturated or whether there is a bug in Oracle’s code. But I suspect that Oracle is just passing along the rusage information it received from the AIX kernel, assuming that ru_utime and ru_stime both represent time.

If you use a profiler for analyzing trace files then you might see something like this:


The key is “unaccounted-for time within dbcalls” – this is what I’ve seen associated with the AIX/Power7/SMT change. It’s worth scrolling down to the next section of this profile too:


There was at least a little unaccounted-for time in every single one of the 81,000 dbcalls, and the FETCH calls accounted for 99% of the missing time. The FETCH calls also accounted for 99% of the CPU time.

What Can We Do Now

The problem with this unaccounted-for time on AIX/SMT is that you have far less visibility than usual into what it means. You can rest assured that CPU time will always be under-counted and that there will be a bunch of unaccounted-for time – but there’s no way to guess what the ratio might be (the reported CPU time could be anywhere from 25% to 75% of the real CPU time).

I’ve heard one person say that they always double the CPU numbers in AWR reports from AIX/SMT systems. It’s a total stab in the dark, but perhaps useful to think about. Also, I’m not sure whether someone has opened a bug with Oracle yet – but that should get done. If you’re an Oracle customer on AIX, then you should open a ticket now and let Oracle know that you need their code to be aware of SMT resource accounting changes on POWER7!

In the meantime we will just need to keep doing what we can! The most important point: when you see unaccounted-for time on AIX/SMT, remember that some or all of it is normal CPU time which was not correctly accounted for.

If you’re running Oracle on AIX, I’d love to hear your feedback. Please feel welcome to leave comments on this article and share your thoughts, additions and corrections!

Oracle Statement

Oracle Press Releases - Thu, 2016-06-30 18:55
Press Release
Oracle Statement

Redwood Shores, Calif.—Jun 30, 2016

"Five years ago, Oracle made a software development announcement which accurately reflected the future of the Itanium microprocessor. Two trials have now demonstrated clearly that the Itanium chip was nearing end of life, HP knew it, and was actively hiding that fact from its customers. Oracle never believed it had a contract to continue to port our software to Itanium indefinitely and we do not believe so today; nevertheless, Oracle has been providing all its latest software for the Itanium systems since the original ruling while HP and Intel stopped developing systems years ago. Further, it is very clear that any contractual obligations were reciprocal and HP breached its own obligations. Now that both trials have concluded, we intend to appeal both today's ruling and the prior ruling from Judge Kleinberg."
-Dorian Daley, Executive Vice President, General Counsel, and Secretary, Oracle

Contact Info
Deborah Hellinger
Oracle Corporate Communications
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Deborah Hellinger

  • 1.212.508.7935.

Oracle SOUNDEX Function with Examples

Complete IT Professional - Thu, 2016-06-30 16:46
The Oracle SOUNDEX function is one of the most interesting functions I’ve used in Oracle. Learn what it is and how it works in this article. Purpose of the Oracle SOUNDEX Function The Oracle SOUNDEX function allows you to check what a value sounds like. It returns a value that represents the phonetic value of […]
Categories: Development

Date Function Filter

Tom Kyte - Thu, 2016-06-30 12:46
I have table having date with two different formats in the same column. When I'm querying like <b>extract(year from to_timestamp(colmnname, 'DD/MM/YYYY hh24:mi'))</b> its giving me as<b> date format picture ends before converting entire input string<...
Categories: DBA Blogs

Query/Disk aggregated figures not matching execution plan in SQL trace for a single SQL statement.

Tom Kyte - Thu, 2016-06-30 12:46
Hi Tom, My question is, not sure I'm hitting some sort of bug though I can't find anything on mos, why the query-current-disk aggregated figures do not match the execution plan cr value multiplied by the number of iterations or fetch calls: SEL...
Categories: DBA Blogs

Regarding the Oracle SQL Tuning Report

Tom Kyte - Thu, 2016-06-30 12:46
Team, I recently had one issue with one of my production database where we had an application upgrade completed. Once the upgrade completed , one of the select query stared performing very slow. I have tried debugging using the sqltrpt.sql , but thi...
Categories: DBA Blogs


Tom Kyte - Thu, 2016-06-30 12:46
how to shadow user's tables' structure for other users. I have two users (schemas) - USER1 and USER2. USER1 has roles CONNECT and RESOURCE, but if i run 'DESC USER2.TABLE_NAME' i can show user2's table's structure. How can I forbid to show table'...
Categories: DBA Blogs

Avoid triggers

Tom Kyte - Thu, 2016-06-30 12:46
Hi Tom I have table A which goes through various DML and owned by different team, we want to capture a similar data with all the DML that's happening. we want to avoid triggers as this is owned by different team , is there a solution to capture wit...
Categories: DBA Blogs

querying data from backup file

Tom Kyte - Thu, 2016-06-30 12:46
Hi Tom / Connor, As a regular DBA Activity, everyday Production DBs are scheduled for backup and the backup files will be stored in archive location for a retention period depending on business requirement. Now my query is, I just want to pic...
Categories: DBA Blogs

how to store output of dynamic sql into a cursor.

Tom Kyte - Thu, 2016-06-30 12:46
Hi tom, I want to know how to declare a cursor which contains the data obtained by dynamically executing a query,where the query was taken from another cursor. CURSOR c1 IS SELECT QRY from QUERY_MASTER WHERE Q_ID IN (1,2); where QRY(an SQ...
Categories: DBA Blogs

How to export xml data ( clob column ) to a csv file

Tom Kyte - Thu, 2016-06-30 12:46
Hi, Let say, I have a table employee ( id number, name varchar2, notes clob ). I use dbms_util to write the data to a file as csv. The id (number) and name( Varchar2) columns are writing to csv file as expected. But when it comes to notes (clo...
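One pitfall worth noting with this kind of export: CLOB text often contains commas and line breaks, so a CSV writer must quote such fields rather than concatenate them naively. A minimal Python sketch of the quoting behaviour (the rows and column names are hypothetical, not from an actual export):

```python
import csv
import io

# Hypothetical rows as they might come back from the employee table:
# (id NUMBER, name VARCHAR2, notes CLOB) -- notes may contain commas and newlines.
rows = [
    (1, "Alice", "<note>line one,\nline two</note>"),
    (2, "Bob", "plain text"),
]

buf = io.StringIO()
# QUOTE_MINIMAL quotes only the fields that contain the delimiter,
# the quote character, or a line break -- exactly the CLOB problem case.
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
writer.writerow(["ID", "NAME", "NOTES"])
for r in rows:
    writer.writerow(r)

print(buf.getvalue())
```

When writing the file from PL/SQL instead, the equivalent is to enclose the CLOB value in double quotes and double any embedded quote characters, per the usual CSV convention.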
Categories: DBA Blogs

Title on Report.

Tom Kyte - Thu, 2016-06-30 12:46
Hello All- May be another set of eyes!! How do I get the title to show on the spool output report file? Below is my report lay out. Thanks! <code> set heading on SPOOL U:\...\tt.txt ttitle left " Test report " skip 1 COLUMN...
Categories: DBA Blogs

convert string data to columns

Tom Kyte - Thu, 2016-06-30 12:46
Hi, I need some help converting string data into columns. I have the data in one column like: #1#5#3#2#5#7 #5#3#5#4#3#2 #1#7#5#3#6 #3#5#2#1#5 #6#5#4#7#1#2 and I need each numeric value in its own column, like this...
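For illustration outside the database, the requested reshaping of each '#'-delimited string into positional columns can be sketched as follows (sample values taken from the question):

```python
rows = ["#1#5#3#2#5#7", "#5#3#5#4#3#2", "#1#7#5#3#6"]

# Splitting on '#' produces an empty leading element because each
# string starts with the delimiter; drop it with [1:].
table = [r.split("#")[1:] for r in rows]

for cols in table:
    print("\t".join(cols))
```

In Oracle SQL itself, the usual building block for this kind of split is REGEXP_SUBSTR with an occurrence argument, one call per target column.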
Categories: DBA Blogs

Oracle Cloud Improves Financial Process Visibility and Cuts Upgrade Costs for University of Kansas

Oracle Press Releases - Thu, 2016-06-30 11:05
Press Release
Oracle Cloud Improves Financial Process Visibility and Cuts Upgrade Costs for University of Kansas

Redwood Shores, Calif.—Jun 30, 2016

The University of Kansas (KU) has paired its top-notch educational programs with the technology to advance them. With Oracle Cloud, KU will drive strategic student educational initiatives, focus on operational efficiencies and reduce costs associated with lengthy upgrade cycles.

KU selected Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Business Intelligence Cloud, and Oracle Planning and Budgeting Cloud to integrate with its existing Oracle PeopleSoft Human Capital Management (HCM) and PeopleSoft Campus Solutions applications.

“We have a strong relationship with Oracle, so they were at the top of the list when we were looking to upgrade and transform our current ERP solution,” said Diane Goddard, KU’s vice provost for administration and finance. “Oracle Cloud really stood out to us because it is on the cutting edge of innovation, offers us increased functionality, and we can easily integrate with our existing PeopleSoft HCM and PeopleSoft Campus Solutions applications.” 

With the Oracle Cloud solutions in place, KU will have a complete, modern system that delivers scalability, reliability, and rapid time to value. The university will now have a comprehensive view of its finances across the campus with end-to-end capabilities—from planning and budgeting to reporting—enabling the university to gain insight into critical information in real-time with an integrated analytics platform for improved decision-making. KU will also be able to improve the accuracy, efficiency, and consistency of its planning and budgeting processes.

“Oracle is successfully working with major universities, such as KU, to transform their current on-premise platforms to more innovative, flexible, and unified cloud platforms,” said Mark Armstrong, vice president, product development, Oracle Higher Education. “With Oracle Cloud, KU is upgrading and integrating with its current systems to create, enrich, and extend its financial processes with less effort and cost.”

Moving to Oracle Cloud positions KU to take advantage of the most advanced functionality and features while freeing IT staff from the responsibility of routine upgrades and maintenance.

“Outsourcing day-to-day system responsibilities to the professionals at Oracle allows KU Information Technology staff to focus their time and energy on strategic initiatives that drive institutional excellence,” said Eric Freeze, KU IT deputy technology officer.

Oracle Cloud delivers the industry’s broadest suite of enterprise-grade cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).

Contact Info
Katie Barron
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1-202-904-1138

What is SID in Oracle ?

Laurent Schneider - Thu, 2016-06-30 10:56

In the doc, you’ll find that it’s the Oracle system identifier.

Okay, let’s imagine the following






SQL> select * from GLOBAL_NAME;


    (SID_DESC =

What is my SID? Actually there is more than one correct answer.

In the environment, Oracle SID is ORA001. This matches SID_NAME in listener.ora. It does not have to match database name, unique name, global name or instance_name.

$ lsnrctl services
Services Summary...
Service "GLO001.EXAMPLE.COM" has 1 instance(s).
  Instance "ORA001", status UNKNOWN, has 1 handler(s) for this service...
      "DEDICATED" established:7 refused:0

As the instance is not running, I have only my listener.ora static connections.


$ sqlplus "sys/***@
  )" as sysdba
Connected to an idle instance.
$ sqlplus "sys/***@
  )" as sysdba
Connected to an idle instance.

Let’s start

SQL> startup

and check my services

$ lsnrctl services
Services Summary...
Service "SVC001.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
Service "SVC002.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
Service "UNI001.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
Service "GLO001.EXAMPLE.COM" has 1 instance(s).
  Instance "ORA001", status UNKNOWN, has 1 handler(s) for this service...
      "DEDICATED" established:13 refused:0
The command completed successfully

I now have 4 service names:

  1. The global name in listener.ora
  2. The unique name in init.ora
  3. Both service names in init.ora

And 2 SIDs:

  1. The SID in listener.ora
  2. The instance name in init.ora

While we often have sid = oracle_sid = service_name = service_names = global_name = instance_name = db_name = db_unique_name, switching from SID to SERVICE_NAME could help you identify legacy applications.

If you read the doc carefully, you may have noticed that SID is no longer documented as a valid clause of CONNECT_DATA in 11g and 12c.

In 10gR2:
Use the parameter SID to identify the Oracle8 database instance by its Oracle System Identifier (SID). If the database is Oracle9i or Oracle8, use the SERVICE_NAME parameter rather than the SID parameter.

This is probably a documentation bug; I would rephrase it as "If the database is Oracle9i or Oracle8i or later".

In 11g and 12c, the comment disappeared. Oracle 8i was released last century, but SID didn’t completely disappear from tnsnames. Yet.
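To make the difference concrete, here is a sketch of two connect descriptors (host, port, and alias names are made up for illustration): the first uses SERVICE_NAME, matching one of the service names listed above; the second uses the legacy SID clause, matching the SID_NAME in listener.ora:

SVC001 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SVC001.EXAMPLE.COM))
  )

LEGACY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA = (SID = ORA001))
  )

Entries still using the SID form are good candidates to flag as legacy applications.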

Integrating Telstra SMS API with the Apigee Edge Service Broker for Pivotal Cloud Foundry (PCF)

Pas Apicella - Thu, 2016-06-30 07:35
Apigee and Pivotal partnered to provide comprehensive API management capabilities that expedite the scalable delivery of apps on the powerful Pivotal Cloud Foundry platform. Apigee Edge is available for rapid deployment as a partner service in the Pivotal Network.

The following link talks about this service in detail


In this blog post, we walk through how to use this service on Pivotal Cloud Foundry 1.7 to expose the Telstra SMS API.

1. First we have to deploy our application, which provides access to the public SMS API from Telstra. This is deployed to Pivotal Cloud Foundry (PCF). The GitHub project is as follows.


2. Once deployed, the application provides two REST endpoints and also includes a Swagger UI.

Note: This Telstra API only works for Australian-based mobile numbers and you will need a https://dev.telstra.com/ account to invoke the free Telstra SMS Service. The API is explained in detail at this link https://dev.telstra.com/content/sms-api-0

3. Now at this point we will need to add the "Apigee Edge Service Broker for PCF" tile to Pivotal Ops Manager. You can download it from the URL below and follow the instructions to install the tile onto Pivotal Ops Manager.


4. Once installed it will be shown as a tile on Pivotal Ops Manager as per the image below

5. To ensure it's part of our marketplace services we simply log into our PCF instance using the command line as shown below

OR from the Pivotal Apps Manager

6. Use the create-service command to create an instance of the Apigee Edge service broker as shown below

papicella@papicella:~/pivotal/services/apigee$ cf create-service apigee-edge org api-connectors-service -c api-connectors.json
Creating service instance api-connectors-service in org system / space pas as papicella@pivotal.io...


{"org":"papicella", "env":"prod", "user":"papicella@pivotal.io", "pass":"yyyyyyy", "host": "apigee.net", "hostpattern": "${apigeeOrganization}-${apigeeEnvironment}.${proxyHost}"}

The JSON specifies the Apigee Edge details needed to route traffic:

  • org -- Organization of the Apigee Edge proxy through which requests should be routed. You'll find this value at the top of the Edge UI while looking at the Dashboard.
  • env -- Environment of the Apigee Edge proxy through which requests should be routed. You'll find this value at the top of the Edge UI while looking at the Dashboard.
  • user -- Username of an Edge user who has access to create proxies. This is the username you use to log into the Edge UI.
  • pass -- Password of an Edge user who has access to create proxies. The password you use to log into the Edge UI.
  • host -- Edge host name to which requests to your API proxies can be sent.
  • hostpattern -- Pattern for generating the API proxy URL. For example, #{apigeeOrganization}-#{apigeeEnvironment}.#{proxyHost} for cloud accounts.
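How the hostpattern is resolved can be sketched with the hypothetical values from the JSON above; the placeholders expand like this:

```python
from string import Template

# Hypothetical values matching the example JSON shown above.
params = {
    "apigeeOrganization": "papicella",
    "apigeeEnvironment": "prod",
    "proxyHost": "apigee.net",
}

# The JSON above writes the pattern with ${...} placeholders.
pattern = Template("${apigeeOrganization}-${apigeeEnvironment}.${proxyHost}")
proxy_host = pattern.substitute(params)
print(proxy_host)  # papicella-prod.apigee.net
```

So, under these example values, traffic bound for the app would be routed through a proxy host such as papicella-prod.apigee.net.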
7. Use the bind-route-service command to create an Edge API proxy and bind your Cloud Foundry application to the proxy. This tells the Go router to redirect requests to the Apigee Edge proxy before sending them to the Cloud Foundry application.

papicella@papicella:~/pivotal/services/apigee$ cf bind-route-service pcfdemo.net api-connectors-service --hostname apples-springboot-telstrasms
Binding route apples-springboot-telstrasms.pcfdemo.net to service instance api-connectors-service in org system / space pas as papicella@pivotal.io...

Note: The hostname is the name of your REST based application

8. With the service created it will then exist within the space as per the image below

9. Click on the service as shown below

10. Click on Manage as shown below

11. In the Apigee management console, under APIs > API proxies, locate the name of the proxy you just created as shown below

12. Click the PCF proxy's name to view its overview page.

13. Click the Trace tab, the click the Start Trace Session button.

14. Back at the command line, access the REST based application endpoint for the Telstra SMS Service as shown below.

papicella@papicella:~/pivotal/services/apigee$ curl "http://apples-springboot-telstrasms.pcfdemo.net/telstra/sms?to=0411151350&appkey=apples-key&appsecret=apples-password"

15. Return to the Apigee management console to verify the trace output and a successful HTTP 200 call. The new proxy is just a pass-through. But it's ready for you or someone on your team to add policies to define security, traffic management, and more.

More Information

Pivotal Cloud Foundry and Apigee - http://apigee.com/about/solutions/pivotal-cloud-foundry-apigee
Pivotal Cloud Foundry - http://pivotal.io/platform

Categories: Fusion Middleware

Oracle Midlands : Event #16

Tim Hall - Thu, 2016-06-30 06:15

Don’t forget Oracle Midlands Event #16 next week!


It’ll be interesting to see what this new kid on the block Jonathan Lewis has to say for himself!

Please show your support and come along. It’s free thanks to the sponsorship by RedStackTech.



Oracle Midlands : Event #16 was first posted on June 30, 2016 at 12:15 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

GoldenGate 12.2 – Installation of the monitoring agent

Yann Neuhaus - Thu, 2016-06-30 02:50

As described in my last post, GoldenGate Studio requires the monitor agent on each GoldenGate instance. The goal of this agent is to allow GoldenGate Studio to interact with GoldenGate, for example, to deploy a new solution.

So let’s start with the installation of the agent.


The first step is to download the monitor agent (Oracle GoldenGate Monitor). It is available here.



The second step is to install the product.

After you have transferred the installer to the server, decompress it.

oracle@srvoracle:/tmp/ [DB1] cd monitor/
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 420092
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.
oracle@srvoracle:/tmp/monitor/ [DB1] unzip fmw_12. 
Archive:  fmw_12.
  inflating: fmw_12.  
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 840392
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.
-r-xr-xr-x 1 oracle oinstall 430387063 Oct 14 08:33 fmw_12.

To launch the installer, it is mandatory to have at least Java version 8 (1.8). If it is not available, it can be downloaded here.

To start the installation, launch the fmw_12.

oracle@srvoracle:/home/oracle/Downloads/jdk1.8.0_73/bin/ [DB1] ./java -jar /tmp/monitor/fmw_12. 
Launcher log file is /tmp/OraInstall2016-02-29_01-39-26PM/launcher2016-02-29_01-39-26PM.log.
Extracting files.......
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2494.801 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 28817 MB    Passed

Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-02-29_01-39-26PM
Log: /tmp/OraInstall2016-02-29_01-39-26PM/install2016-02-29_01-39-26PM.log
Logs successfully copied to /u01/app/oraInventory/logs.

The OUI (Oracle Universal Installer) will start. On the first screen just click on the next button.


On the next screen, we can choose the option for the updates. In my case, I leave the option to skip the auto updates.


Fill in the desired software location for the GoldenGate agent.


Select the option to install only the monitor agent.


The OUI will test the system configuration and the java version.


The OUI provides a summary of the configuration. Click the Next button if everything is OK.


The installation is done.


At the end OUI provides a summary of the installation with the location of the logs.


Now GoldenGate agent is installed.

Configuration

Create instance

To create the instance of the agent, go where the binaries have been installed. In this example, it is /u01/app/oracle/product/jagent/oggmon/ogg_agent.

After that, launch the script createMonitorAgentInstance.sh.

oracle@srvoracle:/u01/app/oracle/product/jagent/oggmon/ogg_agent/ [DB1] ./createMonitorAgentInstance.sh 
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.0/gg_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20160229140552.sh) : 2
Sucessfully created OGG Agent instance.
Create password

The agent needs a password to work. All the passwords will be stored in a wallet. For this go to the ORACLE_HOME_AGENT/bin. In my case, /u01/app/oracle/product/

Launch the script pw_agent_util.sh.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
Feb 29, 2016 2:18:55 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Feb 29, 2016 2:18:56 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.
Enable monitoring

To enable the monitoring, launch the ggsci command and edit the GLOBALS parameter file.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ggi 

GGSCI (srvoracle) 2> edit params ./GLOBALS

GGSCI (srvoracle) 4> view params ./GLOBALS

ENABLEMONITORING
GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint

Now restart the ggsci command and the jagent appears when doing an “info all”.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ggi

GGSCI (srvoracle) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           

To finalize, delete the datastore and recreate one.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ggi

GGSCI (srvoracle) 1> delete datastore
Are you sure you want to delete the datastore? yes

2016-02-29 14:33:30  INFO    OGG-06492  Datastore deleted.

GGSCI (srvoracle) 2> create datastore
Profile 'Trace' added.

2016-02-29 14:33:55  INFO    OGG-06489  Datastore created.

Now you can start the manager process and the jagent.


Now that the agents have been installed on each instance, all the prerequisites for GoldenGate Studio are met.

In the next blog, I will show you how to use GoldenGate Studio to deploy a solution.


Cet article GoldenGate 12.2 – Installation of the monitoring agent est apparu en premier sur Blog dbi services.

SQL Server 2016: Always Encrypted – part 2

Yann Neuhaus - Thu, 2016-06-30 02:39

In my last blog post about SQL Server 2016 Always Encrypted, here, I showed how to use this new functionality, but also that you have to separate the different execution contexts across an application server, a database server, and a security server, so that the certificate is not available to all users, which would break the segregation.

Let’s see how to build those environments.
In my security server, named SQL2016-2, I first create a self-signed certificate with PowerShell after importing the PKI module. The certificate is stored in the local machine with:

  • DnsName, DNS name of the certificate: Demo_AlwaysEncrypted.cert
  • CertStoreLocation, specifies the certificate store in which to store the new certificate: Cert:\LocalMachine\MY
  • Provider, specifies the name of the KSP or CSP that this cmdlet uses to create the certificate: Microsoft Strong Cryptographic Provider
  • KeyAlgorithm, specifies the name of the algorithm that creates the asymmetric keys that are associated with the new certificate: RSA
  • KeyLength, specifies the length, in bits, of the key that is associated with the new certificate: 2048
  • HashAlgorithm, specifies the name of the hash algorithm to use to sign the new certificate: sha256

PowerShell script:

Import-Module PKI
New-SelfSignedCertificate -DnsName "Demo_AlwaysEncrypted.cert" -CertStoreLocation Cert:\LocalMachine\MY -Provider "Microsoft Strong Cryptographic Provider" -KeyAlgorithm RSA -KeyLength 2048 -HashAlgorithm sha256


I now have my new certificate.
I open my SQL Server instance and, in my SecurityDB database, navigate to Security > Always Encrypted Keys, then right-click on Column Master Keys and select "New Column Master Key…".
I create a new Column Master Key with the certificate I have just created on my local machine, and I name it CMK2:


The code to create my Master Key is the following; you can get it by clicking the Script button in the above screen:

CREATE COLUMN MASTER KEY [CMK2]
WITH
(
       KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
       KEY_PATH = N'LocalMachine/My/A3DF88FA53ED5347ABCF709D7A15621CDC358456'
)

I now create a Column Encryption Key named CEK2 with the Column Master Key created before, CMK2:


The code to create my Encryption Key is the following:

CREATE COLUMN ENCRYPTION KEY [CEK2]
WITH VALUES
(
       COLUMN_MASTER_KEY = [CMK2],
       ALGORITHM = 'RSA_OAEP',
       ENCRYPTED_VALUE = 0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F00610033006400660038003800660061003500330065006400350033003400370061006200630066003700300039006400370061003100350036003200310063006400630033003500380034003500360098721948A1283B0E5375B4E7F0A5F2EED05EDB0A87814BF66EABCBE2B4E75B5A759CF48D7D87D6A2C44A6DB2B587992E412EAF20D3FE826844FE76B1BC764E769C911486150E08EB972DBF406FF51738EFAA7F831A27A540F948CBC673273B1205A2CEF33AE77A0AF01DB3589A4A746151DC8C699DAB7AD1B99910A598EA7258670F1D5C0600704A9BB440F86761F41E8C72E52FFF6A7348B65F62E60D90B60CF1FC00B1B473F40E82B140219A317760CBC8591E07215AD114FAF80F08361C607823DB44037C455514B3789661560613638D058D7965846962B9FD3D9ECF0AC62325043AB40D892683744DC994E868C5F0E3FD14277D8941977426AC59DEA7EEF1394641DB314C2DCB083D3D5A12B97A131986D9834CBF2837908D3EF54C9104D92371C8A9EDC9B22555D8D615440D21B03393303FD4A9C0047282091178F31323A6EF4313576470349F32F106D38ABC3B0A1C4F47E870EA43F7B90D85D4F6C6686368E4E5EFC416EA08241F75F2D7B4F4B700114CA545AA4310829B2691A6D2C553F8F8B7E3983FC99F4E0A28BD79C84F50B7F6ABB17419DEAE06ACA694D68B00B48DBC0D66B46847A389C8A587F138CEF7791845A999EA45849D2FAA45BDA88FE134C70E592937981A7B2A18397CC615BB435A1EDDD66CF58119D89B24190744F4326718CBCDC1182AD20C8023EECB3994569844270DC90020B20D70B5CCC5
)

I will now transfer those two key definitions to my SQL Server VM, named SQL2016-1, in order to encrypt my table columns with them. To do that, I will execute the scripts I generated before on my security VM against my SQL Server instance. I have created a second database, named TestDB2, where I will execute those scripts:


After executing the scripts, I have my Column Master Key and my Column Encryption Key under the Security\Always Encrypted Keys path of my TestDB2 database.

I will now create the same table that I have on my TestDB database (see my previous blog post) in my new database TestDB2. For that I will execute the following script with the same encrypted columns (CustAccountNumber, CustAccountBalance, CustCreditCardNumber) but with my new Column Encryption Key CEK2:


My table is created with the specified columns encrypted.
I will now export my certificate from my security server to my application server with PowerShell.
First, I retrieve my certificate from the store and export it to a file with a password:

PowerShell script:

# I retrieve the Certificate from the store
$MyCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "Demo_AlwaysEncrypted.cert"
# I create a password to protect my certificate
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
# I export my certificate as PFX Certificate Archive
Export-PfxCertificate -Cert $MyCert -FilePath C:\Temp\MyCert.pfx -Password $pwd


I’m able now to copy my certificate in my application server and import it:

PowerShell script:

# I import the certificate from my file
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
Import-PfxCertificate -CertStoreLocation Cert:\LocalMachine\My -FilePath C:\Temp\MyCert.pfx -Password $pwd


My certificate is now in the store of my application server, and I am now able to insert and read plain text data from the application server. I will use a PowerShell script to insert data. I will not explain the whole script, but it is good to know that:

  • The connection string needs to enable Always Encrypted for the client application (here, PowerShell) with this parameter: Column Encryption Setting=enabled
  • The application passes plain text data in SqlParameter objects to the driver. The driver will encrypt the values and send the encrypted values to the SQL Server VM.

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlcmd = New-Object System.Data.SqlClient.SqlCommand
$sqlcmd.Connection = $sqlConn
$sqlcmd.CommandText = "INSERT INTO dbo.Customer (CustNb, CustName, CustSurname, CustAccountNumber, CustAccountBalance, CustCreditCardNumber, CustCountry) VALUES (@CustNb, @CustName, @CustSurname, @CustAccountNumber, @CustAccountBalance, @CustCreditCardNumber, @CustCountry)"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustNb",[Data.SQLDBType]::uniqueidentifier)))
$sqlcmd.Parameters["@CustNb"].Value = [GUID]::NewGuid()
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustName",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustName"].Value = "Durand"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustSurname",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustSurname"].Value = "Laurent"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustAccountNumber"].Value = "1234-234589"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountBalance",[Data.SQLDBType]::Int)))
$sqlcmd.Parameters["@CustAccountBalance"].Value = 1256890
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCreditCardNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustCreditCardNumber"].Value = "1234-4245-4563-4564"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCountry",[Data.SQLDBType]::VarChar,200)))
$sqlcmd.Parameters["@CustCountry"].Value = "France"


I have now inserted a row in my Customer table. If I try to do the same insert without the parameter Column Encryption Setting=enabled, I receive an error message:


Now, if I want to read my table from my application server I can execute a simple select * with this PowerShell script and my parameter Column Encryption Setting=enabled:

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlcmd = $sqlConn.CreateCommand()
$query = "SELECT * FROM dbo.customer"
$sqlcmd.CommandText = $query
$adp = New-Object System.Data.SqlClient.SqlDataAdapter $sqlcmd
$data = New-Object System.Data.DataSet
$adp.Fill($data) | Out-Null


I see plain text data, but without the parameter:


I am able to read the table, but the encrypted columns appear as BLOBs and are not readable.
If I go back to my SQL Server VM, even with SYSADMIN rights and with the connection string parameter Column Encryption Setting=enabled, I am not allowed to query the table:


The error indicates that, on my SQL Server VM, I don’t have access to the certificate that serves as the column master key, so I am not able to decrypt the column encryption key and thus the encrypted columns.
If I don’t use the parameter Column Encryption Setting=enabled during my connection via SQL Server Management Studio and try to query my table:


I can see my table row(s), but the encrypted columns appear as BLOBs and so are protected from unintended readers, which is exactly the goal of this new feature: preventing SQL Server DBAs or users with certain rights from viewing critical data.
In order to validate this security, the certificate must not be saved in the certificate store of the SQL Server VM.
That’s all Folks! ;-)


Cet article SQL Server 2016: Always Encrypted – part 2 est apparu en premier sur Blog dbi services.


Subscribe to Oracle FAQ aggregator