Yann Neuhaus

All blog entries from http://www.dbi-services.com/

dbi services wishes you a Merry Christmas with this SQL Server script

Thu, 2014-12-18 07:57

The end of the year is approaching and soon it will be time to celebrate Christmas with your family and friends. At dbi services, we wish you a Merry Christmas via SQL Server with the following script, to execute in SQL Server Management Studio for example (supported only by SQL Server 2012 and SQL Server 2014). You will find a surprise in the result pane :-)

 

SET NOCOUNT ON;

IF EXISTS(SELECT * FROM tempdb.sys.objects WHERE name LIKE N'#SURPRISE%')
    DROP TABLE #SURPRISE;
GO

CREATE TABLE #SURPRISE ( col1 CHAR(290) );
GO

------------------------------

INSERT #SURPRISE VALUES (REPLICATE(CHAR(111),16));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1), 50, SPACE(1) + CHAR(165));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(77) + CHAR(101) + CHAR(114) + CHAR(114) + CHAR(121));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(246));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(149));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(67) + CHAR(104) + CHAR(114) + CHAR(105) + CHAR(115) + CHAR(116) + CHAR(109) + CHAR(97) + CHAR(115));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(95));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(186));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(47));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(47));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(92));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(182));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(124));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(164));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(164));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(2) + CHAR(124));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(91));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(93));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + REPLICATE(CHAR(42), 19));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(3) + CHAR(100) + CHAR(98) + CHAR(105));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(115) + CHAR(101) + CHAR(114) + CHAR(118) + CHAR(105) + CHAR(99) + CHAR(101) + CHAR(115));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(5) + CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46));
UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, REPLICATE(CHAR(111), 17));

------------------------------

DECLARE @SQL VARCHAR(100);

SELECT @SQL = 'DBCC TRACEON(3604); DBCC PAGE (''tempdb'', 1, ' + CAST(allocated_page_page_id AS VARCHAR(20)) + ', 1)'
FROM sys.dm_db_database_page_allocations(2, OBJECT_ID('tempdb..#SURPRISE'), 0, NULL, DEFAULT)
WHERE is_iam_page = 0 AND is_allocated = 1;

EXEC(@SQL);

 

Merry Christmas to all of you!

How to store SQL Server data files in Azure Storage

Wed, 2014-12-17 22:02

A new functionality, introduced with SQL Server 2014, enables you to store the data files of a SQL Server database in Windows Azure Storage. In this post, I will show how it works.

 

General.png

 

Accessing Azure Storage

The Azure Storage account is named “dbiservices” and contains a container named “datafiles”. This container does not contain any blobs yet.

If you are not familiar with the Windows Azure Storage terms, you should read this Introduction to Microsoft Azure Storage.

In my example, I want to share access to my “datafiles” container without having to expose my account key. I therefore need to generate a Shared Access Signature.

 

Shared Access Signature Generation

For this part, I will use a third-party tool called Azure Storage Explorer.

As soon as you have installed and launched this software, you must register your Storage Account:

 

Register-Azure-Storage-Account.png

 

You must enter your Storage account name and your Storage account key. The key has been voluntarily erased in this example.

As a best practice, I advise you to enter your Secondary Access Key rather than your Primary Access Key. Indeed, the Secondary Access Key is commonly used as a temporary key and can be regenerated if necessary.

To generate my Shared Access Signature, I have to edit the Security of my container:

 

Edit-Security.png

 

I select the permissions and the duration linked to my Shared Access Signature:

 

Generate-Signature.png

 

I generate a Shared Access Signature valid for one week with all permissions. The generated Shared Access Signature is a URI related to the container. I voluntarily deleted part of the URI.

You need to copy the URI from “sv=” to the end.

 

Credential Creation

I need to create a new Credential in SQL Server Management Studio:

 

New-Credential.png

Create-Credential.png

 

The name of my Credential is the URI location of my container, and the Password is the Shared Access Signature previously created.
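
The screenshots show the GUI steps; in T-SQL the equivalent statement looks roughly like the sketch below. The container URL assumes the standard blob endpoint for my “dbiservices” account and the “datafiles” container, and the SECRET value is a placeholder for the Shared Access Signature (without the leading “?”):

-- Hedged sketch: credential named after the container URL, secured by the SAS
CREATE CREDENTIAL [https://dbiservices.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2014-02-14&sr=c&sig=...';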

 

Creating a database with Data Files in Azure Storage

I will perform two examples: first I will create a new database directly in Azure Storage, then I will migrate an existing database to Azure Storage.

 

Hybrid database creation

I execute a script to create a database with its data files in Azure Storage:

 

Create-Database-hybrid.png
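
The script itself is only shown as a screenshot; as a minimal sketch of what such a script can look like (the database and file names are my own assumptions, the container URL follows the credential created earlier):

CREATE DATABASE hybriddb
ON (NAME = hybriddb_data,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_data.mdf')
LOG ON (NAME = hybriddb_log,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_log.ldf');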

 

If we refresh the “datafiles” container in Azure Storage explorer, we can see the datafiles previously created:

 

Explore-Datafiles.png


On-premise database creation

Now, I execute a script to create an on-premise database:

 

Create-database-onpremise.png

 

Then, I take the database offline:

 

Take-Offline-Database.png
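
Behind the SSMS action shown above, this step is a single statement (assuming the database is named onpremisedb, as in the next steps):

ALTER DATABASE onpremisedb SET OFFLINE;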

 

I upload the data files to Azure Storage using the Azure Storage Explorer tool:

 

Upload-Datafiles-to-Azure.png

 

Then, we need to alter the onpremisedb database to reference the data files moved to Azure Storage:

 

Alter-Database.png
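
Again, the statement is only shown as a screenshot; a sketch of the corresponding T-SQL, assuming the logical file names onpremisedb_data and onpremisedb_log, would be:

ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_data, FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_data.mdf');

ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_log, FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_log.ldf');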

 

And now, I bring the database online:

 

Bring-Online-Database.png

 

But the following error occurred:

 

Bring-Online-Database-Error.png

 

To understand the origin of the problem, let’s see the datafiles in the Azure Storage Explorer:

 

Blob-Type.png

 

Conclusion

This new feature offers some advantages such as high availability and easy migration.

But on the other hand, you cannot use it on an existing database, which is a serious drawback.

Furthermore, I do not believe that this feature will be used much with on-premise SQL Server databases, due to the latency. But I think it can be used with a virtual machine running in Azure.

Journées SQL Server 2014: session videos available

Tue, 2014-12-16 02:18
The session videos from the Journées SQL Server 2014 are finally online:

> Infrastructure et AlwaysOn
> Industrialisation des audits (with Sarah Bessard)

If you have any questions, do not hesitate to contact me by email or through the comments on this blog.

Happy viewing!

Oracle 12c: Can we disable logging for DML?

Mon, 2014-12-15 09:38

If we don't mind losing our changes, can we disable logging for DML? This is a question I've heard a lot. OK, you don't need to recover your changes, but Oracle may want to recover the consistency of its datafiles anyway. And that's why datafile blocks changed through the buffer cache always generate redo.

But yes, in 12c you can do DML and generate only minimal redo. All DML: even updates and deletes. And this post is not about underscore parameters that allow corruption.

SQL Server 2014: release of the book "Développer et administrer pour la performance"

Sun, 2014-12-14 23:50

A quick post, as the end of the year is fast approaching, to announce the upcoming release (end of December 2014) of the book SQL Server 2014: Développer et administrer pour la performance, in French, to which I had the immense pleasure of contributing together with Frédéric Brouard (alias SQLPro - MVP SQL Server), Nicolas Souquet (alias Elsuket - MVP SQL Server) and Christian Soutou.

This book is intended for developers as well as administrators, whether beginners or experienced, who care about performance, and covers a complete set of topics (administration, development, security and even high availability).

Happy reading!

 

 

 

Oracle lateral inline view, cursor expression and 12c implicit statement result

Sun, 2014-12-14 09:50

I'll present here 3 ways to run a query for each result of another query. Let's take an example: get all execution plans (select from dbms_xplan.display_cursor) for each of my queries (identified from v$sql). The 90's way was to run a first query which generates the second queries into a spool file, and then execute that file. Here are easier ways, some of them coming from the 12c new features: lateral join and implicit statement result.
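
The full examples are in the post itself, but to give an idea, here is one possible shape of the lateral variant (the filter and the format option are my own assumptions, not the exact query from the post):

select s.sql_id, s.child_number, p.plan_table_output
from v$sql s,
     lateral (
       select plan_table_output
       from table(dbms_xplan.display_cursor(s.sql_id, s.child_number, 'basic'))
     ) p
where s.parsing_schema_name = user;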

UKOUG 2014 - Middleware Day 3

Wed, 2014-12-10 15:05

Today, no more sessions on middleware, but a lot of ADF, APEX and development sessions that look really interesting! Unfortunately, I can't attend every one, but here are some I selected!

The picture that paints a thousand words: Data Visualization (ADF) Duncan Mills (Oracle)

In this session Duncan was talking about my favorite topic: data visualization. I am quite used to ADF as I developed an interface for my Capacity Planning project. It was sometimes hard to have something clean and well built, a little bit confusing…

But that time has ended! With the new Alta interface!

Instead of building another stack on top of hundreds of stacks, Oracle preferred to re-imagine and rebuild the whole interface. What a good idea. Then Alta was born: clearer, more precise and more up-to-date.

It has been rewritten in a mobile-based way, as that is the trend. But a new design is not sufficient: it also includes more interactions and animations. The UI is designed to be more responsive, clean and simplified. One goal is to build the UI around progressive disclosure: when you click something in a chart, the rest disappears and you move into a more detailed view of what you clicked. This way the UI leads the user through the flow of the information. The user has to “search” for the info through a path which builds the context of this information.

There are now a lot of slides and effects to support this flow. For example, you have a new chart built as a multi-layered circle: when you click on a part of the circle, it opens that part and then builds another whole circle to create a new level of information.

You have more than 50 graph and chart types, all with the same kind of interactions to keep coherence between charts: zooming, zooming out, selection, a redesigned time axis, improved data labeling and so on.

One of the new charts is called NBox. It is used for categorizing things or people: you have boxes, and you put people into them according to a criterion such as the company sector. You can drag and drop easily; it is really handy to classify things.

ADF is getting really beautiful thanks to this new UI design and its interactions. Allowing a more mobile-app look and feel is a good thing, as more and more processes in the industry can be monitored directly from smartphones.

SQL injection in APEX - More Attacks (& Defences) Tim Austwick and Nathan Catlow (Recx)

This session was more about security and SQL injection than about APEX itself, but they did some demos showing how APEX applications can be exposed to SQL injection.

Tim started by introducing his company, Recx, as they built a tool called ApexSec which can point out SQL injection spots in an APEX application; they said it can find about 90% of breaches. Thanks to their tool, they heavily improved the security of Oracle Application Express, a tool from Oracle that helps build applications quite fast.

For Tim and Nathan, a SQL injection risk can be spotted when you don't execute the SQL directly but first generate the SQL and then send it to the Oracle Database. In fact there are several kinds of breaches, such as dynamic SQL or query substitutions; these are the most common. It happens most of the time when writing and executing the SQL are separated.

With SQL injection you can gather more data by adding columns or table joins; this is a “select” injection. You can also corrupt data; this is called an “update” injection. Then you have more serious issues when you can insert code between BEGIN and END in a PL/SQL block, as you can call other functions or procedures, including Oracle built-in packages, which can be harmful for your application or database.

For example, Tim did a live demo on how he could insert a call to the LDAP package and then connect it to his own LDAP server on his machine. The result? He managed to get credentials directly in his LDAP console! Quite a serious issue, isn't it?

Vulnerabilities can appear if you append a variable while building your query (for example by concatenating it at the end), or when you create a function that returns a SQL command. You had better use bind variables instead of substitutions.
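
To illustrate the point (this is my own hypothetical example on the classic emp table, not one from the session), here is the same dynamic lookup written with concatenation and with a bind variable:

-- Vulnerable: the input value becomes part of the SQL text
create or replace function get_salary_unsafe(p_name in varchar2) return number is
  v_sal number;
begin
  execute immediate
    'select sal from emp where ename = ''' || p_name || '''' into v_sal;
  return v_sal;
end;
/

-- Safer: the input value is passed as a bind variable, never as SQL text
create or replace function get_salary_safe(p_name in varchar2) return number is
  v_sal number;
begin
  execute immediate
    'select sal from emp where ename = :name' into v_sal using p_name;
  return v_sal;
end;
/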

Tim also presented another demo where he managed to push a hidden button that submitted a text area through a short JavaScript script. Thanks to a breach in the SQL behind the text area, he could get the database version. With a simple search he found a known issue in this version. Then he wrote a PL/SQL script and injected it into the text area. He managed to grant the Java permission to the schema, then wrote a file directly on the file system with a script inside. He could execute the file as sysdba as it was on the local system. The script gave the schema DBA privileges, and then he could get the passwords.

It was an amazing session.

Their advice to avoid SQL injection is the following:

- Avoid substitutions directly in SQL and prefer bind variables
- Ensure that APEX objects are protected
- Use APEX 4.2.1 or above
- Avoid dynamic SQL

Conclusion

This is the last day of UKOUG 2014 and we clearly see that the middleware trends are the Cloud, mobile applications and, of course, security. I can't wait to see more sessions at the next UKOUG!

Taskset: a useful tool to set CPU affinity

Wed, 2014-12-10 05:14

Today at #ukoug_tech14 I had the chance to attend Christo Kutrovsky's (@kutrovsky, Pythian) session about "Measuring Performance in Oracle Solaris & Oracle Linux". This session aimed to present how to measure system utilization on the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning. During this session we had a very good introduction to a bunch of performance monitoring tools that can be grouped into four categories (non-exhaustive list of tools):


1. CPU:

  • top
  • vmstat
  • time
  • mpstat

2. Memory - RAM

2.1 Global

  • /proc/meminfo
  • vmstat
  • ipcs

2.2 Per process

  • pmap -x

3. Disk

  • vmstat
  • iostat
  • iotop
  • dtrace

4. Network

  • ifconfig
  • netstat
  • nicstat
  • iftop
  • iperf
  • tcpdump

Regarding this list, I would like to point out a specific command which could be useful in the context of a performance test. This command is taskset. According to the manual page of taskset: taskset is used to set or retrieve the CPU affinity of a running process given its PID, or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs.

Let's make a short test with sysbench and MySQL in order to see how taskset works. This test consists of running sysbench with 4096 threads, the first time without taskset and the second time with taskset, setting the CPU affinity to CPU number 3.

 

1. Sysbench test without CPU affinity

1.1 Let's start the benchmark with sysbench

mysql@ThinkPad-T540p:/home/mysql/ [mysqld1] ./innodb/runallinnodb5.ksh

 

1.2 In order to prove that sysbench is running 4096 threads we can execute "top + C":

22110 mysql     20   0 4747720 334964   1548 S   5,3  2,1   0:06.55 sysbench --db-driver mysql --test oltp --num-threads 4096 --mysql-user sbtest --mysql-password sbtest --mysql-db sysbench .....

 

1.3 Let's now have a look at CPU usage with the command "top + 1":

top - 11:47:38 up 42 min,  4 users,  load average: 5,97, 15,54, 16,48
Tasks: 238 total,   2 running, 236 sleeping,   0 stopped,   0 zombie
%Cpu0  : 25,5 us,  1,7 sy,  0,0 ni, 69,9 id,  3,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu1  : 15,1 us,  0,7 sy,  0,0 ni, 78,9 id,  5,4 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  : 21,3 us,  1,3 sy,  0,0 ni, 77,4 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 15,8 us,  1,0 sy,  0,0 ni, 83,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu4  : 18,6 us,  1,3 sy,  0,0 ni, 79,8 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu5  : 15,4 us,  1,0 sy,  0,0 ni, 83,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  : 27,8 us,  1,0 sy,  0,0 ni, 71,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu7  : 14,8 us,  0,0 sy,  0,0 ni, 85,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
KiB Mem:  16309072 total,  4207804 used, 12101268 free,   206348 buffers
KiB Swap: 16651260 total,        0 used, 16651260 free.  1628872 cached Mem

 

1.4 As we could expect all CPUs are in use because no CPU affinity has been set:

steulet@ThinkPad-T540p:~$ sudo taskset -pc 22110

pid 21767's current affinity list: 0-7

 

2. Sysbench test with CPU affinity

2.1 Let's start by setting CPU Affinity of mysql process:

 

steulet@ThinkPad-T540p:~$ sudo taskset -pc 3 22110

pid 22110's current affinity list: 0-7
pid 22110's new affinity list: 3

 

2.2 We can now restart the benchmark:

 

mysql@ThinkPad-T540p:/home/mysql/ [mysqld1] ./innodb/runallinnodb5.ksh

 

2.3 Let's now have a look at CPU usage using the command "top + 1":

Tasks: 240 total,   2 running, 238 sleeping,   0 stopped,   0 zombie
%Cpu0  :  1,7 us,  0,3 sy,  0,0 ni, 97,7 id,  0,0 wa,  0,0 hi,  0,3 si,  0,0 st
%Cpu1  :  0,3 us,  0,0 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  :  0,3 us,  0,0 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 98,0 us,  1,7 sy,  0,0 ni,  0,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu4  :  2,0 us,  0,3 sy,  0,0 ni, 97,3 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu5  :  0,7 us,  0,0 sy,  0,0 ni, 99,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  :  0,3 us,  0,7 sy,  0,0 ni, 97,7 id,  1,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu7  :  0,7 us,  0,0 sy,  0,0 ni, 95,7 id,  3,7 wa,  0,0 hi,  0,0 si,  0,0 st

 

Processor 3 owns all the MySQL load, as expected. Thanks to taskset you can also get a better understanding of the load on Hyper-Threaded CPUs. In order to better understand the problems related to the monitoring of Hyper-Threaded CPUs and CPU affinity, I suggest you read the following blog: http://www.pythian.com/blog/virtual-cpus-with-amazon-web-services/

UKOUG 2014 - Middleware Day 2

Tue, 2014-12-09 11:06

Today the sessions were more “high level” so don’t expect deep information and concrete implementations.

Roadmap to Cloud Infrastructure and Service Integration with Cloud Application Foundation and SOA Suite Frances Zhao-Perez and Simone Greib (Oracle)

Here Frances was talking about CAF (Cloud Application Foundation), which regroups products like WebLogic, Coherence and so on. She introduced Oracle's strategic investment list for this topic:

- #1 Mobile
- Fusion Middleware and applications
- Continuous availability
- Multitenancy for Density/Utilization
- Cloud management and operations

She also talked about future features in 12cR2 such as:

- Multi Datacenters support
- Coherence federated caching
- Recoverable caching
- Full Java EE7
- Java SE8
- Available in Cloud

Frances also briefly talked about the ODA and OHS roadmaps, but it was only from the marketing side :)

Then Simone took the lead and made a recap of SOA key features such as:

- Operation made simple (startup acceleration, tuned profiles…)
- Developer productivity (Debugger and tester, Java database instead of full big one…)
- Mobile standards (REST, Json…)
- Cloud: SAP/JDE adapters

A new feature in the cloud is MFT (Managed File Transfer) for file-based integrations.

She also reminded us how simple it is to upgrade from SOA Suite 11g to 12c, and went on with new incoming features such as:

- Business Insight: Business Process Monitoring (Business process simplified without JDeveloper)
- Internet of Things (IoT): Events driven actions
- BAM predictive analytics & BI Integration: it could build trends using our business data. For example it could predict the market for next weeks.

Mobile enabling your enterprise with Oracle SOA Suite Simone Greib and Yogesh Sontakke (Oracle)

This session was more oriented towards the mobile part of SOA. Yogesh and Simone explained that you can support SOAP and REST on mobile devices, and they demonstrated how simple it is to build the UI and the business logic behind it by exposing it as a service.

They talked about the architecture of mobile UIs and their integration with a lot of adapters for different products. They took "Kiloutou", in France, as an example of a mobile application user, as they use an application to manage their stock, orders and services.

They also made a real live demo of how to use JSON or XML to manage events and communications between elements or services.

Maximum Availability in the Cloud: Oracle WebLogic Server and Oracle Coherence Frances Zhao-Perez (Oracle)

This session was heavily oriented on MAA (Maximum Availability Architecture) and Frances strongly underlined that Oracle is investing in maximum availability.

The goals of MAA are the following:

- Prevent business interruption
- Prevent data loss
- Deliver adequate response time
- Cost: Reduce deployments, managements and support costs
- Risk: Consistently achieve required service level

Here are the High Availability requirements for Oracle:

- Outage protection
- Recovery time
- Testing frequency
- Typical data loss
- Complexity in deployments
- ROI (Return on Investment)

Frances talked about Multi-data MAA solutions such as stretch cluster/domains, cache safety, Tx Logs in database, database Global Data Services, federated caching, recoverable caching and storage replication.

She quickly introduced Oracle Site Guard, which provides recovery automation, and talked about next version features.

12.1.3:

- No Tlog option - Phase 1 (other phases will be implemented with each new release)
- XA Transactions recovery across site

12.2.1 (will be a huge update next year):

- JTA HA
- Cross site Txn recovery
- Density for GridLink deployments
- Application continuity (Optimize connection harvesting on down events)

She finished with Coherence cache recovery, which allows recovering data directly from the cache.

SQL*Plus COPY Command is back as BRIDGE

Mon, 2014-12-08 15:38

Did you ever use the COPY command in SQL*Plus? It's very old, and the documentation says:
The COPY command is not being enhanced to handle datatypes or features introduced with, or after Oracle8i. The COPY command is likely to be deprecated in a future release.

Deprecated? But it is back, with a new name, in the new SQL Developer-based SQL*Plus (currently called sdsql, in beta).
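
For readers who never used it, the classic COPY syntax looks roughly like this (connections and table names are placeholders; the trailing dash is the SQL*Plus line-continuation character):

COPY FROM scott/tiger@db1 TO scott/tiger@db2 -
CREATE emp_copy -
USING SELECT * FROM emp;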

UKOUG 2014: News on Oracle WebLogic, Fusion Middleware, SOA Suite

Mon, 2014-12-08 11:58

Today I went to UKOUG 2014 in Liverpool. In this blog, you'll find the sessions on Oracle WebLogic, Fusion Middleware, SOA Suite and EDG that I attended, with a little summary of each.

Administration of SOA Suite - A few tips Jon Petter Hjulstad (SYSCO Middleware)

Jon Petter's general advice was to make the monitoring stronger. You have to be pro-active in your administration of SOA Suite. You have several tools for monitoring SOA such as FMW Control, Enterprise Manager 12c or even JMC. You may want to check CPUs, memory usage and Disk space as usual but don’t forget to look at database growth and logs.

His advice is to look at patches as soon as they are released, as they repair leaks and security issues but also tune up the whole system and its components.

Make heavy use of RDA support: it can bring a lot of information for creating Service Requests and it will greatly reduce the "ping pong" between your organization and the support team.

Fusion Middleware 12c Installation and Administration: First Experiences Peter de Vall and Nicolay Moot (Transfer Solutions)

Both are from the Transfer Solutions company in the Netherlands. They talked about the differences in installing and managing Fusion Middleware 11g and 12c. The benefit of 12c is that it is clearly more integrated than the 11g version. Moreover, when installing for developers, it comes packaged as an easy-to-use version. This way, developers don't have to carry a heavy installation architecture.

Another difference is the home paths. Where in 11g you had a Middleware Home, an Oracle Common Home, an Oracle Product Home and an RCU Home, in 12c you only have the Oracle Home.

The same goes for the configuration part. In 11g you had Domain Homes, Instance Homes and a Node Manager Home; here you only have Domain Homes, as you have one node per Domain.

Upgrading is really easy to handle as 12c provides some scripts for the upgrade. It manages the upgrade of domains and configuration in less than an hour.

It upgrades schemas, configurations and domains; you just have to provide it with the old (11g) domain path and then the new (12c) domain path, which can be the same, as demonstrated in their demo! And there are no redeployments or application adaptations to make; developers don't even have to participate.

A new feature in FMW 12c is RESTful management. It provides URLs that can be monitored by an external tool, as it returns JSON documents which contain metrics information such as memory usage and JVM utilization. Really interesting.

Best practices to manage and deliver Oracle WebLogic and Fusion Middleware to your organization James Kao (Oracle)

James is a really good speaker, managing to keep you intensely focused on his speech. He was talking about how to manage a huge infrastructure with lots of domains: how many admin consoles you have to manage, how many configurations you have to perform before your whole environment is set up, when you have for example DEV/TEST/VAL and production environments. It can become hell out there, as it is time consuming, and when upgrading or patching it is hard to automate tasks.

James’s answer is “Cloud”.

The goal is to make it available through a single point of access and make it globally administrable in one place.

“Use Cloud Control as a single administration console”, he said, as you can manage your configurations and actions in a single place, for specific domains or all of them at the same time. It provides more “templatized” administration through frameworks and it standardizes resources.

Here‘s his best practices:

#1 - Cloud Control as a single admin console
As said before, administration and configuration are integrated into Cloud Control. A system allows you to enter your credentials once. When updating a configuration you lock it, so nobody can edit it while you are working on it. This can prevent issues when an error is reported and almost 20 administrators are trying to access the system to debug it!

Cloud Control can record every actions you do when editing or administrating. That means you can verify what you did for audit purpose or simply transform it into a WLST script to make it repeatable.

Cloud Control centralizes SOA tracing and stores metrics for history.

#2 - Automate tasks across domains
You can automate tasks by creating WLST scripts thanks to your recordings. For example you can start recording your session, then edit some values and stop the recording. You create a script from it and execute it on multiple domains!

#3 - Automate responses to issues via corrective actions
And the best part is that the scripts you wrote can be triggered by events from the system. You can create incident rules and specify the actions that will be triggered, across all the domains.

Enterprise Deployments: The real world of best practices Simon Haslam (Veriton) and Jacco Landlust (Oracle)

Simon and Jacco were here to talk about EDGs: Enterprise Deployment Guides. These guides are a result of the historical MAA: Maximum Availability Architecture. So an EDG is a kind of guide on how to build your architecture to make it reusable and efficient, but there is a difference between “paper” and reality.

EDG benefits:

- You don’t have to know every products by heart
- It’s based on Oracle experience
- It’s familiar to other Admins using EDG (a kind of OFA)
- It’s recognized by Oracle Support!

First EDG covers the following topics:

- Topologies and segregations
- Naming conventions
- It only focuses on single site high availability with load balancers
- Database HA (in RAC)

EDG is a layered step by step builder recipe.

- Layering each stage
- Assumes manual installation (no scripts)

The real world problems for EDG:

- It doesn’t including security thoughts
- It doesn’t include patching thoughts
- It doesn’t include dev/test thoughts (you have to build it by yourself)
- It’s not covering Disaster Recovery

FIO (Flexible I/O) - a benchmark tool for any operating system

Mon, 2014-12-08 04:55

I have just attended an interesting session held by Martin Nash (@mpnsh) at UKOUG 14 - Liverpool: "The least an Oracle DBA Should Know about Linux Administration". During this session I had the opportunity to discover some interesting commands and tools such as FIO (Flexible I/O). FIO is a workload generator that can be used both for benchmarking and for stress/hardware verification.

FIO has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format.

This tool has the huge advantage of being available for almost all kinds of operating systems (POSIX, Linux, BSD, Solaris, HP-UX, AIX, OS X, Android, Windows). If you want to use this tool in the context of an Oracle database, I invite you to have a look at the following blog from Yann Neuhaus: Simulating database-like I/O activity with Flexible I/O

 

In order to install it on Ubuntu, simply use the following command:


steulet@ThinkPad-T540p:~$ sudo apt-get install fio

 

After having installed fio, you can run your first test. This first test will run 2 gigabytes of I/O (read/write) in the directory /u01/fio.


steulet@ThinkPad-T540p:~$ mkdir /u01/fio

 

Once the directory has been created, we can set up the configuration file as described below. However, it is perfectly possible to run this test from the command line without a configuration file (fio --name=global --ioengine=posixaio --rw=readwrite --size=2g --directory=/u01/fio --threads=1 --name=myReadWriteTest-Thread1):

 

[global]
ioengine=posixaio
rw=readwrite
size=2g
directory=/u01/fio
threads=1

[myReadWriteTest-Thread1]

 

Now you can simply run your test with the command below:


steulet@ThinkPad-T540p:~$ fio testfio.fio

 

The output will look like the following:

 

myReadWriteTest-Tread1: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.1.3
Starting 1 thread
Jobs: 1 (f=1): [M] [100.0% done] [112.9MB/113.1MB/0KB /s] [28.9K/29.2K/0 iops] [eta 00m:00s]
myReadWriteTest-Tread1: (groupid=0, jobs=1): err= 0: pid=7823: Mon Dec  8 12:45:27 2014
  read : io=1024.7MB, bw=98326KB/s, iops=24581, runt= 10671msec
    slat (usec): min=0, max=72, avg= 1.90, stdev= 0.53
    clat (usec): min=0, max=2314, avg=20.25, stdev=107.40
     lat (usec): min=5, max=2316, avg=22.16, stdev=107.41
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[    6], 10.00th=[    7], 20.00th=[    7],
     | 30.00th=[    7], 40.00th=[    7], 50.00th=[    7], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[    8], 90.00th=[    8], 95.00th=[   10],
     | 99.00th=[  668], 99.50th=[ 1096], 99.90th=[ 1208], 99.95th=[ 1208],
     | 99.99th=[ 1256]
    bw (KB  /s): min=    2, max=124056, per=100.00%, avg=108792.37, stdev=26496.59
  write: io=1023.4MB, bw=98202KB/s, iops=24550, runt= 10671msec
    slat (usec): min=1, max=24, avg= 2.08, stdev= 0.51
    clat (usec): min=0, max=945, avg= 9.71, stdev=24.52
     lat (usec): min=5, max=947, avg=11.79, stdev=24.54
    clat percentiles (usec):
     |  1.00th=[    5],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
     | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[    9], 80.00th=[    9], 90.00th=[   10], 95.00th=[   11],
     | 99.00th=[   15], 99.50th=[   20], 99.90th=[  612], 99.95th=[  628],
     | 99.99th=[  652]
    bw (KB  /s): min=108392, max=123536, per=100.00%, avg=114596.33, stdev=3108.03
    lat (usec) : 2=0.01%, 4=0.01%, 10=91.43%, 20=6.93%, 50=0.71%
    lat (usec) : 100=0.13%, 250=0.01%, 500=0.01%, 750=0.47%, 1000=0.01%
    lat (msec) : 2=0.31%, 4=0.01%
  cpu          : usr=10.46%, sys=21.17%, ctx=527343, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=262309/w=261979/d=0, short=r=0/w=0/d=0Run status group 0 (all jobs):
   READ: io=1024.7MB, aggrb=98325KB/s, minb=98325KB/s, maxb=98325KB/s, mint=10671msec, maxt=10671msec
  WRITE: io=1023.4MB, aggrb=98202KB/s, minb=98202KB/s, maxb=98202KB/s, mint=10671msec, maxt=10671msecDisk stats (read/write):
  sda: ios=6581/67944, merge=0/67, ticks=4908/196508, in_queue=201408, util=56.49%

You will find some really good examples and a detailed list of parameters on the following website: http://www.bluestop.org/fio/HOWTO.txt

This tool is really powerful and has the huge advantage of being available for more or less any operating system. This allows you to make consistent comparisons across different kinds of architecture.

SQL Server tips: how to list orphaned logins

Thu, 2014-12-04 21:56

I have read a lot about orphaned database users in SQL Server, but almost never about orphaned logins. Many of my customers migrate or remove databases in SQL Server. They forget - not every time, but often - to remove the logins and jobs associated with these databases. I have created a script - without any cursors, YES, it is possible - that searches for all logins which are not "attached" to any database of an instance.
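
As a rough idea of how this can be done without a cursor, here is a minimal sketch (my own, not the author's script) that concatenates one query per online database and lists SQL/Windows logins whose SID is not mapped to a user in any of them:

DECLARE @sql NVARCHAR(MAX);

-- Build one UNION ALL query returning every SID mapped to a user in any online database
SELECT @sql = COALESCE(@sql + N' UNION ALL ', N'')
            + N'SELECT sid FROM ' + QUOTENAME(name)
            + N'.sys.database_principals WHERE type IN (''S'',''U'',''G'') AND sid IS NOT NULL'
FROM sys.databases
WHERE state_desc = N'ONLINE';

-- Logins (SQL logins, Windows users and groups) whose SID is not found in any database
SET @sql = N'SELECT sp.name AS login_without_database
FROM sys.server_principals AS sp
WHERE sp.type IN (''S'', ''U'', ''G'')
  AND sp.name NOT LIKE ''##%''
  AND sp.sid NOT IN (' + @sql + N');';

EXEC sys.sp_executesql @sql;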

Oracle 12c: comparing TTS with noncdb_to_pdb

Mon, 2014-12-01 08:44

How to migrate from non-CDB to CDB? Of course all known migration methods work. But there is also another solution: upgrade to 12c if necessary and then convert the non-CDB to a PDB. This is done with the noncdb_to_pdb.sql script, which converts a non-CDB dictionary to a PDB one, with metadata and object links. But do you get a clean PDB after that? I tested it and compared the result with the same database migrated by transportable tablespaces.

The test case

In 12c I can use Full Transportable database, but here I've only one tablespace as I'm doing my comparison on an empty database with the EXAMPLE schemas.

Here is my database:

RMAN> report schema;

Report of database schema for database with db_unique_name NDB1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /u01/app/oracle/oradata/NDB1/system01.dbf
3    610      SYSAUX               NO      /u01/app/oracle/oradata/NDB1/sysaux01.dbf
4    275      UNDOTBS1             YES     /u01/app/oracle/oradata/NDB1/undotbs01.dbf
5    1243     EXAMPLE              NO      /u01/app/oracle/oradata/NDB1/example01.dbf
6    5        USERS                NO      /u01/app/oracle/oradata/NDB1/users01.dbf

It's a new database, created with dbca, all defaults, and having only the EXAMPLE tablespace. SYSTEM is 790MB and SYSAUX is 610MB. We can have a lot of small databases like that, where system size is larger than user size and this is a reason to go to multitenant.

I will compare the following:

  • the migration with transportable tablespaces (into pluggable database PDB_TTS)
  • the plug and run noncdb_to_pdb (into the pluggable database PDB_PLG)

Transportable tablespace

Transportable tablespace will plug only the non system tablespaces and all the dictionary entries are recreated while importing metadata. Here it is:

SQL> alter tablespace EXAMPLE read only;
Tablespace altered.
SQL> host expdp '"/ as sysdba"' transport_tablespaces='EXAMPLE'

The log gives me the dump file (containing the metadata) and the datafiles to copy:

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/NDB1/dpdump/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  /u01/app/oracle/oradata/NDB1/example01.dbf
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at ... elapsed 00:03:55

then on the destination I create an empty pluggable database:

SQL> create pluggable database PDB_TTS admin user pdbadmin identified by "oracle" file_name_convert=('/pdbseed/','/PDB_TTS/');
SQL> alter pluggable database PDB_TTS open;
SQL> alter session set container=PDB_TTS;

and import the metadata after having copied the datafile to /u03:

SQL> create or replace directory DATA_PUMP_DIR_NDB1 as '/u01/app/oracle/admin/NDB1/dpdump';
SQL> host impdp '"sys/oracle@//vm211:1666/pdb_tts as sysdba"' transport_datafiles=/u03/app/oracle/oradata/PDB_TTS/example01.dbf directory=DATA_PUMP_DIR_NDB1

which took only two minutes because I don't have a lot of objects. That's all. I have a brand new pluggable database where I've imported my tablespaces.

Here I used transportable tablespaces and had to pre-create the users. But in 12c you can do everything with Full Transportable Database.

noncdb_to_pdb.sql

The other solution is to plug the whole database, including the SYSTEM and SYSAUX tablespaces, and then run the noncdb_to_pdb.sql script to transform the dictionary to a multitenant one. First, we generate the xml describing the database, which is similar to the one generated when we unplug a PDB:

SQL> shutdown immediate
SQL> startup open read only;
SQL> exec dbms_pdb.describe('/tmp/NDB01.xml');

And then plug it:

SQL> CREATE PLUGGABLE DATABASE PDB_PLG USING '/tmp/NDB01.xml' COPY FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/NDB1', '/u03/app/oracle/oradata/PDB_PLG');

At that point I can open the PDB but it will act as a Non-CDB, with its own dictionary that is not linked to the root. For example, you have nothing when you query DBA_PDBS from the PDB:

SQL> show con_id
CON_ID
------------------------------
6
SQL> select * from dba_pdbs;
no rows selected

I put it on my todo list to test what we can do in that PDB which is not yet a PDB, before raising lots of ORA-600 errors.

Now you have to migrate the dictionary to a PDB one. The noncdb_to_pdb.sql will do internal updates to transform the entries in OBJ$ to be metadata links.

SQL> alter session set container=PDB_PLG;
SQL> @?/rdbms/admin/noncdb_to_pdb;
SQL> alter pluggable database PDB_PLG open;

The updates will depend on the number of dictionary objects, so that is fixed for the version. And the remaining time is to recompile all objects, but that can be done in parallel. Here, I've run it in serial to see how long it takes (screenshot from Lighty):

b2ap3_thumbnail_Capturenon_cdb_to_pdb.png

Comparison

My goal was to compare both methods. As I expected, the SYSTEM and SYSAUX tablespaces did not decrease when using noncdb_to_pdb, so if you want to go to multitenant to save space, the noncdb_to_pdb method is not the right one:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1_SITE1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    781      SYSTEM               YES     /u02/app/oracle/oradata/cdb1_site1/system01.dbf
3    691      SYSAUX               NO      /u02/app/oracle/oradata/cdb1_site1/sysaux01.dbf
4    870      UNDOTBS1             YES     /u02/app/oracle/oradata/cdb1_site1/undotbs01.dbf
5    260      PDB$SEED:SYSTEM      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/system01.dbf
6    5        USERS                NO      /u02/app/oracle/oradata/cdb1_site1/users01.dbf
7    570      PDB$SEED:SYSAUX      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/sysaux01.dbf
8    260      PDB1:SYSTEM          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/system01.dbf
9    580      PDB1:SYSAUX          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/sysaux01.dbf
10   10       PDB1:USERS           NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/pdb1_users01.dbf
14   270      PDB_TTS:SYSTEM       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/system01.dbf
15   590      PDB_TTS:SYSAUX       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/sysaux01.dbf
17   1243     PDB_TTS:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_TTS/example01.dbf
22   790      PDB_PLG:SYSTEM       NO      /u03/app/oracle/oradata/PDB_PLG/system01.dbf
23   680      PDB_PLG:SYSAUX       NO      /u03/app/oracle/oradata/PDB_PLG/sysaux01.dbf
24   5        PDB_PLG:USERS        NO      /u03/app/oracle/oradata/PDB_PLG/users01.dbf
25   1243     PDB_PLG:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_PLG/example01.dbf

The SYSTEM tablespace which is supposed to contain only links (my user schemas don't have a lot of objects) is the same size as the root. This is bad. Let's look at the detail:

SQL> select *
  from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,bytes from cdb_segments 
  left outer join dba_pdbs using(con_id))
  pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
  in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
  order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0)) 
  desc fetch first 20 rows only;

OWNER                SEGMENT_TYPE       CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
-------------------- ------------------ ----------- ---------- ----------
SYS                  TABLE                      539         96        540
SYS                  INDEX                      187        109        195
SYS                  LOBSEGMENT                 117        105        118
SYS                  TABLE PARTITION             17          1         12
SYSTEM               INDEX                       10          1         10
SYS                  SYSTEM STATISTICS                                  8
SYSTEM               TABLE                        8          1          8
SYS                  LOBINDEX                    12          7         13
SYS                  INDEX PARTITION              9          0          6
SYSTEM               LOBSEGMENT                   5          0          5
APEX_040200          LOBSEGMENT                  80         74         80
SYSTEM               INDEX PARTITION              4                     4
SYSTEM               TABLE PARTITION              3                     3
SYS                  TABLE SUBPARTITION           2                     2
SYS                  CLUSTER                     52         50         52
SYS                  LOB PARTITION                3          1          2
SYSTEM               LOBINDEX                     2          0          2
APEX_040200          TABLE                      100         99        100
XDB                  TABLE                        7          6          7
AUDSYS               LOB PARTITION                1          0          1

20 rows selected.

Here I've compared the dictionary sizes. While the PDB_TTS table segments are below 100MB, the PDB_PLG ones are the same size as in the root. The noncdb_to_pdb has updated OBJ$ but did not reclaim the space from the other tables (see update 2).

Which tables?

SQL> select *
   from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,segment_name,bytes 
   from cdb_segments left outer join dba_pdbs using(con_id) 
   where owner='SYS' and segment_type in ('TABLE'))
   pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
   in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
   order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))
   desc fetch first 20 rows only;

OWNER             SEGMENT_TYPE       SEGMENT_NAME                   CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
----------------- ------------------ ------------------------------ ----------- ---------- ----------
SYS               TABLE              IDL_UB1$                               288          3        288
SYS               TABLE              SOURCE$                                 51          2         52
SYS               TABLE              IDL_UB2$                                32         13         32
SYS               TABLE              ARGUMENT$                               13          0         13
SYS               TABLE              IDL_CHAR$                               11          3         11

The IDL_UB1$ is the table that contains all the pcode for pl/sql. All those wrapped dbms_ packages are there. And we don't need them in the PDB: we have link to the root which has exactly the same version.

Conclusion

My conclusion is that I would not advise using noncdb_to_pdb. First, a script doing that much internal stuff scares me. I prefer to start this new implementation of the dictionary with a clean one.

But now that I have made this test, I have two additional reasons to avoid it. First, it's not faster - except if you have a lot of objects. And the main goal is to reduce the total space by having the Oracle packages stored only once. And this is clearly not done by noncdb_to_pdb.

However, that conclusion is only for small databases. If you have a database with a huge number of objects and PL/SQL packages, then the overhead of keeping the dictionary objects will not be very significant. And the TTS solution will take longer because it has to import all the metadata. So there is still a case for noncdb_to_pdb. But test it before. And be sure to have a large shared pool for the recompile step.

Update 1: I forgot to add another reason to be very careful with noncdb_to_pdb, from Bertrand Drouvot in his post about the huge negative impact of optimizer_adaptive_features on it.

Update 2: From a comment on OTN forum I changed the sentence about deleted rows because it was wrong. In fact, rows are deleted when the objects are recompiled:

SQL> select name,count(*) from containers(IDL_UB1$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                            53298
PDB1                                 6457
PDB_PLG                              6432
PDB_TTS                              6354

SQL> select name,count(*) from containers(SOURCE$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                           327589
PDB1                                73055
PDB_PLG                             20306
PDB_TTS                             17753

The issue is only that the space is still allocated. And you can't SHRINK those objects because SYSTEM is DMT, and anyway the large tables contain LONG, and finally:

SQL> alter table sys.IDL_UB1$ shrink space;
alter table sys.IDL_UB1$ shrink space
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

Of course, the space can be reused, but do you expect to add 200MB of compiled pl/sql in future releases?

WSFC with SQL Server: Manual or automatic failover? That is the question

Mon, 2014-12-01 00:05

During the night, you receive an alert concerning your SQL Server failover cluster or your availability groups. You're in panic because the message displayed is "a failover has occurred ... see the log for more details" ...

So you try to keep calm and, after connecting to your environment, you are not able to find anything ... What has happened? Maybe someone triggered a failover manually and you are not aware of it. I'm sure that many of you will find this situation familiar, but the real question is: is it possible to distinguish a manual failover from an automatic failover with a Windows failover cluster?

The answer is yes, and one way to find out is to take a look at the cluster.log. In fact, there is a record entry that clearly identifies a manual failover of resources:

 

[RCM] rcm::RcmApi::MoveGroup: (, 1, 0, MoveType::Manual )

 

As a reminder, the resource control monitor [RCM] is responsible for performing actions according to the state of a resource. When you trigger a manual failover, the MoveGroup API is called with the parameter MoveType::Manual.

Let me know if you find another way of discovering a manual failover :-)

Happy failover (or not)!

Security improvements in MySQL 5.7

Sun, 2014-11-30 05:24

If you have a look at MySQL 5.7.4 or a later version, you will probably see that there are several security improvements. The list of added security features and improvements can be seen on the following page: http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

There are three main improvements that are briefly described in this blog:

1. Nonempty plugin column
2. Password lifetime policy
3. mysql_install_db secured

 

Nonempty plugin column

As of MySQL 5.7.2, the server requires account rows in the mysql.user table to have a nonempty plugin column value and disables accounts with an empty value. The following error will occur when trying to connect with a user having an empty plugin column:

 

2014-11-30T10:41:04.943384Z 2 [Note] Access denied for user 'sbtest'@'localhost' (using password: YES)

 

If the user is connected when you update the plugin column, MySQL will behave as described below:

1. The user connects to the database:

 

mysql -u sbtest -p --socket=/u00/app/mysql/admin/mysqld5/socket/mysqld5.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.7.4-m14 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

2. Update the plugin column with the root user:

mysql> update mysql.user set plugin='' where user='sbtest';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0


3. With the root user you flush the privileges:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)


4. The following message will appear in the sbtest session:

2014-11-30T22:08:16.652477Z 8 [Warning] User entry 'sbtest'@'localhost' has an empty plugin value. The user will be ignored and no one can login with this user anymore.
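
Before relying on this behaviour, it may be worth listing the accounts that would be disabled. A simple check on mysql.user can do it (a minimal sketch, to be run as a privileged user):

mysql> SELECT user, host, plugin FROM mysql.user WHERE plugin = '' OR plugin IS NULL;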



Password Lifetime policy

Since MySQL 5.7.4, database administrators can expire account passwords manually and establish a policy for automatic password expiration. How does it work?

Two new columns have been added to the mysql.user table:

 

| password_last_changed | timestamp            | YES | | NULL |
| password_lifetime     | smallint(5) unsigned | YES | | NULL |

 

These two columns allow you to see when the password was last changed and to set a password lifetime.
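
For example, the current values can be checked per account with a query like this (a minimal sketch):

mysql> SELECT user, host, password_last_changed, password_lifetime FROM mysql.user;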

You can establish a global password expiration policy by setting the variable default_password_lifetime in the option file. By default, this variable is set to 360, which means that all users will have to change their password about once a year. A value of 0 disables automatic password expiration.
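
The same setting can also be adjusted at runtime (a minimal sketch, assuming the variable is dynamic on your build; the value in the option file still applies after the next restart):

mysql> SET GLOBAL default_password_lifetime = 180;
mysql> SHOW VARIABLES LIKE 'default_password_lifetime';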

As stated in the documentation, the global password expiration policy can be overridden as desired for individual accounts using the ALTER USER statement.


Example:

mysql> ALTER USER 'sbtest'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;
Query OK, 0 rows affected (0.00 sec)


A client session operates in restricted mode if the account password has been expired. In restricted mode, operations performed in the session result in an error until the user issues a SET PASSWORD statement to establish a new account password:

 

mysql> alter user 'sbtest'@'localhost' password expire interval 1 day;

[mysqld5] mysql -u sbtest -p

mysql> show databases;
ERROR 1820 (HY000): You must SET PASSWORD before executing this statement

mysql> set password=password('sbtest');
Query OK, 0 rows affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sysbench           |
+--------------------+
2 rows in set (0.00 sec)

 

To remove the password expiration policy, simply use "expire never" as presented in the following example:

mysql>alter user 'sbtest'@'localhost' password expire never;

 

mysql_install_db secured

MySQL deployments installed using mysql_install_db now are secure by default. The following changes have been implemented as the default deployment characteristics:

The installation process creates only a single root account and no longer creates anonymous-user accounts.

Example on MySQL 5.7.4:

 

mysql> select user,password, host from mysql.user;

+-------------+-------------------------------------------+-----------+
| user        | password                                  | host      |
+-------------+-------------------------------------------+-----------+
| root        | *7D2ABFF56C15D67445082FBB4ACD2DCD26C0ED57 | localhost |
+-------------+-------------------------------------------+-----------+
1 rows in set (0.00 sec)

 

Example on mysql 5.6.20:

 

mysql> select user,password, host from mysql.user;

+-------------+-------------------------------------------+----------------+
| user        | password                                  | host           |
+-------------+-------------------------------------------+----------------+
| root        | *7D2ABFF56C15D67445082FBB4ACD2DCD26C0ED57 | localhost      |
| root        |                                           | thinkpad-t540p |
| root        |                                           | 127.0.0.1      |
| root        |                                           | ::1            |
|             |                                           | localhost      |
|             |                                           | thinkpad-t540p |
+-------------+-------------------------------------------+----------------+
6 rows in set (0.01 sec)

 

After the installation you can also note that there is no "test" database anymore.

Example on MySQL 5.7.4:


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)


Example on MySQL 5.6.20:

mysql> show databases;

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

 

Conclusion

Oracle made some significant security improvements in MySQL 5.7. These improvements will help database administrators deploy MySQL with a better security level than with previous versions. Some improvements have also been made to the mysql_secure_installation script. The list of improvements and additional features can be seen at the following URL: http://dev.mysql.com/doc/refman/5.7/en/mysql-secure-installation.html

When Oracle resets session statistics

Sat, 2014-11-29 15:59

During our Oracle 12c New Features workshop I had a very good question about whether the session statistics are reset or not when doing ALTER SESSION SET CONTAINER. My initial thought was that they were not reset because it's the same session (same SID and SERIAL#). But when I'm not 100% sure about something, I test it. And once again, it proves that even the instructor can learn something new when giving a workshop, thanks to the great interaction with the participants.

My test was very simple, querying the 'logon' statistics after an ALTER SESSION SET CONTAINER and I came with the following tweet:

Quizz: How can I be connected and get 0 logons from V$MYSTAT ? pic.twitter.com/YZPQNU8FiH

— Franck Pachot (@FranckPachot) November 26, 2014

Of course that needs more investigation. This is about session statistics. What about session events? and session time model?

SQL> connect / as sysdba
Connected.

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';

NAME                                     VALUE
----------------------------------- ----------
logons cumulative                            1
logons current                               1

I'm connected to the root container. I generate a bit of activity (using dbms_system.wait_for_event for fake i/o activity):

SQL> exec for i in 1..1e5 loop dbms_system.wait_for_event('db file sequential read', 0 , 0 ); end loop;
PL/SQL procedure successfully completed.

SQL> exec for i in 1..100 loop dbms_system.wait_for_event('db file sequential read', 0 , 1 ); end loop;
PL/SQL procedure successfully completed.

SQL> declare s date:=sysdate; begin loop exit when sysdate>s+60/24/60/60; end loop; end;
  2  /
PL/SQL procedure successfully completed.

and here are my session statistics:

The time model shows 60 seconds of CPU, and the 100 seconds of I/O are included in DB time (time model values are in microseconds):

SQL> select stat_name,value from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name in ('DB time','DB CPU');

STAT_NAME                                VALUE
----------------------------------- ----------
DB time                              165084940
DB CPU                                61119000

Session events show those 100 seconds of I/O, and the longest wait took 1 second (values are in centiseconds when the column name does not contain 'MICRO'):

SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                19              1407          0
db file sequential read             100100         104060605        101
SQL*Net message to client               33               123          0
SQL*Net message from client             32             13842          0

Session statistics, where time is in centiseconds, show 60 seconds of CPU and 100 + 60 seconds of DB time:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name = 'DB time' or name like 'CPU%' or name like 'SQL*Net%client';

NAME                                     VALUE
----------------------------------- ----------
CPU used when call started                6118
CPU used by this session                  6118
DB time                                  16513
user I/O wait time                       10406
SQL*Net roundtrips to/from client           33

Ok, so that is consistent. And it has to be consistent, because we often compare information from those 3 sources.
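
To double-check, both sources can be put side by side in the same unit (a minimal sketch; v$mystat 'DB time' is in centiseconds, the time model is in microseconds):

select (select value/100 from v$mystat join v$statname using(statistic#) where name='DB time') as db_time_sec_from_mystat,
       (select value/1e6 from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name='DB time') as db_time_sec_from_time_model
from dual;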

Let's now change to another container with ALTER SESSION SET CONTAINER:

SQL> show con_id

CON_ID
------------------------------
1

SQL> alter session set container=PDB1;

Session altered.

and look at the same statistics now:

SQL> select stat_name,value from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name in ('DB time','DB CPU');

STAT_NAME                                VALUE
----------------------------------- ----------
DB time                              103907514
DB CPU                                    2000
SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                25              2084          0
db file sequential read             100100         104060605        101
SQL*Net message to client               42               180          0
SQL*Net message from client             41             17774          0
SQL> select name,value from v$mystat join v$statname using(statistic#) where name = 'DB time' or name like 'CPU%' or name like 'SQL*Net%client';

NAME                                     VALUE
----------------------------------- ----------
CPU used when call started                  11
CPU used by this session                    11
DB time                                      8
user I/O wait time                           0
SQL*Net roundtrips to/from client            5

All the session statistics have been reset and now show low values. And only them: events and the time model still show cumulative values from the beginning of the session.

So this unfortunately brings inconsistency. Here:

  • session statistic DB time cannot be compared with time model
  • SQL*Net roundtrips cannot be compared to the count of 'SQL*net message to client' event
  • CPU info from session statistics cannot be compared with the time model
  • session statistic wait time cannot be compared with session event time
and this is how I can be connected and see 0 logons for my session:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';

NAME                                     VALUE
----------------------------------- ----------
logons cumulative                            0
logons current                               0

Now, as we are talking about resetting statistics, there is something else we can do. As you see above the session keeps track of the longest time for each wait event. Here above, the maximum I/O time was 1 second. We can reset that if we want:

SQL> exec dbms_system.kcfrms;
PL/SQL procedure successfully completed.

We still have all the cumulative values for the session, but the high water mark of wait event duration has been reset:

SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                29              2263          0
db file sequential read             100100         104060605          0
SQL*Net message to client               49               209          0

Finally, I made other tests that I do not reproduce here:

If you ALTER SESSION SET CONTAINER with your current container then statistics are not reset.

If you ALTER SESSION SET CURRENT_SCHEMA the statistics are not reset.
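
The first case is easy to verify with the logon statistics used above (a minimal sketch, run from CDB$ROOT; the counters keep their values after the switch):

select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';
alter session set container=CDB$ROOT;
select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';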

So I'll stay with a reconnect when I want to reset all session statistics with wait events and time model as well.

Windows Server 2012 R2: solving .NET Framework 3.5 installation problems

Thu, 2014-11-27 20:57

I faced a problem at a customer site last week when I tried to install the .NET Framework 3.5 - a prerequisite for installing SQL Server 2012 on a Windows Server 2012 R2. I opened the Server Manager and then navigated to the Manage, Add Roles and Features section:

 

b2ap3_thumbnail_pic01.jpg

 

I selected the .NET Framework 3.5 Features option:

 

b2ap3_thumbnail_pic02.jpg

 

I specified an alternate source path:

 

b2ap3_thumbnail_pic03.jpg

 

... and surprise! Even though an ISO of Windows Server 2012 R2 was mapped to my D: drive, the installation failed with this strange error: "The source file could not be found..."

After some investigations, I found that this problem is quite common and that Microsoft has published a fix ... which unfortunately did not work for me!

I tried the same installation in different ways: command prompt, PowerShell... but absolutely NO RESULT.

I finally decided to open a PowerShell console to check the Windows Features available on my server with the cmdlet Get-WindowsFeature:

 

b2ap3_thumbnail_Pic1.jpg

 

Strangely, the status of the .NET Framework 3.5 is not showing "Available", but "Removed"!

 

b2ap3_thumbnail_Pic2.jpg

 

So, how do I change this state from removed to available?

After some investigations and after having tried some fixes provided by people who faced the same problem, I finally found the Standalone Offline Installer tool that solved my problem by enabling the .NET Framework 3.5 (many thanks to Abbodi1406).

I downloaded this exe file and executed it on my server.

An installer screen appeared:

 

b2ap3_thumbnail_pic4.jpg

 

After clicking on the Next button, a command prompt screen appeared which showed the completion state of the process.

pic6.jpg

 

As soon as the process was finished, I went back to my PowerShell screen to check if the .NET Framework 3.5 was now available - by running the PowerShell cmdlet Get-WindowsFeature:

 

b2ap3_thumbnail_pic10.jpg

 

The .NET Framework 3.5 was now available and I was able to restart the installation process from the beginning by navigating to the Server Manager, selecting the concerned feature, and giving the alternate source path.

pic8.jpg

I finally succeeded in installing the .NET Framework 3.5!

I hope that my blog post will help some of you to resolve this installation problem ;-)

Oracle AVDF - Database Firewall Policies

Thu, 2014-11-27 04:13

The successful deployment of a Database Firewall depends on an effective policy. Oracle AVDF includes preconfigured firewall policies as described in the Firewall Policy page in the Policy tab of the Audit Vault Server console.

These include policies that log all SQL statements, or log only unique SQL statements. In addition, the Database Firewall policy editor enables you to design your own policies quickly and efficiently.

Policy rules can depend on any combination of the SQL statement type, name of the database user, IP address of the database client, operating system user name, client program name, or any exceptions you specify.

 

First policy and global concept

1. Log in to the Audit Vault Server console as an auditor, and click on the Policy tab:

 

001_20141122-190809_1.png

 

2. Under the Policy menu, click Firewall Policy.

3. Click Create Policy.

The Create Policy dialog appears. Select the Database Type from the drop-down list (choices are IBM DB2, Microsoft SQL Server, MySQL, Oracle Database, Sybase ASE, and Sybase SQL Anywhere), enter a Policy Name and, optionally, enter a Description:

 

002.png

 

4. Click on “Create”. The new policy is created, and the policy's Overview page appears:

 

003.png

 

When you create a new policy, or click an existing policy name in the Firewall Policies page, that policy's Overview page appears. This page shows the policy rules that are being applied to the statement types (clusters) being monitored by the Database Firewall, as well as exceptions and other rules that may apply.

The policy's Overview page is divided into these sub-sections:

  • Exception Rules - Lists exceptions you have created. The rules that you have assigned to SQL statement clusters will not apply to these exceptions. You can move the rules up or down in the list. The rules are evaluated in the order listed.
  • Analyzed SQL - Displays the number of SQL statement clusters for which you have defined policy rules, and their policy actions (such as Warn or Block).
  • Novelty Policies (Any) - Lists special policies you have created for specific statement classes and/or specific tables in your secured target databases. If you have identified specific tables in a policy in this section, the policy rule applies if it matches Any of the tables.
  • Novelty Policies (All) - Lists special policies you have created for specific statement classes and/or specific tables in your secured target databases. If you have identified specific tables in a policy in this section, the policy rule applies if it matches All of the tables.
  • Default Rule - Shows the default rule for any statements that are not matched by the rules set for Analyzed SQL clusters, Exceptions, or Novelty Policies.
  • Policy Controls - Lets you configure firewall policy settings, create policy profiles, as well as sets of filters to use in defining profiles and Exception rules.

 

Practical case

Developing a policy is an iterative process of refining and improving the policy with new data. In order to be able to create a policy, statements first have to be executed on the database. The examples below present some of the possibilities provided by a Firewall Policy.

These examples are based on a very simple context with two schemas/users:

  1. PSI
  2. GRS

The PSI schema contains three tables:

  1. PSI.CREDIT_CARD containing Credit Cards numbers
  2. PSI.EMP containing employees’ salaries
  3. PSI.TEST containing one non sensitive row

 

SQL> select table_name from dba_tables where owner='PSI';

TABLE_NAME
------------------------------
EMP
CREDIT_CARD
TEST
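
For reference, here is a minimal sketch of how such a demo schema could be built; the column definitions are assumptions based on the output shown later in this post:

-- assumed demo objects, for illustration only
create table psi.credit_card (name varchar2(10), id  number);
create table psi.emp         (name varchar2(10), sal number);
create table psi.test        (name varchar2(10));
insert into psi.test values ('hello');
insert into psi.emp  values ('Larry', 150000);
insert into psi.emp  values ('John', 80000);
commit;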

 

Novelty Policy

The table CREDIT_CARD contains credit card numbers and EMP contains employee salaries. These two tables are very sensitive and nobody should be able to look at them:

 

 

                 GRS   PSI
PSI.EMP          NOK   NOK
PSI.CREDIT_CARD  NOK   NOK
PSI.TEST         OK    OK

 

The first step in creating this policy is to create a novelty rule. Novelty policies specify the action, logging level, and threat severity to use for specific types of statements and/or statements that operate on selected tables. Novelty policies can be used to loosen or tighten your normal policy rules when certain statements are encountered. In our context, we want to create a novelty policy that blocks all access to these tables:

1. In the Audit Vault Server console, select the “Policy” tab.

2. From the Policy menu, click “Firewall Policy”.

3. Click on the newly created Firewall Policy named “MyPolicy”.

4. Click Add Novelty Rule in section Novelty Policy (Any):

5. In the Novelty Policy Details dialog, define the following:

a. Novelty Rule: Enter a name for this rule: MyNR

b. Statement Classes: Select one or more types of statements that SQL statements must match in order to apply this rule. In this example we have to select “Data Manipulation Read Only”

c. Policy Controls: Select the Action, Logging Level, and Threat Severity for this rule from the appropriate drop-down list. In this example we have to select “Block” for action and specify in the substitution field, the statement below:

select 'You do not have access to this table' from dual

6. Affected Tables: Select the table(s) to use for matching statements to this policy. In order to have tables in this list, the tables have to have been accessed first; if there is no activity on the database, the list will be empty. In our specific case, we select the tables PSI.EMP and PSI.CREDIT_CARD and click on “Add Tables”:

 

004.png

 

7. Click on “Create”.

8. Now we can test this policy. For the moment, this policy will block access for any user trying to access these two tables. In order to apply this policy, we have to save it by clicking on “Save” and then “Publish”.

9. Click on “Secured Targets”

10. Click on the target where you want to apply the policy

11. Click on Firewall Policy

12. Select the Policy “MyPolicy”

13. Now you can check that the policy is applied by doing a select on this table.

 

C:\Users\administrateur>sqlplus psi/psi@souk
SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 13:36:14 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;


'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table
SQL>

 

We can execute the same query with user GRS, the result will be the same:

 

C:\Users\administrateur>sqlplus grs/grs@souk
SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 13:36:14 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;


'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL>

Exception Rule

The table CREDIT_CARD contains credit card numbers and EMP contains employee salaries. These two tables are still very sensitive, but since PSI has been promoted to Chief Financial Officer, he needs access to them. Therefore we will create an exception for him:

 

                 GRS   PSI
PSI.EMP          NOK   OK
PSI.CREDIT_CARD  NOK   OK
PSI.TEST         OK    OK

  1. In order to change the policy, it must not be in use. Click on “Secured Targets”, select the target, click on “Firewall Policy” and change the policy to “Log all”:

 

005.png

 

2. Now you can make modification to your policy “MyPolicy”.

First of all we need to create a Profile. Within a firewall policy, a profile lets you define a different set of policy rules based on the session data associated with SQL statements. To define the profile, you use the session filters you defined in the Policy Controls section of the firewall policy. These session filters filter SQL statements based on:

  • IP addresses
  • Database user login names
  • Client program names (for example, SQL*Plus)
  • Operating system user names

In this example, we will create a profile based on the database user login name. This user will be PSI.

3. Click on Policy

4. Click on Firewall Policy

5. Click on MyPolicy

6. Click on Database User Set

7. Create a new set by clicking on “Create New Set”

8. Enter values for field New Set Name and member:

 

006.png

 

9. Click on “Create Set”

10. Click on “Cancel”

11. Click on “Profiles”

12. Create a new Profile by clicking on “Create New Profile”

13. Enter the Profile Name and select “UsersHavingAccessToMyTable” in the field “DB User Set”:

 

007.png

 

14. Click on “Create Profile”

 

008.png

 

Now we have to create an exception based on this profile. An exception determines the action, logging level, and threat severity to use when certain session data is encountered. For example, an exception could specify rules for statements that originate (or do not originate) from selected client IP addresses or database user names. In this example, the exception will be based on database user name.

Exceptions override all other policy rules. For example, you may want to override the normal policy rules if SQL statements originate from an administrator, or if they originate from anywhere other than a specific IP address.

You can define many exceptions and control the order in which they are evaluated. Each Exception has its own Action, Logging, and Threat Severity settings.

15. Click on policy

16. Click on firewall policy

17. Click on your newly created policy “MyPolicy”

18. Click on “Add Exception”

19. Enter the exception rule name: “ExceptionForPSITable”

20. In the DB User Set, select “Include” and select “UsersHavingAccessToMyTable”

21. In Policy Control click on “Pass”

22. Click on “Create”:

 

009.png

 

23. Click on “Save” and “Publish”

24. Apply this policy to the target

Now, the user PSI can access all his tables and the user GRS has no access to the sensitive tables.

 

C:\Users\administrateur>sqlplus grs/grs@souk
SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 14:09:07 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------

You do not have access to this table

SQL> select * from psi.credit_card;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table


SQL> select * from psi.test;

NAME
----------
hello

C:\Users\administrateur>sqlplus psi/psi@souk
SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 14:18:54 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;

NAME               ID
---------- ----------

Larry     4,8960E+15
John       7,8623E+15

SQL> select * from psi.emp;


NAME             SAL
---------- ----------
Larry         150000
John           80000

SQL> select * from psi.test;

NAME
----------
hello
SQL>

Analyzed SQL

With the exception, we granted the PSI database user access to all his tables. Since PSI didn't present good results to the shareholders, he has been replaced by a new CFO, who decided that PSI now only has access to the credit card numbers and is no longer allowed to run select statements on the employees’ salaries table:

 

 

                 GRS   PSI
PSI.EMP          NOK   NOK
PSI.CREDIT_CARD  NOK   OK
PSI.TEST         OK    OK

  1. First of all, we have to remove the exception we created before by clicking on the exception rule and then clicking on “Delete”.

  2. In the policy overview, click on “Modify SQL”, then click on “Change”

  3. Select the “Secured Target”, select the profile “ProfileForPrivsUsers” and, for the Event Time, enter the last 24 hours.

  4. Click on “Apply”

  5. Click on column header “User Name”

  6. Select “psi”

  7. A line looking like the one below should appear in the list

  8. Select this line

  9. Click on “Set Policy”

  10. In the Action list click on “Pass”:


    0010.png


  11. If you have a look on the list of SQL, the following statement should appear.


    0013.png

  12. Create a profile for the user GRS named “ProfileForStdUsers”, as we did for the user PSI. This profile won't be allowed to execute the statement “Select * from psi.credit_card”.

  13. In “Analyzed SQL”, select the profile “ProfileForStdUsers” and filter on the GRS user as we did in step 6.

  14. Select the following statement:


    0014.png


  15. Click on “Set Policy”

  16. Select action “Block”

  17. In the field Substitution enter the following: “select 'Your profile does not allow access to this statement' from dual ”

  18. Now your section “Analyzed SQL” should look like the screenshot below:

  19. Save and Publish the modifications done on this policy

  20. Apply this policy to your target: click on “Secured Targets”, then “Firewall Policy”, and select “MyPolicy” in the list.

Now we can test the access:


C:\Users\administrateur>sqlplus grs/grs@souk
SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 16:33:55 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;


'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL> select * from psi.credit_card;

'YOURPROFILEDOESNOTALLOWACCESSTOTHISSTATEMENT'
----------------------------------------------------
Your profile does not allow access to this statement

SQL> select * from psi.test;

NAME
----------
hello

 

C:\Users\administrateur>sqlplus psi/psi@souk
SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 16:35:35 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table


SQL> select * from psi.credit_card;


NAME               ID
---------- ----------
Larry     4,8960E+15
John       7,8623E+15

SQL> select * from psi.test;

NAME
----------
hello

 

This blog is not intended to be a comprehensive description of all AVDF policy functionalities, but I do hope that it gave you a good overview of some basic functionality.