Yann Neuhaus

All blog entries from http://www.dbi-services.com/

MySQL versions performance comparison

Fri, 2014-12-26 03:18

This blog aims to make a performance comparison between the different MySQL versions/editions and the different MySQL forks such as Percona Server and MariaDB. Indeed, a number of improvements have been made to the InnoDB storage engine in the last MySQL versions. You can find below some of the performance improvements applied to InnoDB over the last years (non-exhaustive list):


MySQL 5.0

1. New compact storage format which can save up to 20% of the disk space required in previous MySQL/InnoDB versions.
2. Faster recovery from a failed or aborted ALTER TABLE.
3. Faster implementation of TRUNCATE TABLE.


MySQL 5.5

1. MySQL Enterprise Thread Pool: as of MySQL 5.5.16, MySQL Enterprise Edition distributions include a thread pool plugin that provides an alternative thread-handling model designed to reduce overhead and improve performance.
2. Changes to the InnoDB I/O subsystem enable more effective use of available I/O capacity. The changes also provide more control over configuration of the I/O subsystem.


MySQL 5.6

1. Improvements to the algorithms for adaptive flushing make I/O operations more efficient and consistent under a variety of workloads. The new algorithm and default configuration values are expected to improve performance and concurrency for most users. Advanced users can fine-tune their I/O responsiveness through several configuration options.
2. InnoDB has several internal performance enhancements, including reducing contention by splitting the kernel mutex, moving flushing operations from the main thread to a separate thread, enabling multiple purge threads, and reducing contention for the buffer pool on large-memory systems.
3. You can now set the InnoDB page size for uncompressed tables to 8KB or 4KB, as an alternative to the default 16KB. This setting is controlled by the innodb_page_size configuration option. You specify the size when creating the MySQL instance. All InnoDB tablespaces within an instance share the same page size. Smaller page sizes can help to avoid redundant or inefficient I/O for certain combinations of workload and storage devices, particularly SSD devices with small block sizes.


MySQL 5.7

1. In MySQL 5.7.2, InnoDB buffer pool dump and load operations are enhanced. A new system variable, innodb_buffer_pool_dump_pct, allows you to specify the percentage of most recently used pages in each buffer pool to read out and dump. When there is other I/O activity being performed by InnoDB background tasks, InnoDB attempts to limit the number of buffer pool load operations per second using the innodb_io_capacity setting.

2. As of MySQL 5.7.4, InnoDB supports multiple page cleaner threads for flushing dirty pages from buffer pool instances. A new system variable, innodb_page_cleaners, is used to specify the number of page cleaner threads. The default value of 1 maintains the pre-MySQL 5.7.4 configuration in which there is a single page cleaner thread. This enhancement builds on work completed in MySQL 5.6, which introduced a single page cleaner thread to offload buffer pool flushing work from the InnoDB master thread.

 

You can find an exhaustive performance improvement list on:

http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

  Test limitations

This test doesn't take into consideration all the new optimizations made possible through new variables and functionalities. Its aim is simply to demonstrate the performance improvement with a non-optimized but consistent configuration. In this context, a limited set of variables available in all MySQL versions (since version 5.0) has been set up.

This test is obviously not representative of your own environment (hardware, queries, database schema, storage engine, data types, etc.). Therefore you probably won't observe the same performance behavior.

 

MySQL performance test: hardware configuration

This test has been done with sysbench 0.5 on a laptop equipped with an Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz and 16 GB of RAM. The data is stored on a Samsung SSD 840 PRO Series.

 

First step: Installation

The first step consists in installing several different MySQL versions. Thanks to mysqld_multi I've been able to run the following versions in parallel:

 

Editor/Provider   MySQL Server   Version   Port    Edition
Oracle            mysqld1        5.0.15    33001   Community Edition
Oracle            mysqld2        5.1.73    33002   Community Edition
Oracle            mysqld3        5.5.39    33003   Community Edition
Oracle            mysqld4        5.6.20    33004   Community Edition
Oracle            mysqld5        5.7.4     33005   Community Edition
Oracle            mysqld6        5.6.21    33006   Enterprise Edition
Percona           mysqld7        5.6.20    33007   N/A
MariaDB           mysqld8        10.0.15   33008   N/A

 

These servers have been set up with the same settings. However, depending on the MySQL version, the default MySQL settings differ. For instance, on MySQL 5.0.15 the default value of the global variable innodb_buffer_pool_size is 8388608 whereas on MySQL 5.1.73 the default value is 134217728. The default MySQL version settings have not been changed.
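
To see what a given instance actually runs with, this default can be checked on each server (a quick check, not part of the benchmark itself):

mysql> show global variables like 'innodb_buffer_pool_size';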

The only variables which have been set up are the following:

  • max_connections = 8000
  • table_open_cache = 8000
  • open_files_limit = 8192


max_connections: the maximum permitted number of simultaneous client connections.
table_open_cache (or table_cache): the number of open tables for all threads.
open_files_limit: the number of files that the operating system permits mysqld to open. The value of this variable at runtime is the real value permitted by the system and might be different from the value you specify at server startup.

 

The OFA (Optimal Flexible Architecture) directory structure has been used to install the MySQL Servers.

 

You can find below an example of this structure:

port           = 33001
mysqladmin     = /u00/app/mysql/product/mysql-5.0.15/bin/mysqladmin
mysqld         = /u00/app/mysql/product/mysql-5.0.15/bin/mysqld
socket         = /u00/app/mysql/admin/mysqld1/socket/mysqld1.sock
pid-file       = /u00/app/mysql/admin/mysqld1/socket/mysqld1.pid
log-error      = /u00/app/mysql/admin/mysqld1/log/mysqld1.err
datadir        = /u01/mysqldata/mysqld1
basedir        = /u00/app/mysql/product/mysql-5.0.15
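
This block looks like the per-instance group used by mysqld_multi; assuming it sits under a [mysqld1] group in the configuration file read by mysqld_multi, the instance can then be started and checked by its number (a minimal sketch):

mysqld_multi start 1
mysqld_multi report 1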


Second step: Test preparation

Once all MySQL servers are installed and running, the second step is to prepare the table containing the records against which the queries will be run. In this test I decided to create only one table, automatically named sbtest1 by sysbench. Notice that it is possible to create several tables by using the “oltp-table-count” parameter.

The number of rows in this table is specified by the “oltp-table-size” parameter. The test table contains 20'000'000 rows. The test mode is OLTP. According to the sysbench documentation, this test mode was written to benchmark real database performance.

At the prepare stage the following table is created:


mysql> desc sbtest1;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| k     | int(10) unsigned | NO   | MUL | 0       |                |
| c     | char(120)        | NO   |     |         |                |
| pad   | char(60)         | NO   |     |         |                |
+-------+------------------+------+-----+---------+----------------+


Each record contains random strings in the fields c and pad and random integers between 1 and oltp-table-size in the field k as presented in the following picture:

 sbtest.png

 

Sysbench prepare script:

sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=20000000 \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--mysql-port=33001 \
--mysql-db=sysbench \
--mysql-user=sbtest \
--mysql-password=sbtest \
--test=/home/mysql/sysbench/oltp.lua \
prepare

In order to be sure to have the same set of data on each server, a MySQL dump has been taken after the first load and imported on each server.
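
The exact dump/import commands are not shown in the post; something along these lines would do the job (socket paths follow the layout above, user/password come from the sysbench options, everything else is illustrative):

mysqldump --socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock -u sbtest -psbtest sysbench sbtest1 > /tmp/sbtest1.sql
mysql --socket=/u00/app/mysql/admin/mysqld2/socket/mysqld2.sock -u sbtest -psbtest sysbench < /tmp/sbtest1.sql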

 

Third step: Running the test

The test has been run with different numbers of threads in order to understand how the different versions, editions and forks of MySQL scale depending on the number of threads. The max-requests parameter limits the total number of requests. The OLTP test mode (oltp.lua) has been written to benchmark database servers with a realistic OLTP scenario.

 

sysbench \
--db-driver=mysql \
--test=oltp \
--num-threads=1 \
--mysql-user=sbtest \
--mysql-password=sbtest \
--mysql-db=sysbench \
--max-requests=10000 \
--oltp-test-mode=complex \
--test=/home/mysql/sysbench/oltp.lua \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--oltp-table-name=sbtest1 \
run

 

In order to avoid any side effects due to external processes and to ensure consistent results over time, the benchmark has been run twice.

 

Fourth step: Collecting results

All the results have been collected in an Excel sheet and the following graph comes directly from these results:

 MySQLPerformanceComparison_20141226-105949_1.png

Fifth step: Results analysis

1. InnoDB has been improved over time in terms of scalability and the test results tend to prove it. The performance with 64 threads is radically different depending on the MySQL version:

MySQL 5.0.15 – 1237 tps
MySQL 5.1.73 – 1818 tps
MySQL 5.5.39 – 2978 tps
MySQL 5.6.20 – 2801 tps
MySQL 5.6.21 – 2830 tps
MySQL 5.7.4 – 2725 tps
Percona 5.6.21 – 2853 tps
MariaDB 10.0.15 – 2941 tps

 

2. For applications using only one thread, the performance across MySQL versions (with default settings) is more or less equivalent (+/-10%):

MySQL 5.0.15 – 163 tps
MySQL 5.1.73 – 158 tps
MySQL 5.5.39 – 150 tps
MySQL 5.6.20 – 145 tps
MySQL 5.6.21 – 149 tps
MySQL 5.7.4 – 145 tps
Percona 5.6.21 – 145 tps
MariaDB 10.0.15 – 143 tps

 

3. For a large number of threads it is definitely worth using the thread pool plugin from Percona: during these tests an improvement factor of about 30x has been observed. Unfortunately I didn't see any performance improvement with MySQL 5.6.21 with the thread_pool plugin and the thread_pool_size parameter set to 36 (best performance with sysbench according to http://dev.mysql.com/doc/refman/5.6/en/thread-pool-tuning.html). Regarding Percona, I set the parameter thread_pool_high_prio_mode to transactions. You can find below the results with 4096 threads:

MySQL 5.0.15 – error
MySQL 5.1.73 – 3.97 tps
MySQL 5.5.39 – 9.05 tps
MySQL 5.6.20 – 9.29 tps
MySQL 5.6.21 – 9.07 tps
MySQL 5.6.21 thread pool plugin – 8.75 tps
MySQL 5.7.4 – 5.64 tps
Percona 5.6.21 – 9.83 tps
Percona 5.6.21 thread pool plugin – 295.4 tps
MariaDB 10.0.15 – 8.04 tps

It is interesting to notice that a performance degradation can occur with the thread pool plugin activated, both for MySQL and for Percona. This degradation has been observed for a number of threads between 16 and 128 for Percona and between 32 and 512 for MySQL.

 

Conclusion

These results tend to prove that the latest MySQL releases perform better than older ones, especially with several threads (64 threads in this case). The only exception is MySQL 5.7.4, which is a development release.


Applications using only one thread won't benefit from a huge performance improvement with the latest MySQL versions. However, enhancements provided in recent versions such as online DDL, faster deadlock detection, the dynamic innodb_buffer_pool_size parameter, etc. will for sure save you lots of time.


MySQL forks such as Percona Server and MariaDB perform as well as MySQL Server. In addition, I didn't observe any performance difference between MySQL Enterprise Edition and MySQL Community Edition. It is interesting to notice that the thread pool plugin provided by Percona brings a huge performance improvement with a large number of threads compared to the standard behavior.


Regarding MySQL Enterprise Edition, I haven't been able to see any performance improvement with the MySQL Thread Pool plugin activated, even with a large number of threads. This is perhaps due to a misconfiguration on my side... however, I presented these results to an Oracle MySQL specialist at the Oracle UKOUG booth and he wasn't able to find any error in my configuration.

dbi services wishes you a Merry Christmas with this SQL Server script

Thu, 2014-12-18 07:57

The end of the year is approaching and soon it will be time to celebrate Christmas with your family and friends. At dbi services, we wish you a Merry Christmas via SQL Server with the following script, to execute in your SQL Server Management Studio for example (supported only by SQL Server 2012 and SQL Server 2014). You will find a surprise in the result pane :-)

 

SET NOCOUNT ON;   IF EXISTS(SELECT * FROM tempdb.sys.objects WHERE name LIKE N'#SURPRISE%')        DROP TABLE #SURPRISE; GO   CREATE TABLE #SURPRISE ( col1 CHAR(290) ); GO   ------------------------------   INSERT #SURPRISE VALUES (REPLICATE(CHAR(111),16)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1), 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(77) + CHAR(101) + CHAR(114) + CHAR(114) + CHAR(121)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(246)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(149)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(67) + CHAR(104) + CHAR(114) + CHAR(105) + CHAR(115) + CHAR(116) + CHAR(109) + CHAR(97) + CHAR(115) ); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, 
LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, + CHAR(92)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(182)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(2) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(91)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(93)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + REPLICATE(CHAR(42), 19)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(3) + CHAR(100) + CHAR(98) + CHAR(105)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(115) + CHAR(101) + CHAR(114) + CHAR(118) + CHAR(105) + CHAR(99) + CHAR(101) + CHAR(115)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(5) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, REPLICATE(CHAR(111), 17));   ------------------------------   DECLARE @SQL VARCHAR(100);   SELECT        @SQL = 'DBCC TRACEON(3604); DBCC PAGE (''tempdb'', 1, ' + CAST(allocated_page_page_id AS VARCHAR(20)) + ', 1)' FROM sys.dm_db_database_page_allocations(2, OBJECT_ID('tempdb..#SURPRISE'), 0, NULL, DEFAULT) WHERE is_iam_page = 0 AND is_allocated = 1;   EXEC(@SQL);

 

Merry Christmas to all of you!

How to store SQL Server data files in Azure Storage

Wed, 2014-12-17 22:02

A new functionality introduced with SQL Server 2014 makes it possible to store the data files of a SQL Server database in Windows Azure Storage. In this posting, I will show how it works.

 

General.png

 

Accessing Azure Storage

The Azure Storage account is named “dbiservices”, and is composed of the “datafiles” container. This container does not contain blobs yet.

If you are not familiar with the Windows Azure Storage terms, you should read this Introduction to Microsoft Azure Storage.

In my example, I want to share access to my “datafiles” container without having to expose my account key. I therefore need to generate a Shared Access Signature.

 

Shared Access Signature Generation

For this part, I will use a Third Party tool called Azure Storage Explorer.

As soon as you have installed and launched this software, you must register your Storage Account:

 

Register-Azure-Storage-Account.png

 

You must enter your Storage account name and your Storage account key. The key has been voluntarily erased in this example.

As a best practice, I advise entering your Secondary Access Key and not your Primary Access Key. Indeed, the Secondary Access Key is commonly used as a temporary key and can be regenerated if necessary.

To generate my Shared Access Signature, I have to edit the Security of my container:

 

Edit-Security.png

 

I select the permissions and the duration linked to my Shared Access Signature:

 

Generate-Signature.png

 

I generate a Shared Access Signature valid for one week with all permissions. The generated Shared Access Signature is a URI related to the container. I voluntarily deleted part of the URI.

You need to copy the URI from “sv=” to the end.

 

Credential Creation

I need to create a new Credential in SQL Server Management Studio:

 

New-Credential.png

Create-Credential.png

 

The name of my Credential is the URI location of my container, and the Password is the Shared Access Signature previously created.
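
In T-SQL, this credential would look something like the following sketch (the account “dbiservices” and container “datafiles” come from this post; the SECRET value is a placeholder for the real SAS):

CREATE CREDENTIAL [https://dbiservices.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=...';   -- paste here the SAS copied from "sv=" to the end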

 

Creating a database with Data Files in Azure Storage

I will perform two examples: first I will create a new database directly in Azure Storage, then I will migrate an existing database to Azure Storage.

 

Hybrid database creation

I execute a script to create a database with its data files in Azure Storage:

 

Create-Database-hybrid.png
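
The script itself is only shown as a screenshot; a minimal sketch of such a statement could look like this (database and file names are hypothetical, the storage URL reuses the account and container above):

CREATE DATABASE hybriddb
ON (NAME = hybriddb_data,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_data.mdf')
LOG ON (NAME = hybriddb_log,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_log.ldf');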

 

If we refresh the “datafiles” container in Azure Storage explorer, we can see the datafiles previously created:

 

Explore-Datafiles.png


On-premise database creation

Now, I execute a script to create an on-premise database:

 

Create-database-onpremise.png

 

Then, I take the database offline:

 

Take-Offline-Database.png

 

I upload the data files in Azure Storage using Azure Storage Explorer tool:

 

Upload-Datafiles-to-Azure.png

 

Then, we need to alter the onpremisedb database to reference the data files moved to Azure Storage:

 

Alter-Database.png
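
Again the statement is only shown as a screenshot; it would be along these lines (the logical file names are hypothetical):

ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb, FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb.mdf');
ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_log, FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_log.ldf');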

 

And now, I bring the database online:

 

Bring-Online-Database.png

 

But the following error occurred:

 

Bring-Online-Database-Error.png

 

To understand the origin of the problem, let’s see the datafiles in the Azure Storage Explorer:

 

Blob-Type.png

 

Conclusion

This new feature offers some advantages such as high availability or easy migration.

But on the other hand, you cannot easily use it for an existing database, which is a serious drawback: as the last screenshot suggests, the data files uploaded with Azure Storage Explorer end up as block blobs, whereas SQL Server expects page blobs, so the database cannot be brought back online this way.

Furthermore, I do not believe that this feature will be used with on-premise SQL Server databases, due to the latency. But I think it can be used with a virtual machine running in Azure.

Journées SQL Server 2014: session videos available

Tue, 2014-12-16 02:18
The session videos from the Journées SQL Server 2014 are finally online:

> Infrastructure et AlwaysOn
> Industrialisation des audits (with Sarah Bessard)

If you have any questions, do not hesitate to contact me by email or through the comments on this blog. Enjoy watching!

Oracle lateral inline view, cursor expression and 12c implicit statement result

Mon, 2014-12-15 09:50

I'll present here 3 ways to run a query for each result of another query. Let's take an example: get all execution plans (select from dbms_xplan.display_cursor) for each of my queries (identified from v$sql). The 90's way was to run a first query that generates the second queries into a spool file, and then execute that file. Here are easier ways, some of them coming from the 12c new features lateral join and implicit statement result.
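
The full post is not reproduced in this excerpt, but the lateral inline view flavour of that idea could look like the following sketch (12c syntax; the filter on the parsing schema is purely illustrative):

select s.sql_id, p.plan_table_output
from v$sql s,
     lateral (select plan_table_output
              from table(dbms_xplan.display_cursor(s.sql_id, s.child_number))) p
where s.parsing_schema_name = 'DEMO';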

Oracle 12c: Can we disable logging for DML?

Mon, 2014-12-15 09:38

If we don't mind losing our changes, then can we disable logging for DML? This is a question I've heard a lot. OK, you don't need to recover your changes, but Oracle may want to recover the consistency of its datafiles anyway. And that's why datafile blocks changed through the buffer cache always generate redo.

But yes, in 12c you can do DML and generate only minimal redo. All DML: even updates and deletes. And this post is not about underscore parameters that allow corruption.

SQL Server 2014: release of the book "Développer et administrer pour la performance"

Sun, 2014-12-14 23:50

A quick post as the end of the year fast approaches, to announce the upcoming release (end of December 2014) of the French book SQL Server 2014: Développer et administrer pour la performance, which I had the great pleasure to co-author with Frédéric Brouard (alias SQLPro - MVP SQL Server), Nicolas Souquet (alias Elsuket - MVP SQL Server) and Christian Soutou.

This book is intended for developers as well as for beginner or experienced administrators who care about performance, and it covers a complete set of topics (administration, development, security and high availability).

Happy reading!

 

 

 

UKOUG 2014 - Middleware Day 3

Wed, 2014-12-10 15:05

Today, no more sessions on middleware, but a lot of ADF, APEX and development sessions that look really interesting! Unfortunately I can’t attend each one, but here are some I selected!

The picture that paints a thousand words: Data Visualization (ADF) Duncan Mills (Oracle)

In this session Duncan was talking about my favorite topic: data visualization. I am quite used to ADF as I developed an interface for my Capacity Planning project. It was sometimes hard to get something clean and well built, and a little bit confusing…

But those times have ended with the new Alta interface!

Instead of building yet another stack over hundreds of stacks, Oracle preferred to re-imagine and rebuild the whole interface. What a good idea. Thus Alta was born: clearer, more precise and more up to date.

It has been rewritten in a mobile-first way, as is the trend. But a new design is not sufficient: it also includes more interactions and animations. The UI is designed to be more responsive, clean and simplified. One goal is to build the UI around progressive disclosure: when you click something in a chart, the rest disappears and a more detailed view about what you clicked opens. This way the UI leads the user through the flow of the information. The user has to “search” the info along a path which builds the context of this information.

There are now a lot of transitions and effects to support this flow. For example, there is a new chart built as a multi-layered circle: when you click on a part of the circle, it drills into that part and builds another whole circle to create a new level of information.

You have over 50 graph and chart types, all with the same kind of interactions to keep coherence between charts, such as zooming, zooming out, selection, a redesigned time axis, improved data labeling and so on.

One of the new charts is called NBox. It’s a chart used for categorizing things or people: you have boxes, and you put people in them according to a filter such as the company sector. You can drag and drop easily; it makes classifying things really simple.

ADF is getting really beautiful thanks to this new UI design and these interactions; allowing a more mobile-app look and feel is a good thing, as more and more processes in the industry can be monitored directly from smartphones.

SQL injection in APEX - More Attacks (& Defences) Tim Austwick and Nathan Catlow (Recx)

This session was more about security and SQL injection than about APEX itself, but they did some demos showing that APEX can ease the exploitation of SQL injections.

Tim started by introducing his company Recx, which built a tool called ApexSec that can point out SQL injection spots in an APEX application; they said it can find about 90% of breaches. Thanks to their tool they heavily improved the security of Oracle Application Express, the Oracle tool that helps build applications quite fast.

For Tim and Nathan, a SQL injection risk appears when you don’t execute the SQL directly but first generate the SQL and then send it to the Oracle Database. In fact there are several kinds of breaches, such as dynamic SQL or query substitutions; these are the most common. It happens most of the time when SQL construction and execution are separated.

With SQL injections you can gather more data by adding columns or table joins; this is a “select” injection. You can also corrupt data; this is called an “update” injection. Then you have more serious issues when you can insert code between BEGIN and END in a PL/SQL function, as you can call other functions or procedures such as Oracle-supplied procedures, which can be harmful for your application or database.

For example, Tim did a live demo in which he inserted a call to the LDAP package and pointed it to his own LDAP server on his machine. The result? He received credentials directly in his LDAP console! Quite a serious issue, isn’t it?

Vulnerabilities can appear when you append a variable while building your query, i.e. when you concatenate it at the end, and also when you create a function returning a SQL command. You’d better use bind variables instead of substitutions.

Tim also presented another demo where he managed to push a hidden button that submitted a text area through a short JavaScript script. Thanks to a breach in the SQL behind the text area he could get the database version, and with a simple search he found a known issue in this version. Then he wrote a PL/SQL script and injected it through the text area. He managed to grant the Java permission to the schema, then he wrote a file containing a script directly on the file system. He could execute the file as sysdba as it was on the local system. The script gave the schema DBA privileges, and then he could get the passwords.

It was an amazing session.

Their advices to avoid SQL injection are the following:

- Avoid substitutions directly in SQL and prefer bind variables
- Ensure that APEX objects are protected
- Use APEX 4.2.1 or above
- Avoid dynamic SQL

Conclusion

This is the last day of UKOUG 2014 and we clearly see that the middleware trends are the Cloud, mobile applications and of course security. I can’t wait to see more sessions at the next UKOUG!

Taskset: a useful tool to set CPU affinity

Wed, 2014-12-10 05:14

Today at #ukoug_tech14 I had the chance to attend Christo Kutrovsky's session @kutrovsky (Pythian) about "Measuring Performance in Oracle Solaris & Oracle Linux". This session presented how to measure system utilization on the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning. During this session we got a very good introduction to a bunch of performance monitoring tools that can be grouped into four categories (non-exhaustive list of tools):


1. CPU:

  • top
  • vmstat
  • time
  • mpstat

2. Memory - RAM

2.1 Global

  • /proc/meminfo
  • vmstat
  • ipcs

2.2 Per process

  • pmap -x

3. Disk

  • vmstat
  • iostat
  • iotop
  • dtrace

4. Network

  • ifconfig
  • netstat
  • nicstat
  • iftop
  • iperf
  • tcpdump

Regarding this list, I would like to point out a specific command which can be useful in the context of a performance test: taskset. According to the taskset manual page: taskset is used to set or retrieve the CPU affinity of a running process given its PID, or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs.
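
Both forms mentioned by the manual page can be tried quickly from a shell; a minimal sketch (sleep is just a stand-in for a real workload):

# launch a new process bound to CPUs 0 and 1
taskset -c 0,1 sleep 60 &
# display the CPU affinity of that process
taskset -p $!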

Let's make a short test with sysbench and MySQL in order to see how taskset works. This test consists of running sysbench with 4096 threads, the first time without taskset and the second time with taskset setting the CPU affinity to CPU number 3.

 

1. Sysbench test without CPU affinity

1.1 Let's start the benchmark with sysbench

mysql@ThinkPad-T540p:/home/mysql/ [mysqld1] ./innodb/runallinnodb5.ksh

 

1.2 In order to prove that sysbench is running 4096 threads we can execute "top + C":

22110 mysql     20   0 4747720 334964   1548 S   5,3  2,1   0:06.55 sysbench --db-driver mysql --test oltp --num-threads 4096 --mysql-user sbtest --mysql-password sbtest --mysql-db sysbench .....

 

1.3 Let's now have a look on CPU usage with command "top + 1":

top - 11:47:38 up 42 min,  4 users,  load average: 5,97, 15,54, 16,48
Tasks: 238 total,   2 running, 236 sleeping,   0 stopped,   0 zombie
%Cpu0  : 25,5 us,  1,7 sy,  0,0 ni, 69,9 id,  3,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu1  : 15,1 us,  0,7 sy,  0,0 ni, 78,9 id,  5,4 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  : 21,3 us,  1,3 sy,  0,0 ni, 77,4 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 15,8 us,  1,0 sy,  0,0 ni, 83,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu4  : 18,6 us,  1,3 sy,  0,0 ni, 79,8 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu5  : 15,4 us,  1,0 sy,  0,0 ni, 83,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  : 27,8 us,  1,0 sy,  0,0 ni, 71,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu7  : 14,8 us,  0,0 sy,  0,0 ni, 85,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
KiB Mem:  16309072 total,  4207804 used, 12101268 free,   206348 buffers
KiB Swap: 16651260 total,        0 used, 16651260 free.  1628872 cached Mem

 

1.4 As we could expect, all CPUs are in use because no CPU affinity has been set:

steulet@ThinkPad-T540p:~$ sudo taskset -pc 22110

pid 21767's current affinity list: 0-7

 

2. Sysbench test with CPU affinity

2.1 Let's start by setting CPU Affinity of mysql process:

 

steulet@ThinkPad-T540p:~$ sudo taskset -pc 3 22110

pid 22110's current affinity list: 0-7
pid 22110's new affinity list: 3

 

2.2 We can now restart the benchmark:

 

mysql@ThinkPad-T540p:/home/mysql/ [mysqld1] ./innodb/runallinnodb5.ksh

 

2.3 Let's now have a look on CPU usage using command "top + 1":

Tasks: 240 total,   2 running, 238 sleeping,   0 stopped,   0 zombie
%Cpu0  :  1,7 us,  0,3 sy,  0,0 ni, 97,7 id,  0,0 wa,  0,0 hi,  0,3 si,  0,0 st
%Cpu1  :  0,3 us,  0,0 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  :  0,3 us,  0,0 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 98,0 us,  1,7 sy,  0,0 ni,  0,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu4  :  2,0 us,  0,3 sy,  0,0 ni, 97,3 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu5  :  0,7 us,  0,0 sy,  0,0 ni, 99,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  :  0,3 us,  0,7 sy,  0,0 ni, 97,7 id,  1,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu7  :  0,7 us,  0,0 sy,  0,0 ni, 95,7 id,  3,7 wa,  0,0 hi,  0,0 si,  0,0 st

 

Processor 3 owns all the MySQL load, as expected. Thanks to taskset you can also get a better understanding of the load on Hyper-Threaded CPUs. In order to better understand the problems related to monitoring Hyper-Threaded CPUs and CPU affinity, I suggest you read the following blog: http://www.pythian.com/blog/virtual-cpus-with-amazon-web-services/

UKOUG 2014 - Middleware Day 2

Tue, 2014-12-09 11:06

Today the sessions were more “high level”, so don’t expect deep information and concrete implementations.

Roadmap to Cloud Infrastructure and Service Integration with Cloud Application Foundation and SOA Suite Frances Zhao-Perez and Simone Greib (Oracle)

Here Frances was talking about CAF (Cloud Application Foundation), which regroups products like WebLogic, Coherence and so on. She introduced Oracle’s list of strategic investments for this topic:

- #1 Mobile
- Fusion Middleware and applications
- Continuous availability
- Multitenancy for Density/Utilization
- Cloud management and operations

She also talked about future features in 12cR2 such as:

- Multi Datacenters support
- Coherence federated caching
- Recoverable caching
- Full Java EE7
- Java SE8
- Available in Cloud

Frances also briefly talked about the ODA and OHS roadmaps, but only from the marketing side :)

Then Simone took the lead and made a recap of SOA key features such as:

- Operation made simple (startup acceleration, tuned profiles…)
- Developer productivity (Debugger and tester, Java database instead of full big one…)
- Mobile standards (REST, Json…)
- Cloud: SAP/JDE adapters

A new feature in the cloud is MFT (Managed File Transfer) for file-based integrations.

She also reminded us how simple it is to upgrade from SOA Suite 11g to 12c, and went on with new incoming features such as:

- Business Insight: Business Process Monitoring (Business process simplified without JDeveloper)
- Internet of Things (IoT): Events driven actions
- BAM predictive analytics & BI Integration: it could build trends using our business data. For example it could predict the market for next weeks.

Mobile enabling your enterprise with Oracle SOA Suite Simone Greib and Yogesh Sontakke (Oracle)

This session was more oriented towards the mobile part of SOA. Yogesh and Simone explained that you can support SOAP and REST on mobile devices, and they demonstrated how simple it is to build the UI and the business logic behind it by exposing it as a service.

They talked about the architecture of mobile UIs and their integration with a lot of adapters for different products. They took “Kiloutou”, in France, as an example of a mobile application user, as they use an application to manage their stock, orders and services.

They also made a real live demo of how to use JSON or XML to manage events and communication between elements or services.

Maximum Availability in the Cloud: Oracle WebLogic Server and Oracle Coherence Frances Zhao-Perez (Oracle)

This session was heavily oriented towards MAA (Maximum Availability Architecture) and Frances strongly underlined that Oracle is investing in maximum availability.

The goals of MAA are the following:

- Prevent business interruption
- Prevent data loss
- Deliver adequate response time
- Cost: Reduce deployments, managements and support costs
- Risk: Consistently achieve required service level

Here are the High Availability requirements for Oracle:

- Outage protection
- Recovery time
- Testing frequency
- Typical data loss
- Complexity in deployments
- ROI (Return on Investment)

Frances talked about Multi-data MAA solutions such as stretch cluster/domains, cache safety, Tx Logs in database, database Global Data Services, federated caching, recoverable caching and storage replication.

She briefly introduced Oracle Site Guard, which provides recovery automation, and talked about next version features.

12.1.3:

- No TLog option - Phase 1 (other phases will be implemented in later releases)
- XA Transactions recovery across site

12.2.1 (will be a huge update next year):

- JTA HA
- Cross site Txn recovery
- Density for GridLink deployments
- Application continuity (Optimize connection harvesting on down events)

She finished with Coherence caching recovery, which allows recovering data directly from the cache.

SQL*Plus COPY Command is back as BRIDGE

Mon, 2014-12-08 15:38

Did you ever use the COPY command in SQL*Plus? It's very old, and the documentation says:
The COPY command is not being enhanced to handle datatypes or features introduced with, or after Oracle8i. The COPY command is likely to be deprecated in a future release.

Deprecated? But it is back, with a new name, in the new SQL Developer based SQL*Plus (currently called sdsql in beta)

UKOUG 2014: News on Oracle WebLogic, Fusion Middleware, SOA Suite

Mon, 2014-12-08 11:58

Today I went to UKOUG 2014 in Liverpool. In this blog, you'll find the sessions on Oracle WebLogic, Fusion Middleware, SOA Suite and EDG I attended, with a little summary of each.

Administration of SOA Suite - A few tips Jon Petter Hjulstad (SYSCO Middleware)

Jon Petter's general advice was to make the monitoring stronger. You have to be pro-active in your administration of SOA Suite. You have several tools for monitoring SOA such as FMW Control, Enterprise Manager 12c or even JMC. You may want to check CPU, memory usage and disk space as usual, but don’t forget to look at database growth and logs.

His advice is to look at patches as soon as they are released, as they will fix leaks and security issues but also tune up the whole system and its components.

Make heavy use of RDA support: it can bring a lot of information for creating Service Requests and it will greatly reduce the “ping pong” between your organization and the support team.

Fusion Middleware 12c Installation and Administration: First Experiences Peter de Vall and Nicolay Moot (Transfer Solutions)

Both are from Transfer Solutions, a company from the Netherlands. They talked about the differences between installing and managing Fusion Middleware 11g and 12c. The benefit of 12c is that it is clearly more integrated than the 11g version. Moreover, for developers it comes packaged in an easy-to-use quick-install version, so developers don’t have to carry a heavy installation architecture.

Another difference is the home paths. Where in 11g you had the Middleware Home, Oracle Common Home, Oracle Product Home and RCU Home, in 12c you only have the Oracle Home.

Same thing for the configuration part. In 11g you had Domain Homes, Instance Homes and the Node Manager Home; here you have only Domain Homes, as you have one node per domain.

Upgrading is really easy to handle as 12c provides some scripts for the upgrade. It manages the upgrade of domains and configuration in less than an hour.

It upgrades schemas, configurations and domains: you just have to provide the old (11g) domain path and then the new (12c) domain path, which can be the same, as demonstrated in their demo! And there are no redeployments or application adaptations to make; developers don’t even have to participate.

A new feature in FMW 12c is the RESTful management interface. It provides URLs that can be monitored by an external tool, as it returns JSON documents containing metrics such as memory usage and JVM utilization. Really interesting.

Best practices to manage and deliver oracle weblogic and fusion middleware to your organization James Kao (Oracle)

James is a really good speaker, managing to keep you intensely focused on his speech. He was talking about how to manage a huge infrastructure with lots of domains: how many admin consoles you have to manage, how many configurations you have to perform before your whole environment is set up, when you have for example DEV/TEST/VAL and production environments. It could become hell out there, as it is time consuming and hard to automate when upgrading or patching.

James’s answer is “Cloud”.

The goal is to make it available through a single point of access and make it globally administrable in one place.

“Use Cloud Control as a single administration console”, he said, as you can manage your configurations and actions in a single place, for specific domains or all of them at the same time. It provides more “templatized” administration through frameworks and it standardizes resources.

Here‘s his best practices:

#1 - Cloud Control as a single admin console
As said before, administration and configuration are integrated into Cloud Control. A system allows you to enter your credentials once. When updating a configuration you lock it, hence nobody can edit it while you are working on it. This can prevent issues when an error is reported and almost 20 administrators are trying to access the system to debug it!

Cloud Control can record every action you do when editing or administrating. That means you can verify what you did for audit purposes or simply transform it into a WLST script to make it repeatable.

Cloud Control centralizes SOA tracing and stores metrics for history.

#2 - Automate tasks across domains
You can automate tasks by creating WLST scripts thanks to these recordings. For example you can start recording your session, edit some values and stop the recording. You create a script from it and execute it on multiple domains!

#3 - Automate responses to issues via corrective actions
And best of all, the scripts you wrote can be triggered by events from the system. You can create incident rules and specify the actions that would be triggered, across all domains.

Enterprise Deployments: The real world of best practices Simon Haslam (Veriton) and Jacco Landlust (Oracle)

Simon and Jacco were there to talk about EDG: the Enterprise Deployment Guides. These guides are a result of the historical MAA: Maximum Availability Architecture. So, an EDG is a kind of guide on how to build your architecture to make it reusable and efficient, but there is a difference between “paper” and reality.

EDG benefits:

- You don’t have to know every product by heart
- It’s based on Oracle experience
- It’s familiar to other Admins using EDG (a kind of OFA)
- It’s recognized by Oracle Support!

First EDG covers the following topics:

- Topologies and segregations
- Naming conventions
- It only focuses on single site high availability with load balancers
- Database HA (in RAC)

EDG is a layered step by step builder recipe.

- Layering each stage
- Assumes manual installation (no scripts)

The real-world problems with EDG:

- It doesn’t include security considerations
- It doesn’t include patching considerations
- It doesn’t include dev/test considerations (you have to build them by yourself)
- It doesn’t cover Disaster Recovery

FIO (Flexible I/O) - a benchmark tool for any operating system

Mon, 2014-12-08 04:55

I have just attended an interesting session held by Martin Nash (@mpnsh) at UKOUG 14 - Liverpool: "The least an Oracle DBA Should Know about Linux Administration". During this session I had the opportunity to discover some interesting commands and tools such as FIO (Flexible I/O). FIO is a workload generator that can be used both for benchmarking and for stress/hardware verification.

FIO has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format.

This tool has the huge advantage of being available for almost all kinds of operating systems (POSIX, Linux, BSD, Solaris, HP-UX, AIX, OS X, Android, Windows). If you want to use this tool in the context of an Oracle database, I invite you to have a look at the following blog from Yann Neuhaus: Simulating database-like I/O activity with Flexible I/O

 

In order to install it on Ubuntu, simply use the following command:


steulet@ThinkPad-T540p:~$ sudo apt-get install fio

 

After having installed fio you can run your first test. This first test will run 2 gigabytes of I/O (read/write) in the directory /u01/fio.


steulet@ThinkPad-T540p:~$ mkdir /u01/fio

 

Once the directory has been created, we can set up the job file as described below. However, it is perfectly possible to run the same test from the command line without a job file (fio --name=global --ioengine=posixaio --rw=readwrite --size=2g --directory=/u01/fio --threads=1 --name=myReadWriteTest-Thread1):

 

[global]
ioengine=posixaio
rw=readwrite
size=2g
directory=/u01/fio
threads=1

[myReadWriteTest-Thread1]

 

Now you can simply run your test with the command below:


steulet@ThinkPad-T540p:~$ fio testfio.fio

 

The output will look like the following:

 

myReadWriteTest-Tread1: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.1.3
Starting 1 thread
Jobs: 1 (f=1): [M] [100.0% done] [112.9MB/113.1MB/0KB /s] [28.9K/29.2K/0 iops] [eta 00m:00s]
myReadWriteTest-Tread1: (groupid=0, jobs=1): err= 0: pid=7823: Mon Dec  8 12:45:27 2014
  read : io=1024.7MB, bw=98326KB/s, iops=24581, runt= 10671msec
    slat (usec): min=0, max=72, avg= 1.90, stdev= 0.53
    clat (usec): min=0, max=2314, avg=20.25, stdev=107.40
     lat (usec): min=5, max=2316, avg=22.16, stdev=107.41
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[    6], 10.00th=[    7], 20.00th=[    7],
     | 30.00th=[    7], 40.00th=[    7], 50.00th=[    7], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[    8], 90.00th=[    8], 95.00th=[   10],
     | 99.00th=[  668], 99.50th=[ 1096], 99.90th=[ 1208], 99.95th=[ 1208],
     | 99.99th=[ 1256]
    bw (KB  /s): min=    2, max=124056, per=100.00%, avg=108792.37, stdev=26496.59
  write: io=1023.4MB, bw=98202KB/s, iops=24550, runt= 10671msec
    slat (usec): min=1, max=24, avg= 2.08, stdev= 0.51
    clat (usec): min=0, max=945, avg= 9.71, stdev=24.52
     lat (usec): min=5, max=947, avg=11.79, stdev=24.54
    clat percentiles (usec):
     |  1.00th=[    5],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
     | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[    9], 80.00th=[    9], 90.00th=[   10], 95.00th=[   11],
     | 99.00th=[   15], 99.50th=[   20], 99.90th=[  612], 99.95th=[  628],
     | 99.99th=[  652]
    bw (KB  /s): min=108392, max=123536, per=100.00%, avg=114596.33, stdev=3108.03
    lat (usec) : 2=0.01%, 4=0.01%, 10=91.43%, 20=6.93%, 50=0.71%
    lat (usec) : 100=0.13%, 250=0.01%, 500=0.01%, 750=0.47%, 1000=0.01%
    lat (msec) : 2=0.31%, 4=0.01%
  cpu          : usr=10.46%, sys=21.17%, ctx=527343, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=262309/w=261979/d=0, short=r=0/w=0/d=0Run status group 0 (all jobs):
   READ: io=1024.7MB, aggrb=98325KB/s, minb=98325KB/s, maxb=98325KB/s, mint=10671msec, maxt=10671msec
  WRITE: io=1023.4MB, aggrb=98202KB/s, minb=98202KB/s, maxb=98202KB/s, mint=10671msec, maxt=10671msecDisk stats (read/write):
  sda: ios=6581/67944, merge=0/67, ticks=4908/196508, in_queue=201408, util=56.49%

You will find some really good examples and a detailed list of parameters on the following website: http://www.bluestop.org/fio/HOWTO.txt

This tool is really powerful and has the huge advantage of being available for more or less any operating system, which allows you to make consistent comparisons across different kinds of architecture.


SQL Server tips: how to list orphaned logins

Thu, 2014-12-04 21:56

I read a lot about orphaned database users in SQL Server, but I have almost never read about orphaned logins. Many of my customers migrate or remove databases in SQL Server. They forget - not every time, but often - to remove the logins and jobs associated with these databases. I have created a script - without any cursors, YES, it is possible - that finds all logins which are not "attached" to any database of an instance.
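
The author's script itself is not included in this excerpt; as a rough illustration of the cursor-free idea, a sketch like the following could list SQL logins whose SID is not mapped to a user in any online database (names and filters are purely illustrative, and this is not the original script):

DECLARE @sql NVARCHAR(MAX) = N'';

-- build one UNION ALL query over sys.database_principals of every online database
SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT sid FROM ' + QUOTENAME(name) + N'.sys.database_principals WHERE sid IS NOT NULL'
FROM sys.databases
WHERE state = 0;   -- ONLINE databases only

-- keep only server logins whose SID never appears in any database
SET @sql = N'SELECT sp.name AS potential_orphaned_login '
         + N'FROM sys.server_principals sp '
         + N'WHERE sp.type IN (''S'',''U'',''G'') '
         + N'AND sp.name NOT LIKE ''##%'' '
         + N'AND sp.sid NOT IN (' + @sql + N');';

EXEC (@sql);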

Oracle 12c: comparing TTS with noncdb_to_pdb

Mon, 2014-12-01 08:44

How to migrate from non-CDB to CDB? Of course, all known migration methods work. But there is also another solution: upgrade to 12c if necessary and then convert the non-CDB to a PDB. This is done with noncdb_to_pdb.sql, which converts a non-CDB dictionary to a PDB one, with metadata and object links. But do you get a clean PDB after that? I tested it and compared the result with the same database migrated by transportable tablespaces.

The test case

In 12c I could use Full Transportable Database, but here I have only one tablespace as I'm doing my comparison on an empty database with the EXAMPLE schemas.

Here is my database:

RMAN> report schema;

Report of database schema for database with db_unique_name NDB1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /u01/app/oracle/oradata/NDB1/system01.dbf
3    610      SYSAUX               NO      /u01/app/oracle/oradata/NDB1/sysaux01.dbf
4    275      UNDOTBS1             YES     /u01/app/oracle/oradata/NDB1/undotbs01.dbf
5    1243     EXAMPLE              NO      /u01/app/oracle/oradata/NDB1/example01.dbf
6    5        USERS                NO      /u01/app/oracle/oradata/NDB1/users01.dbf

It's a new database, created with dbca, all defaults, and having only the EXAMPLE tablespace. SYSTEM is 790MB and SYSAUX is 610MB. We can have a lot of small databases like that, where the system size is larger than the user data size, and this is a reason to go multitenant.

I will compare the following:

  • the migration with transportable tablespaces (into pluggable database PDB_TTS)
  • the plug and run noncdb_to_pdb (into the pluggable database PDB_PLG)

Transportable tablespace

Transportable tablespaces plug in only the non-system tablespaces, and all the dictionary entries are recreated while importing the metadata. Here it is:

SQL> alter tablespace EXAMPLE read only;
Tablespace altered.
SQL> host expdp '"/ as sysdba"' transport_tablespaces='EXAMPLE'

The log gives me the dump file (containing the metadata) and the datafiles to copy:

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/NDB1/dpdump/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  /u01/app/oracle/oradata/NDB1/example01.dbf
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at ... elapsed 00:03:55

Then, on the destination, I create an empty pluggable database:

SQL> create pluggable database PDB_TTS admin user pdbadmin identified by "oracle" file_name_convert=('/pdbseed/','/PDB_TTS/');
SQL> alter pluggable database PDB_TTS open;
SQL> alter session set container=PDB_TTS;

and import the metadata after having copied the datafile to /u03:

SQL> create or replace directory DATA_PUMP_DIR_NDB1 as '/u01/app/oracle/admin/NDB1/dpdump';
SQL> host impdp '"sys/oracle@//vm211:1666/pdb_tts as sysdba"' transport_datafiles=/u03/app/oracle/oradata/PDB_TTS/example01.dbf directory=DATA_PUMP_DIR_NDB1

which took only two minutes because I don't have a lot of objects. That's all. I have a brand new pluggable database where I've imported my tablespaces.

Here I used transportable tablespaces and had to pre-create the users. But in 12c you can do everything with Full Transportable Export.
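
For reference, a full transportable export (with the user tablespaces read only, as above) would look something like this - just a sketch, where the directory, dumpfile and logfile names are illustrative:

SQL> host expdp '"/ as sysdba"' full=y transportable=always directory=DATA_PUMP_DIR dumpfile=full_tts.dmp logfile=full_tts.log

On the import side, impdp is then run with the transport_datafiles parameter, and the users and other metadata come with the dump, so there is nothing to pre-create.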

noncdb_to_pdb.sql

The other solution is to plug in the whole database, including the SYSTEM and SYSAUX tablespaces, and then run the noncdb_to_pdb.sql script to transform the dictionary into a multitenant one. First, we generate the XML file describing the database, which is similar to the one generated when we unplug a PDB:

SQL> shutdown immediate
SQL> startup open read only;
SQL> exec dbms_pdb.describe('/tmp/NDB01.xml');

And then plug it:

SQL> CREATE PLUGGABLE DATABASE PDB_PLG USING '/tmp/NDB01.xml' COPY FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/NDB1', '/u03/app/oracle/oradata/PDB_PLG');

At that point I can open the PDB, but it will act as a non-CDB, with its own dictionary that is not linked to the root. For example, you get nothing when you query DBA_PDBS from within the PDB:

SQL> show con_id
CON_ID
------------------------------
6
SQL> select * from dba_pdbs;
no rows selected

I've put on my todo list to test what we can do in that PDB which is not yet a real PDB, before it raises a lot of ORA-600 errors.

Now you have to migrate the dictionary to a PDB one. The noncdb_to_pdb.sql script does internal updates to transform the entries in OBJ$ into metadata links.

SQL> alter session set container=PDB_PLG;
SQL> @?/rdbms/admin/noncdb_to_pdb;
SQL> alter pluggable database PDB_PLG open;

The number of updates depends on the number of dictionary objects, so it is roughly fixed for a given version. The remaining time is spent recompiling all objects, which can be done in parallel. Here, I ran it serially to see how long it takes (screenshot from Lighty):

[Screenshot: noncdb_to_pdb.sql run time, monitored with Lighty]

Comparison

My goal was to compare both methods. As I expected, the SYSTEM and SYSAUX tablespaces did not shrink when using noncdb_to_pdb, so if you want to go to multitenant to save space, the noncdb_to_pdb method is not the right one:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1_SITE1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    781      SYSTEM               YES     /u02/app/oracle/oradata/cdb1_site1/system01.dbf
3    691      SYSAUX               NO      /u02/app/oracle/oradata/cdb1_site1/sysaux01.dbf
4    870      UNDOTBS1             YES     /u02/app/oracle/oradata/cdb1_site1/undotbs01.dbf
5    260      PDB$SEED:SYSTEM      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/system01.dbf
6    5        USERS                NO      /u02/app/oracle/oradata/cdb1_site1/users01.dbf
7    570      PDB$SEED:SYSAUX      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/sysaux01.dbf
8    260      PDB1:SYSTEM          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/system01.dbf
9    580      PDB1:SYSAUX          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/sysaux01.dbf
10   10       PDB1:USERS           NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/pdb1_users01.dbf
14   270      PDB_TTS:SYSTEM       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/system01.dbf
15   590      PDB_TTS:SYSAUX       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/sysaux01.dbf
17   1243     PDB_TTS:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_TTS/example01.dbf
22   790      PDB_PLG:SYSTEM       NO      /u03/app/oracle/oradata/PDB_PLG/system01.dbf
23   680      PDB_PLG:SYSAUX       NO      /u03/app/oracle/oradata/PDB_PLG/sysaux01.dbf
24   5        PDB_PLG:USERS        NO      /u03/app/oracle/oradata/PDB_PLG/users01.dbf
25   1243     PDB_PLG:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_PLG/example01.dbf

The SYSTEM tablespace, which is supposed to contain mostly links (my user schemas don't have a lot of objects), is the same size as in the root. This is bad. Let's look at the details:

SQL> select *
  from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,bytes from cdb_segments 
  left outer join dba_pdbs using(con_id))
  pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
  in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
  order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0)) 
  desc fetch first 20 rows only;

OWNER                SEGMENT_TYPE       CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
-------------------- ------------------ ----------- ---------- ----------
SYS                  TABLE                      539         96        540
SYS                  INDEX                      187        109        195
SYS                  LOBSEGMENT                 117        105        118
SYS                  TABLE PARTITION             17          1         12
SYSTEM               INDEX                       10          1         10
SYS                  SYSTEM STATISTICS                                  8
SYSTEM               TABLE                        8          1          8
SYS                  LOBINDEX                    12          7         13
SYS                  INDEX PARTITION              9          0          6
SYSTEM               LOBSEGMENT                   5          0          5
APEX_040200          LOBSEGMENT                  80         74         80
SYSTEM               INDEX PARTITION              4                     4
SYSTEM               TABLE PARTITION              3                     3
SYS                  TABLE SUBPARTITION           2                     2
SYS                  CLUSTER                     52         50         52
SYS                  LOB PARTITION                3          1          2
SYSTEM               LOBINDEX                     2          0          2
APEX_040200          TABLE                      100         99        100
XDB                  TABLE                        7          6          7
AUDSYS               LOB PARTITION                1          0          1

20 rows selected.

Here I've compared the dictionary sizes. While the PDB_TTS table segments are below 100MB, PDB_PLG is the same size as the root. The noncdb_to_pdb has updated OBJ$ but did not reclaim the space from the other tables (see update 2).

Which tables?

SQL> select *
   from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,segment_name,bytes 
   from cdb_segments left outer join dba_pdbs using(con_id) 
   where owner='SYS' and segment_type in ('TABLE'))
   pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
   in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
   order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))
   desc fetch first 20 rows only;

OWNER             SEGMENT_TYPE       SEGMENT_NAME                   CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
----------------- ------------------ ------------------------------ ----------- ---------- ----------
SYS               TABLE              IDL_UB1$                               288          3        288
SYS               TABLE              SOURCE$                                 51          2         52
SYS               TABLE              IDL_UB2$                                32         13         32
SYS               TABLE              ARGUMENT$                               13          0         13
SYS               TABLE              IDL_CHAR$                               11          3         11

IDL_UB1$ is the table that contains all the pcode for PL/SQL. All those wrapped DBMS_ packages are there. And we don't need them in the PDB: we have a link to the root, which has exactly the same version.
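
As a side note, a quick way to see how many dictionary objects are metadata links in a PDB - and therefore have their definition stored only in the root - is to group on the SHARING column of DBA_OBJECTS (available in 12c):

SQL> select sharing,count(*) from dba_objects where owner='SYS' group by sharing;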

Conclusion

My conclusion is that I would not advise using noncdb_to_pdb. First, a script doing that much internal stuff scares me. I prefer to start this new implementation of the dictionary with a clean one.

But now that I have made this test, I have two additional reasons to avoid it. First, it's not faster - except if you have a lot of objects. Second, the main goal is to reduce the total space by having the Oracle packages stored only once, and this is clearly not achieved by noncdb_to_pdb.

However, that conclusion only holds for small databases. If you have a database with a huge number of objects and PL/SQL packages, then the overhead of keeping the dictionary objects will not be very significant. And the TTS solution will take longer because it has to import all the metadata. So there is still a case for noncdb_to_pdb. But test it before. And be sure to have a large shared pool for the recompile step.

Update 1: I forgot to mention another reason to be very careful with noncdb_to_pdb, from Bertrand Drouvot, in his post about the huge negative impact of optimizer_adaptive_features on it.

Update 2: Following a comment on the OTN forum, I changed the sentence about deleted rows because it was wrong. In fact, the rows are deleted when the objects are recompiled:

SQL> select name,count(*) from containers(IDL_UB1$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                            53298
PDB1                                 6457
PDB_PLG                              6432
PDB_TTS                              6354

SQL> select name,count(*) from containers(SOURCE$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                           327589
PDB1                                73055
PDB_PLG                             20306
PDB_TTS                             17753
The issue is only that the space is still allocated. And you can't SHRINK those objects because SYSTEM is a dictionary-managed tablespace (DMT), and anyway the large tables contain LONG columns, and finally:
SQL> alter table sys.IDL_UB1$ shrink space;
alter table sys.IDL_UB1$ shrink space
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database
Of course, the space can be reused, but do you expect to add 200MB of compiled PL/SQL in future releases?

WSFC with SQL Server: Manual or automatic failover? That is the question

Mon, 2014-12-01 00:05

During the night, you receive an alert concerning your SQL Server failover cluster or your availability groups. You panic because the message displayed is "a failover has occurred .. see the log for more details" ...

So you try to keep calm and, after connecting to your environment, you are not able to find anything... What has happened? Maybe someone triggered a failover manually and you are not aware of it. I'm sure that many of you will find this situation familiar, but the real question is: is it possible to distinguish a manual failover from an automatic failover with a Windows failover cluster?

The answer is yes, and one way to find out is to take a look at the cluster.log. In fact, there is a log entry that clearly identifies a manual failover of resources:

 

[RCM] rcm::RcmApi::MoveGroup: (, 1, 0, MoveType::Manual )

 

As a reminder, the resource control monitor [RCM] is responsible for performing actions according to the state of a resource. When you trigger a manual failover, the MoveGroup API is called with the parameter MoveType::Manual.

Let me know if you find another way of discovering a manual failover :-)

Happy failover (or not)!

Security improvements in MySQL 5.7

Sun, 2014-11-30 05:24

If you have a look at MySQL 5.7.4 or later, you will see that there are several security improvements. The list of added security features and improvements can be found on the following page: http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

There are three main improvements that are briefly described in this blog:

1. Nonempty plugin column
2. Password lifetime policy
3. mysql_install_db secured

 

Nonempty plugin column

As of MySQL 5.7.2, the server requires account rows in the mysql.user table to have a nonempty plugin column value and disables accounts with an empty value. The following error will occur when trying to connect with a user having an empty plugin column:

 

2014-11-30T10:41:04.943384Z 2 [Note] Access denied for user 'sbtest'@'localhost' (using password: YES)

 

If the user is connected when you update the plugin column, MySQL will behave as described below:

1. The user connects to the database:

 

mysql -u sbtest -p --socket=/u00/app/mysql/admin/mysqld5/socket/mysqld5.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.7.4-m14 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

2. Update the plugin column as the root user:

mysql> update mysql.user set plugin='' where user='sbtest';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0


3. As the root user, flush the privileges:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)


4. The following message will appear in the sbtest session:

2014-11-30T22:08:16.652477Z 8 [Warning] User entry 'sbtest'@'localhost' has an empty plugin value. The user will be ignored and no one can login with this user anymore.
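
Before upgrading to 5.7, it is therefore worth checking which accounts still have an empty plugin column; a simple query like the following lists them:

mysql> select user, host from mysql.user where plugin = '' or plugin is null;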



Password Lifetime policy

Since MySQL 5.7.4, database administrators can expire account passwords manually and establish a policy for automatic password expiration. How does it work?

Two new columns have been added to the mysql.user table:

 

| password_last_changed | timestamp            | YES | | NULL |
| password_lifetime     | smallint(5) unsigned | YES | | NULL |

 

These two columns allow you to see when the password was last changed and to set a password lifetime.
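
To expire a password manually and force the user to change it at the next login, you can use a statement like this one:

mysql> ALTER USER 'sbtest'@'localhost' PASSWORD EXPIRE;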

You can establish a global password policy by setting the variable default_password_lifetime in the option file. By default, this variable is set to 360, which means that all users will have to change their password roughly once per year. A value of 0 disables automatic password expiration.
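
The same setting can also be changed at runtime; for example, to require a password change every 180 days:

mysql> SET GLOBAL default_password_lifetime = 180;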

As stated in the documentation, the global password expiration policy can be overridden as desired for individual accounts using the ALTER USER statement.


Example:

mysql> ALTER USER 'sbtest'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;
Query OK, 0 rows affected (0.00 sec)


A client session operates in restricted mode if the account password has been expired. In restricted mode, operations performed in the session result in an error until the user issues a SET PASSWORD statement to establish a new account password:

 

mysql> alter user 'sbtest'@'localhost' password expire interval 1 day;

[mysqld5] mysql -u sbtest -p

mysql> show databases;
ERROR 1820 (HY000): You must SET PASSWORD before executing this statement

mysql> set password=password('sbtest');
Query OK, 0 rows affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sysbench           |
+--------------------+
2 rows in set (0.00 sec)

 

To remove the password expiration policy, simply use "expire never" as presented in the following example:

mysql>alter user 'sbtest'@'localhost' password expire never;

 

mysql_install_db secured

MySQL deployments installed using mysql_install_db are now secure by default. The following changes have been implemented as the default deployment characteristics:

The installation process creates only a single root account and no longer creates anonymous-user accounts.

Example on MySQL 5.7.4:

 

mysql> select user,password, host from mysql.user;

+-------------+-------------------------------------------+-----------+
| user        | password                                  | host      |
+-------------+-------------------------------------------+-----------+
| root        | *7D2ABFF56C15D67445082FBB4ACD2DCD26C0ED57 | localhost |
+-------------+-------------------------------------------+-----------+
1 row in set (0.00 sec)

 

Example on mysql 5.6.20:

 

mysql> select user,password, host from mysql.user;

+-------------+-------------------------------------------+----------------+
| user        | password                                  | host           |
+-------------+-------------------------------------------+----------------+
| root        | *7D2ABFF56C15D67445082FBB4ACD2DCD26C0ED57 | localhost      |
| root        |                                           | thinkpad-t540p |
| root        |                                           | 127.0.0.1      |
| root        |                                           | ::1            |
|             |                                           | localhost      |
|             |                                           | thinkpad-t540p |
+-------------+-------------------------------------------+----------------+
6 rows in set (0.01 sec)

 

After the installation you can also note that there is no "test" database anymore.

Example on MySQL 5.7.4:


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)


Example on MySQL 5.6.20:

mysql> show databases;

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

 

Conclusion

Oracle has made some significant security improvements in MySQL 5.7. These improvements will help database administrators deploy MySQL with a better security level than previous versions. Some improvements have also been made to the mysql_secure_installation script. The list of improvements and additional features can be found at the following URL: http://dev.mysql.com/doc/refman/5.7/en/mysql-secure-installation.html

When Oracle resets session statistics

Sat, 2014-11-29 15:59

During our Oracle 12c New Features workshop I had a very good question about whether the session statistics are reset or not when doing ALTER SESSION SET CONTAINER. My initial thought was that they were not reset because it's the same session (same SID and SERIAL#). But when I'm not 100% sure about something, I test it. And once again, it proves that even the instructor can learn something new when giving a workshop, thanks to the great interaction with the participants.

My test was very simple: querying the 'logon' statistics after an ALTER SESSION SET CONTAINER. And I came up with the following tweet:

Quizz: How can I be connected and get 0 logons from V$MYSTAT ? pic.twitter.com/YZPQNU8FiH

— Franck Pachot (@FranckPachot) November 26, 2014

Of course, that needs more investigation. This is about session statistics. What about session events? And the session time model?

SQL> connect / as sysdba
Connected.

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';

NAME                                     VALUE
----------------------------------- ----------
logons cumulative                            1
logons current                               1

I'm connected to the root container. I generate a bit of activity (using dbms_system.wait_for_event to fake I/O activity):

SQL> exec for i in 1..1e5 loop dbms_system.wait_for_event('db file sequential read', 0 , 0 ); end loop;
PL/SQL procedure successfully completed.

SQL> exec for i in 1..100 loop dbms_system.wait_for_event('db file sequential read', 0 , 1 ); end loop;
PL/SQL procedure successfully completed.

SQL> declare s date:=sysdate; begin loop exit when sysdate>s+60/24/60/60; end loop; end;
  2  /
PL/SQL procedure successfully completed.

and here are my session statistics:

The time model shows 60 seconds of CPU, and the 100 seconds of I/O are included in the DB time (time model values are in microseconds):

SQL> select stat_name,value from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name in ('DB time','DB CPU');

STAT_NAME                                VALUE
----------------------------------- ----------
DB time                              165084940
DB CPU                                61119000

Session events show those 100 seconds of I/O, and the longest wait took 1 second (values are in centiseconds when the column name does not include 'MICRO'):

SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                19              1407          0
db file sequential read             100100         104060605        101
SQL*Net message to client               33               123          0
SQL*Net message from client             32             13842          0

Session statistics, where time is in centiseconds, show 60 seconds of CPU and 100 + 60 seconds of DB time:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name = 'DB time' or name like 'CPU%' or name like 'SQL*Net%client';

NAME                                     VALUE
----------------------------------- ----------
CPU used when call started                6118
CPU used by this session                  6118
DB time                                  16513
user I/O wait time                       10406
SQL*Net roundtrips to/from client           33

OK, so that is consistent. And it has to be consistent, as we often compare information from those 3 sources.

Let's now change to another container with ALTER SESSION SET CONTAINER:

SQL> show con_id

CON_ID
------------------------------
1

SQL> alter session set container=PDB1;

Session altered.

and look at the same statistics now:

SQL> select stat_name,value from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name in ('DB time','DB CPU');

STAT_NAME                                VALUE
----------------------------------- ----------
DB time                              103907514
DB CPU                                    2000
SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                25              2084          0
db file sequential read             100100         104060605        101
SQL*Net message to client               42               180          0
SQL*Net message from client             41             17774          0
SQL> select name,value from v$mystat join v$statname using(statistic#) where name = 'DB time' or name like 'CPU%' or name like 'SQL*Net%client';

NAME                                     VALUE
----------------------------------- ----------
CPU used when call started                  11
CPU used by this session                    11
DB time                                      8
user I/O wait time                           0
SQL*Net roundtrips to/from client            5

All the session statistics have been reset and now have low values. And only them: the events and the time model still show cumulative values from the beginning of the session.

So this unfortunately brings inconsistency. Here:

  • session statistic DB time cannot be compared with time model
  • SQL*Net roundtrips cannot be compared to the count of 'SQL*net message to client' event
  • CPU info from session statistics cannot be compared with the time model
  • session statistic wait time cannot be compared with session event time
And this is how I can be connected and see 0 logons for my session:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';

NAME                                     VALUE
----------------------------------- ----------
logons cumulative                            0
logons current                               0

Now, as we are talking about resetting statistics, there is something else we can do. As you can see above, the session keeps track of the longest wait time for each wait event. Here, the maximum I/O time was 1 second. We can reset that if we want:

SQL> exec dbms_system.kcfrms;
PL/SQL procedure successfully completed.

We still have all the cumulative values for the session, but the high-water mark of the wait event duration has been reset:

SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                29              2263          0
db file sequential read             100100         104060605          0
SQL*Net message to client               49               209          0

Finally, I made other tests that I do not reproduce here:

If you ALTER SESSION SET CONTAINER with your current container then statistics are not reset.

If you ALTER SESSION SET CURRENT_SCHEMA the statistics are not reset.

So I'll stick with a reconnect when I want to reset all session statistics along with the wait events and the time model.

Windows Server 2012 R2: solving .NET Framework 3.5 installation problems

Thu, 2014-11-27 20:57

I faced a problem at a customer site last week when I tried to install the .NET Framework 3.5 - a prerequisite for installing SQL Server 2012 on a Windows Server 2012 R2. I opened the Server Manager and then navigated to the Manage, Add Roles and Features section:

 

[Screenshot: Server Manager - Add Roles and Features wizard]

 

I selected the .NET Framework 3.5 Features option:

 

[Screenshot: selecting the .NET Framework 3.5 Features option]

 

I specified an alternate source path:

 

[Screenshot: specifying an alternate source path]

 

... and surprise! Even though an ISO of Windows Server 2012 R2 was mapped to my D: drive, the installation failed with this strange error: "The source file could not be found..."

After some investigation, I found that this problem is quite common and that Microsoft has published a fix... which unfortunately did not work for me!

I tried the same installation in different ways: command prompt, PowerShell... but absolutely NO RESULT.

I finally decided to open a PowerShell console to check the Windows Features available on my server with the cmdlet Get-WindowsFeature:

 

[Screenshot: Get-WindowsFeature output]

 

Strangely, the status of the .NET Framework 3.5 is not showing "Available", but "Removed"!

 

[Screenshot: .NET Framework 3.5 shown as Removed]

 

So, how do I change this state from removed to available?

After some investigation and after trying some fixes provided by people who faced the same problem, I finally found the Standalone Offline Installer tool, which solved my problem by enabling the .NET Framework 3.5 (many thanks to Abbodi1406).

I downloaded this exe file and executed it on my server.

An installer screen appeared:

 

[Screenshot: Standalone Offline Installer screen]

 

After clicking on the Next button, a command prompt screen appeared which showed the completion state of the process.

[Screenshot: command prompt showing the completion state of the process]

 

As soon as the process was finished, I went back to my PowerShell console to check whether the .NET Framework 3.5 was now available, by running the Get-WindowsFeature cmdlet again:

 

[Screenshot: Get-WindowsFeature showing the .NET Framework 3.5 as Available]

 

The .NET Framework 3.5 was now available and I was able to restart the installation process from the beginning by navigating to the Server Manager, selecting the feature, and providing the alternate source path.

[Screenshot: restarting the feature installation]

I finally succeeded in installing the .NET Framework 3.5!

I hope that my blog post will help some of you to resolve this installation problem ;-)