This month we started to give our Oracle Tuning Workshop. And with a new workshop come new questions. We advise giving the optimizer the most accurate statistics we can have. That suggests that WORKLOAD statistics are better than NOWORKLOAD ones, because they gather the average number of blocks read per multiblock read rather than using default values. But then the question is: which time period do you choose to gather workload statistics, and with which interval duration?
Last week I gave a workshop about SQL Server 2014 and its new features. On the first day we worked on the new In-Memory OLTP feature and different topics such as the new internal storage, the new transaction processing behavior, and the new checkpointing process. During this day, one of the attendees asked me about memory management with the In-Memory OLTP feature. It was a very interesting question but unfortunately I didn't have the time to discuss it with him, so I decided to publish something on this topic. This subject can be extensive and time consuming, so I will only try to give a good overview of how memory management works for memory-optimized objects and how important the monitoring aspect is in this particular context.
First of all, keep in mind that memory-optimized tables are a memory-oriented feature: memory-optimized structures (indexes and data rows) reside exclusively in memory. This is by design and this point is very important. I will explain why later in this blog post.
For the moment, let's focus on the memory aspects of this new feature. In-Memory OLTP is not different from other memory consumers in SQL Server. Indeed, In-Memory OLTP objects have their own memory clerk, MEMORYCLERK_XTP. Let's have a look at the sys.dm_os_memory_clerks DMV to show information about the memory allocated to In-Memory OLTP.
select type, name, pages_kb / 1024 as size_MB
from sys.dm_os_memory_clerks
where type like '%xtp%';
In my case, we may notice that the database dbi_hk (DB_ID = 24) contains memory-optimized objects with a dedicated memory clerk. The other xtp memory clerks are dedicated to system threads (first line) and the DAC (last line), but let's focus on my user database memory clerk, which has 2336MB of page memory allocated.
In my lab environment, I have only one memory-optimized table, named bigTransactionHistory_xtp, inside the dbi_hk database. Let's have a look at the new DMV sys.dm_db_xtp_table_memory_stats to show memory information for this table:
SELECT
    object_name(object_id) AS table_name,
    memory_allocated_for_indexes_kb / 1024 AS mem_alloc_index_mb,
    memory_allocated_for_table_kb / 1024 AS mem_alloc_table_mb,
    memory_used_by_indexes_kb / 1024 AS mem_used_index_mb,
    memory_used_by_table_kb / 1024 AS mem_used_table_mb,
    (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024 AS mem_alloc_total_mb,
    (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024 AS mem_used_total_mb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = object_id('bigTransactionHistory_xtp');
go
We may expect to retrieve the same amount of allocated page memory here as in the dedicated memory clerk of the dbi_hk database. This is approximately the case. The difference probably concerns memory allocated for internal system structures. We could have a look at the related DMV sys.dm_db_xtp_memory_consumers, but I will focus on it in a future blog post.
At this point we know where to find information about the memory consumption of memory-optimized objects, but I still have one question in mind: how does the SQL Server memory manager deal with concurrent memory activity between memory-optimized tables and their disk-based counterparts? Like any other memory consumer, the In-Memory OLTP engine responds to memory pressure, but only to a limited degree, because memory consumed by data and indexes cannot be released even under memory pressure.
To deal correctly with the In-Memory OLTP engine and the other consumers, we have to turn to the resource governor (RG) side. Indeed, by default all databases are mapped to the default resource pool, regardless of whether RG is enabled. In the same way, workloads issued from both disk-based tables and memory-optimized tables will run concurrently on the default resource pool if no special configuration is performed. In this case, RG uses an internal threshold for In-Memory OLTP to avoid conflicts over pool usage. The threshold depends on the memory configured for SQL Server, and especially on the target committed memory of the SQL Server instance. You can refer to the Microsoft documentation here for more details.
So, in my case the max memory setting value is configured to 6144MB and the target committed memory is as follows:
select committed_target_kb / 1024 as committed_target_mb from sys.dm_os_sys_info;
According to the Microsoft documentation (cf. link above), the percentage available for in-memory tables will be 70%, i.e. 0.7 * 4898 = 3429MB. I can retrieve this information by using the DMVs related to the RG. You can find an original version of this script on the MSSQLTips.com website.
;with cte as
(
    select
        RP.pool_id,
        RP.Name,
        RP.min_memory_percent,
        RP.max_memory_percent,
        cast(RP.max_memory_kb / 1024. as numeric(12, 2)) as max_memory_mb,
        cast(RP.used_memory_kb / 1024. as numeric(12, 2)) as used_memory_mb,
        cast(RP.target_memory_kb / 1024. as numeric(12, 2)) as target_memory_mb,
        cast(SI.committed_target_kb / 1024. as numeric(12, 2)) as committed_target_mb
    from sys.dm_resource_governor_resource_pools RP
    cross join sys.dm_os_sys_info SI
)
select
    c.pool_id,
    c.Name,
    c.min_memory_percent,
    c.max_memory_percent,
    c.max_memory_mb,
    c.used_memory_mb,
    c.target_memory_mb,
    c.committed_target_mb,
    cast(c.committed_target_mb *
         case when c.committed_target_mb <= 8192 then 0.7
              when c.committed_target_mb <= 16384 then 0.75
              when c.committed_target_mb <= 32768 then 0.8
              when c.committed_target_mb <= 98304 then 0.85
              when c.committed_target_mb > 98304 then 0.9
         end * c.max_memory_percent / 100 as numeric(12, 2)) as [Max_for_InMemory_Objects_mb],
    cast(c.committed_target_mb *
         case when c.committed_target_mb <= 8192 then 0.7
              when c.committed_target_mb <= 16384 then 0.75
              when c.committed_target_mb <= 32768 then 0.8
              when c.committed_target_mb <= 98304 then 0.85
              when c.committed_target_mb > 98304 then 0.9
         end * c.max_memory_percent / 100 as numeric(12, 2)) - c.used_memory_mb as Free_for_InMemory_Objects_mb
FROM cte c;
go
OK, I retrieve (approximately) this value by looking at the Max_for_InMemory_Objects_mb column on the default pool row. Notice that 2008MB is already used in the default resource pool.
At this point the In-Memory OLTP and disk-based OLTP workloads run concurrently on the same resource pool and, of course, this is not a recommended situation. Indeed, we may end up in a situation where In-Memory OLTP consumes all the available memory of this pool. In that case, SQL Server will be forced to flush data pages of disk-based tables out of the buffer pool, and you know the performance impact of this process.
Go ahead and let’s create an issue you can faced with In-Memory OLTP and misconfigured environments. First we decrease the max memory setting value to 4096MB and then we load another bunch of data into bigTransactionHistory_xtp table to consume an important part of the available memory dedicated to memory-optimized objects in the default resource pool. Finally let’s have again a look at the RG memory configuration by using the previous script. We have now a good picture of changes applied after our reconfiguration:
As expected, several values have changed: the target memory, the memory available for memory-optimized tables, and the memory used in the default resource pool. The new available memory value for the resource pool is now 1605MB (3891MB – 2286MB). Now imagine a bad situation where your memory-optimized tables consume all of the available memory inside the default resource pool... the consequences are obvious (even if they depend on the context): probably a lot of memory pressure between the buffer pool consumer and the In-Memory OLTP consumer, and in the worst case a potential OOM issue, as follows:
After loading data into bigTransactionHistory_xtp, we can notice that we have consumed all the memory available for in-memory objects in the default resource pool. However, as said earlier, RG guarantees a certain amount of memory for disk-based tables.
Ok now let’s simulate a crash recovery scenario by restarting the SQL Server instance. In my case the SQL Server engine service didn’t restart correctly… ouch... What’s going on? Of course my first though was to take a look directly on the error log of my SQL Server instance. The first error message I encountered was as follows:
OK... it seems there is an issue during the recovery of the dbi_hk database. In fact, during the recovery process, one step consists of building the index structures and linking the data rows to them. You can see that this step fails with an OOM (Out Of Memory) issue.
This second part gives us interesting information about our OOM issue. First of all, in the "process / system counts" section, we may notice that SQL Server had to deal with internal memory pressure (process physical memory low = 1), so we can exclude external memory pressure. Then, in the "memory manager" section, we have two additional entries: Last OOM Factor and Page Alloc Potential. The former confirms an OOM (out of memory) condition in the memory manager. The latter shows a negative value, which indicates that the buffer pool does not have any free memory, so our assumption of internal memory pressure is correct. As a reminder, Page Alloc Potential is similar to Stolen Potential in previous versions of SQL Server.
Let’s continue and point out the memory clerks which are responsible for the memory pressure. By investigating down into the log file, I found two relevant memory clerks with a lot of pages allocated as shown above:
As expected, the first memory clerk concerns In-Memory OLTP (XTP stands for eXtreme Transaction Processing) and the second is related to the log pool manager, which is heavily used during recovery processing. Both memory clerks, at the time of the OOM issue, have a total size of 3.7GB. This does not leave much room for the remaining caches in the default resource pool. Finally, the end of the error log contains the following error messages, which confirm that SQL Server is missing memory for its default resource pool.
According to the Microsoft documentation about resolving OOM issues in In-Memory OLTP scenarios, the number of solutions is very limited. In my case, I started the SQL Server engine with the -f parameter to load a minimal configuration, and then I increased the amount of memory available to In-Memory OLTP by increasing the max server memory option on the server side. This fix avoids facing the same issue on the next restart of my SQL Server engine service.
Is it possible to definitively fix this OOM condition? The answer is yes: we have to configure a resource pool with memory limits and bind our memory-optimized database to it. This is another story and I let you check the Microsoft documentation! My intention in this blog post is only to create awareness of the importance of good memory management with the new In-Memory OLTP feature.
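For illustration, here is a minimal sketch of that resource pool approach (the pool name, the 50% memory limits and the dbi_hk binding are assumptions for this example, not values taken from my environment):

-- Create a resource pool that caps the memory usable by In-Memory OLTP objects
CREATE RESOURCE POOL pool_xtp WITH (MIN_MEMORY_PERCENT = 50, MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;
-- Bind the memory-optimized database to the pool
EXEC sys.sp_xtp_bind_db_resource_pool @database_name = N'dbi_hk', @pool_name = N'pool_xtp';
-- The binding only takes effect after the database has been taken offline and brought back online
ALTER DATABASE dbi_hk SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE dbi_hk SET ONLINE;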
Recently, I installed an Oracle 11.2.0.4 RDBMS on a server, and I wanted to install the latest Patch Set Update (PSU), 11.2.0.4.4 from October 2014. During the execution, I ran into the error "Archive not applied" and the job failed. Here is how to quickly fix this issue.
While installing the PSU 11.2.0.4.4 from October 2014 on my Oracle 11.2.0.4 database server, I experienced the following error:
The following actions have failed:
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libserver11.a/kjb.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libserver11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libpls11.a/pevmexe.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libpls11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libclient11.a/knoggcap.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libclient11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libgeneric11.a/qcd.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libgeneric11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libpls11_pic.a/pevmexe_pic.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libpls11_pic.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libxml11.a/lpxm.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libxml11.a... ''
Do you want to proceed? [y|n]
Typically, this error appears when there are permission issues in the Oracle directories. Here, the /u00/app/oracle/product/11.2.0/db_4_0/lib directory seemed to be the cause of my error. So I checked my environment. OPatch is invoked as the oracle user, which should have access to that directory:
[oracle@srvora01 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
I am connected as the "oracle" user, a member of the "oinstall" group.
[oracle@srvora01 ~]$ ls -ld /u00/app/oracle/product/11.2.0/db_4_0/lib
drwxr-xr-x 4 oracle oinstall 12288 Jan 16 13:10 /u00/app/oracle/product/11.2.0/db_4_0/lib
As you can see, my "lib" directory is owned by "oracle" with the "oinstall" group. So there are definitely no issues with permissions here :-(
I was very surprised for a few minutes! But then I discovered an issue in the path of the patch I wanted to deploy:
[oracle@srvora01 19121551]$ pwd
Do you see my mistake? I used a space character in the name of the directory containing the patch to apply. And OPatch definitely does not like space characters...
Once my mistake was discovered, I just had to rename my directory and run OPatch again:
[oracle@srvora01 ~]$ mv "/oracle/soft/11.2.0.4/psu 11.2.0.4.4/" "/oracle/soft/11.2.0.4/psu_11.2.0.4.4/"
[oracle@vmreforadg01 ~]$ cd /oracle/soft/11.2.0.4/psu_11.2.0.4.4/19791364/19121551
[oracle@vmreforadg01 19121551]$ opatch apply
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
The patch was installed successfully:
[oracle@vmreforadg01 19121551]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u00/app/oracle/product/11.2.0/db_4_0
Central Inventory : /u00/app/oraInventory
from : /u00/app/oracle/product/11.2.0/db_4_0/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.4.0
Log file location : /u00/app/oracle/product/11.2.0/db_4_0/cfgtoollogs/opatch/opatch2015-01-16_13-33-02PM_1.log
Lsinventory Output file location : /u00/app/oracle/product/11.2.0/db_4_0/cfgtoollogs/opatch/lsinv/lsinventory2015-01-16_13-33-02PM.txt
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.4.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (1) :
Patch 19121551 : applied on Fri Jan 16 13:10:20 CET 2015
Unique Patch ID: 17949166
Patch description: "Database Patch Set Update : 11.2.0.4.4 (19121551)"
Created on 6 Oct 2014, 10:07:57 hrs PST8PDT
Sub-patch 18522509; "Database Patch Set Update : 11.2.0.4.3 (18522509)"
Sub-patch 18031668; "Database Patch Set Update : 11.2.0.4.2 (18031668)"
Sub-patch 17478514; "Database Patch Set Update : 11.2.0.4.1 (17478514)"
17288409, 17205719, 17811429, 17754782, 17726838, 13364795, 17311728
17284817, 17441661, 13645875, 18199537, 16992075, 16542886, 17446237
14565184, 17071721, 17610798, 17375354, 17449815, 17397545, 19463897
18230522, 17235750, 16360112, 13866822, 17982555, 17478514, 12905058
14338435, 13944971, 16929165, 12747740, 17546973, 14054676, 17088068
18264060, 17343514, 17016369, 17042658, 14602788, 14657740, 17332800
19211724, 13951456, 16315398, 17186905, 18744139, 16850630, 17437634
19049453, 18673304, 17883081, 18641419, 17296856, 18262334, 17006183
18277454, 17232014, 16855292, 10136473, 17705023, 17865671, 18554871
19121551, 17588480, 17551709, 17344412, 17842825, 18681862, 17390160
13955826, 13609098, 18139690, 17501491, 17239687, 17752121, 17299889
17602269, 18673325, 17313525, 17242746, 19544839, 17600719, 18191164
17571306, 19466309, 17951233, 18094246, 17165204, 17011832, 17040527
16785708, 16180763, 17477958, 17174582, 17465741, 18522509, 17323222
19463893, 16875449, 16524926, 17237521, 17596908, 17811438, 17811447
18031668, 16912439, 16494615, 18061914, 17545847, 17082359, 19554106
17614134, 17341326, 17891946, 19458377, 17716305, 17752995, 16392068
19271443, 17767676, 17614227, 17040764, 17381384, 18973907, 18673342
14084247, 17389192, 17006570, 17612828, 17721717, 13853126, 18203837
17390431, 17570240, 14245531, 16043574, 16863422, 19727057, 17468141
17786518, 17037130, 17267114, 18203838, 16198143, 16956380, 17478145
14829250, 17394950, 17027426, 16268425, 18247991, 19584068, 14458214
18436307, 17265217, 13498382, 16692232, 17786278, 17227277, 16042673
16314254, 17443671, 16228604, 16837842, 17393683, 17787259, 18009564
15861775, 16399083, 18018515, 16472716, 17050888, 14010183, 17325413
16613964, 17080436, 17036973, 17761775, 16721594, 18280813, 15979965
18203835, 17297939, 16731148, 17811456, 14133975, 17385178, 17586955
16450169, 17655634, 9756271, 17892268, 17648596, 16220077, 16069901
11733603, 16285691, 17587063, 18180390, 17393915, 18096714, 17238511
17824637, 14285317, 19289642, 14764829, 18328509, 17622427, 16943711
17346671, 18996843, 14852021, 17783588, 16618694, 17672719, 17546761
In Europe we have accents and non-US7ASCII characters, so we need a special character set. I'm not talking about Unicode here, which solves all these problems. If you have a Java application, you have no problem: it's Unicode, and you can store all characters in one multi-byte character set. But for other applications on Windows, you have two possible character sets for Western Europe: WE8MSWIN1252 and WE8PC850. WE8MSWIN1252 is the one that is set by default in the registry, but is it the right one?
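As a small illustration (a hedged sketch that is not part of the original post), you can check which character set your database uses and see that the two Western Europe character sets encode the same accented character with different byte values:

-- Database character set
select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
-- The same accented character has a different code in each character set
select dump(convert('é', 'WE8MSWIN1252'), 16) as win1252,
       dump(convert('é', 'WE8PC850'), 16) as pc850
from dual;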
After giving my session about SQL Server AlwaysOn and availability groups at the last French event “Les journées SQL Server 2014”, I had several questions concerning the port conflict issues, particularly the differences that exist between FCIs and availability groups (AAGs) on this subject.
In fact, in both cases, we may have port conflicts depending on which components are installed on each cluster node. Fundamentally, FCIs and AAGs are both cluster-based features, but each of them uses the WSFC differently: SQL Server FCIs are "cluster-aware" services while AAGs use standalone instances by default (using clustered instances with AAGs is possible, but this scenario is relatively uncommon and it doesn't change the story in any way).
First of all, my thinking is based on the following question: why does having an availability group listener on the same TCP port as a SQL Server instance (but in a different process) cause a conflict, whereas having two SQL Server FCIs on the same port works fine?
Let's begin with SQL Server FCIs. When you install two SQL Server FCIs (on the same WSFC), you can configure the same listening port for both instances and it works perfectly, right? Why? The main reason is that each SQL Server FCI has its own dedicated virtual IP address and, as you know, a process can open a socket to a particular IP address on a specific port. However, two or more processes that attempt to open a socket on the same port and on the same IP address will run into a conflict. For instance, in my case, I have two SQL Server FCIs - SQLCLUST-01\SQL01 and SQLCLUST-02\SQL02 - that listen on the same TCP port number: 1490. Here is the netstat –ano command output:
Notice that each SQL Server process listens to its IP address and only to this one. We can confirm this by taking a look at each SQL Server error log.
Now let's continue with availability groups. The story is not the same, because in most scenarios we use standalone instances, and by default they listen on all available IP addresses. In my case, this time I have two standalone instances - MSSQLSERVER (default) and APP - that listen respectively on TCP ports 1433 and 1438. By looking at the netstat –ano output, we can notice that each process listens on all available IP addresses (LocalAddress = 0.0.0.0).
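The same information can also be checked from inside each instance (a hedged sketch; the DMV below simply exposes the TCP endpoints the instance is bound to):

-- Run on each instance: IP addresses and ports SQL Server is listening on
SELECT listener_id, ip_address, port, type_desc, state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL';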
We can also verify the SQL Server error log of each standalone instance (default and APP)
At this point I am sure you are beginning to understand the issue you may have with availability groups and listeners. Let's try to create a listener for an availability group with the default instances (MSSQLSERVER). My default instances on each cluster node listen on port 1433, whereas the APP instances listen on port 1438, as shown in the picture above. If I attempt to create my listener LST-DUMMY on port 1433, it will succeed because my availability group and my default instance are in the same process.
Notice that the listener LST-DUMMY listens on the same port as the default instance and both are in the same process (PID = 1416). Of course, if I try to change the TCP port number of my listener to 1438, SQL Server will raise the well-known error message with id 19486.
USE [master]
GO
ALTER AVAILABILITY GROUP [dummy] MODIFY LISTENER N'LST-DUMMY' (PORT = 1438);
GO
Msg 19486, Level 16, State 1, Line 3 The configuration changes to the availability group listener were completed, but the TCP provider of the instance of SQL Server failed to listen on the specified port [LST-DUMMY:1438]. This TCP port is already in use. Reconfigure the availability group listener, specifying an available TCP port. For information about altering an availability group listener, see the "ALTER AVAILABILITY GROUP (Transact-SQL)" topic in SQL Server Books Online.
The answer becomes obvious now. Indeed, the SQL Server instance APP listens on TCP port 1438 for all available IP addresses (including the IP address of the listener LST-DUMMY).
You don't trust me? Well, I can prove it by connecting directly to the SQL Server named instance APP using the IP address of the listener LST-DUMMY - 192.168.0.35 - and the TCP port of the named instance - 1438:
- Having several SQL Server FCIs that listen on the same port is not a problem, because each of them can open a socket on its own distinct IP address. However, you can face port conflicts if a standalone instance is also installed on one of the cluster nodes.
- Having an availability group with a listener that listens on the same TCP port as a standalone instance in the same process will not result in a TCP port conflict.
- Having an availability group with a listener that listens on the same TCP port as a standalone instance in a different process will result in a TCP port conflict. In this case each SQL Server process will attempt to open a socket on the same TCP port and the same IP address.
Hope it helps!
Good news for French speakers: the SOUG-R is more and more active.
On May 21, 2015, a performance day is organized in Lausanne.
In my previous post I reproduced an issue where some bulk inserts into a GTT were generating too much undo. But the real case I encountered was worse: the insert was reading a lot of undo, and a lot of this undo was read from disk.
This post is about the case I had where a GTT was generating too much undo. The effects were amplified by the fact that the database was in flashback logging mode (see Jonathan Lewis' answer to my question on the OTN forum about that), but that's for the next post.
Recently I had the opportunity to audit a SQL Server database hosted on a Hyper-V 2012 cluster. I noticed that the guest operating system had the Power Plan configured to High performance. This is great thing but when I talked to the system administrator to verify if the same option is turned on on the Hyper-V operating system, this was unfortunately not the case.
As a reminder, the power policy setting has no effect when set on the guest operating system in virtual environments, so we always have to verify that this option is configured correctly at the right level.
I performed a quick demonstration for my customer by using the SuperPI benchmark tool, which is pretty simple: it calculates pi to a specific number of digits using one thread, and for my purpose that is sufficient.
--> Let's start with the situation where Power Saver is enabled on the Hyper-V side and High performance is turned on on the guest side. Then let's run the SuperPI tool with 512K digits to compute:
Here is the time taken by the guest to calculate pi:
Now let's change the story by reversing the power settings: High performance on the Hyper-V side and Power Saver on the guest side. Then we run the same benchmark test:
5.688 seconds for this test against 13.375 seconds for the first test - a 57% improvement... not bad :-) But let's look at a more realistic situation. Indeed, in most configurations the power management setting is set to Balanced by default, and my customer asked me if there is a noticeable difference if we leave the default configuration. In order to justify my recommendation, we performed the same test, but this time I changed the number of digits to compute in order to simulate a more realistic OLTP transaction (short and requiring all CPU resources for a short time). The table below lists and compares both results:
Settings                       Duration (s)
Hyper-V : Load balancing       0.219
Hyper-V : High performance     0.141
We can notice a 64% CPU time improvement in the context of my customer! After that, my customer was convinced to change this setting, and I hope it is the same for you! Of course, with long-running queries that consume a lot of CPU resources over a long time, the difference may be less discernible, because the processor wake-up time is very small compared to the total worker time they consume.
Keep in mind that changing the power management state from the guest has no effect in a virtualized environment. You must take care of this setting directly on the hypervisor.
Happy virtualization !!
In a previous blog post called Backup a SQL Server database from On-Premise to Azure, I presented the different tools to backup your on-premise databases on Azure Storage. SQL Server Managed Backup to Windows Azure was one of these tools.
In my opinion, Managed Backup is a great tool. That is why I decided to dedicate an entire blog to this feature.
Understanding Managed Backup
Managed Backup is a new feature introduced in SQL Server 2014 that works with Windows Azure. This feature allows you to manage and automate SQL Server backups (from your on-premise or Azure SQL Server instance), and it is configurable by... script only (T-SQL or PowerShell)!
Microsoft recommends using Managed Backup for Windows Azure virtual machines.
Managed backup only works with user databases in Full or Bulk-logged Recovery Model, and can only perform Full and Log backups.
SQL backups support point-in-time restore and are stored according to a retention period. This setting indicates the desired lifespan of a backup stored in Azure Storage. Once the period is reached, the backup is deleted.
SQL backups are scheduled according to the transaction workload of the database.
A full database backup is scheduled when:
- The Managed backup feature is enabled for the first time
- The log growth is 1 GB or larger
- The last full database backup is older than 1 week
- The log chain is broken
A transaction log backup is scheduled when:
- No log backup history is available
- The log space is 5 MB or larger
- The last log backup is older than 2 hours
- A full database backup has been performed
Configuring Managed Backup
First, you need to activate SQL Server Agent service in order to use the feature.
In this example, I have 3 user databases as follows:
Data Files Location
Managed Backup can be enabled at the instance level or database level.
If you decide to activate the feature at the instance level, the configuration will be set for all user databases of your instance (even for databases added after the configuration).
On the other hand, you can activate the feature for specific user databases. If the feature is also configured at the instance level, it will be overridden by the configuration at the database level.
To configure the feature, you must provide a set of parameters:
- The URL of the Azure Storage
- The retention period in days
- The credential name
- The encryption algorithm
If the encryption algorithm is not set to ‘NO_ENCRYPTION’, you also need to provide these parameters:
- The encryptor type
- The encryptor name
Moreover, when you configure your Managed Backup, you need to specify whether you want to activate it.
You can perform a database backup with COPY_ONLY. To do this, you need to use the 'smart_admin.sp_backup_on_demand' stored procedure, specifying the database name.
However, this stored procedure will use the configuration of the Managed Backup at the database level. That means you must configure and enable the Managed Backup for your database.
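A minimal sketch of such an on-demand call (assuming the 'hybriddb' database used later in this post is already configured for Managed Backup):

EXEC msdb.smart_admin.sp_backup_on_demand @db_name = 'hybriddb', @type = 'Database';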
We need to create a credential in order to be able to connect to Azure Storage:
CREATE CREDENTIAL dbiservices
WITH IDENTITY = 'dbiservices',
SECRET = 'password'
Let’s configure our Managed Backup at the instance level:
EXEC msdb.smart_admin.sp_set_instance_backup
    @enable_backup = 0,
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 15,
    @credential_name = 'dbiservices',
    @encryption_algorithm = 'NO_ENCRYPTION';
If I want to display the instance configuration:
SELECT * FROM smart_admin.fn_backup_instance_config();
Here is the result:
We will override the Managed Backup configuration for ‘hybriddb’ database:
EXEC msdb.smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 0,
    @credential_name = 'dbiservices',
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 25,
    @encryption_algorithm = 'NO_ENCRYPTION';
If I want to display the database configuration of all databases of the instance:
SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm
FROM msdb.smart_admin.fn_backup_db_config(NULL);
Here is the result:
Notice that the ‘AdventureWorks2012’ database has ‘is_managed_backup_enabled’ set to ‘NULL’. Indeed, this database is not eligible for Managed Backup because its recovery model is set to Simple.
Now, I activate the Managed Backup at the instance level:
EXEC msdb.smart_admin.sp_set_instance_backup @enable_backup = 1;
Now, I activate the Managed Backup for ‘hybriddb’ database:
EXEC msdb.smart_admin.sp_set_db_backup @database_name = 'hybriddb', @enable_backup = 1;
If I explore Azure Storage, I can find my backups:
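The available backups can also be listed directly from T-SQL (a hedged sketch using the Managed Backup helper function for the 'hybriddb' database):

SELECT * FROM msdb.smart_admin.fn_available_backups('hybriddb');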
As I said in the introduction, Managed Backup is a great feature. Easily and quickly, you can configure and enable backups for your user databases.
However, it has some serious limitations... We can hope that Managed Backup will be extended to system databases. Moreover, we can also hope that Managed Backup will allow backups of user databases in the Simple recovery model.
Furthermore, this feature is only available with Azure Storage. I would like to be able to choose my storage destination; I do not understand why we cannot back up to local disks, for example.
This is the second part of the previous post about metadata links. I've shown how a sharing=metadata function becomes a sharing=none function when it is changed in the PDB - i.e. when it no longer has the same DDL and now has a different signature.
Here is another experiment doing the opposite: change the function in root and see what happens in the PDB. Again, I am playing with internals in order to understand the 'upgrade by unplug-plug' feature available in 12c multitenant (and single-tenant).
Of course, playing with internals is fun ;)
But it's not only that. We will implement CDB databases (multi-tenant, and single-tenant, as I think non-CDB will be deprecated in future versions). We will operate pluggable databases (clone, upgrade, downgrade, ...), encounter issues, etc. Before doing that on critical environments, I need to be confident about the multitenant implementation. It's new, it's undocumented, and - at first sight - it looks like a 'hack' in the way the dictionary has been working for years.
calendar            today  tomorrow  message
------------------  -----  --------  --------------
Arabic Hijrah        1436      1436
English Hijrah       1436      1436
Gregorian            2014      2015  Happy New Year
Japanese Imperial    0026      0027  Happy New Year
Persian              1393      1393
ROC Official         0103      0104  Happy New Year
Thai Buddha          2557      2558  Happy New Year
I've described Oracle 12c metadata and object link internals in a previous post. But before that, the first time I investigated them, I made a wrong assumption because I was looking at AUDIT_ACTIONS, which is not correctly implemented. That investigation came from a question on DBA-Village. And recently, Ivica Arsov (@IvicaArsov) made an interesting comment about the AUDIT_ACTIONS object link table, so I'll explain here what is special about it.

AUDIT_ACTIONS
Here is how AUDIT_ACTIONS is defined:
SQL> select object_name,object_type,sharing from dba_objects where object_name in ('DBA_AUDIT_TRAIL','AUDIT_ACTIONS') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
AUDIT_ACTIONS        SYNONYM         METADATA LINK
AUDIT_ACTIONS        TABLE           OBJECT LINK
DBA_AUDIT_TRAIL      SYNONYM         METADATA LINK
DBA_AUDIT_TRAIL      VIEW            METADATA LINK

It's a sharing=object table, so you expect the data to be common to all containers. And we will also query a view that reads that table - DBA_AUDIT_TRAIL.
Then let's query the table from CDB$ROOT and from a PDB and check from ROWID if we read the same rows:
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG7AABAAACo5AAD          3 SELECT           1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG5AABAAAA3pAAD          3 SELECT           8

The rows are not coming from the same file, but from the local SYSTEM tablespace of each container. This is proof that this OBJECT LINK table is not common at all.

DBA_AUDIT_TRAIL
Now I want to check what happens when we query through the view. I don't have the ROWID so let's update the table in the PDB so that we can distinguish rows coming from CDB$ROOT and from PDB1:
SQL> update AUDIT_ACTIONS set name='select' where action=3;
1 row updated.

SQL> select rowid,action,name from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME
------------------ ---------- -------
AAABG5AABAAAA3pAAD          3 select

SQL> select distinct dbid,action,action_name from DBA_AUDIT_TRAIL;

      DBID     ACTION ACTION_NAME
---------- ---------- ----------------------------
 314687597          3 select

Ok. I've changed one ACTION_NAME to lowercase - only in PDB1. And when I query through the view, I see the local row. This definitely proves that the implementation of AUDIT_ACTIONS is not achieving the goal of the multitenant dictionary: store common Oracle objects only in CDB$ROOT to avoid duplication and facilitate upgrades. Note that it is not a big problem anyway, as it is just a 200-row table.

DBA_CPOOL_INFO
In order to show the normal behaviour of object links, I'll do the same with DBA_CPOOL_INFO, which is a view over SYS.CPOOL$. I've described this behaviour previously by creating my own objects, but here I'll show how it is used to store the DRCP information, which is at CDB level. Here are the involved table and views:
SQL> select object_name,object_type,sharing from dba_objects where object_name in ('CPOOL$','INT$DBA_CPOOL_INFO','DBA_CPOOL_INFO') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
CPOOL$               TABLE           OBJECT LINK
DBA_CPOOL_INFO       SYNONYM         METADATA LINK
DBA_CPOOL_INFO       VIEW            METADATA LINK
INT$DBA_CPOOL_INFO   VIEW            OBJECT LINK

CPOOL$ is defined with sharing=object. An internal view INT$DBA_CPOOL_INFO is defined on it with sharing=object as well. And finally that view is exposed through DBA_CPOOL_INFO. As before, I check the ROWID of the CPOOL$ row from CDB$ROOT and PDB1:
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz5AABAAADb5AAA          4          1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          4          8
So this is the same as we have seen before: an OBJECT LINK has its data in each PDB.
But what is different here is the view sharing, which is sharing=object. Let's query that view after changing the value in PDB1:
SQL> update SYS.CPOOL$ set minsize=0;
1 row updated.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          0          8

SQL> select minsize from INT$DBA_CPOOL_INFO;

   MINSIZE
----------
         4

SQL> select minsize from DBA_CPOOL_INFO;

   MINSIZE
----------
         4

Now we have a view which will always show the CDB$ROOT rows, even when we are in a PDB container. We still have rows in the PDB containers, but they will not be used. Once again, this defeats the goal of deduplication, but this is a very small table.

AWR tables
The main advantage of the multitenant dictionary architecture is with the big tables storing data that is common to the whole CDB, such as the AWR data:
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select con_id,count(*) from containers(WRH$_SQLTEXT) group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         1       5549

SQL> alter session set container=PDB1;
Session altered.

SQL> select count(*) from WRH$_SQLTEXT;

  COUNT(*)
----------
         0

This information - stored only in CDB$ROOT - is shared with all PDBs through the OBJECT LINK view.
Ivica Arsov (@IvicaArsov) made an interesting comment about the AUDIT_ACTIONS object link table. I'll blog about it soon, but in the meantime, when checking its definition in cataudit.sql, I came upon the following:
/* SCHEMA SYNONYMS will be added in 12g */ -- insert into audit_actions values (222, 'CREATE SCHEMA SYNONYM'); -- insert into audit_actions values (224, 'DROP SCHEMA SYNONYM');
which caught my attention.
This blog post aims to make a performance comparison between the different MySQL versions/editions, as well as the different MySQL forks such as Percona Server and MariaDB. Indeed, a number of improvements have been made to the InnoDB storage engine in the latest MySQL versions. You can find below some of the performance improvements applied to InnoDB over the last years (non-exhaustive list):
1. New compact storage format which can save up to 20% of the disk space required in previous MySQL/InnoDB versions.
2. Faster recovery from a failed or aborted ALTER TABLE.
3. Faster implementation of TRUNCATE TABLE.
1. MySQL Enterprise Thread Pool, As of MySQL 5.5.16, MySQL Enterprise Edition distributions include a thread pool plugin that provides an alternative thread-handling model designed to reduce overhead and improve performance.
2. Changes to the InnoDB I/O subsystem enable more effective use of available I/O capacity. The changes also provide more control over configuration of the I/O subsystem.
1. Improvements to the algorithms for adaptive flushing make I/O operations more efficient and consistent under a variety of workloads. The new algorithm and default configuration values are expected to improve performance and concurrency for most users. Advanced users can fine-tune their I/O responsiveness through several configuration options.
2. InnoDB has several internal performance enhancements, including reducing contention by splitting the kernel mutex, moving flushing operations from the main thread to a separate thread, enabling multiple purge threads, and reducing contention for the buffer pool on large-memory systems.
3. You can now set the InnoDB page size for uncompressed tables to 8KB or 4KB, as an alternative to the default 16KB. This setting is controlled by the innodb_page_size configuration option. You specify the size when creating the MySQL instance. All InnoDB tablespaces within an instance share the same page size. Smaller page sizes can help to avoid redundant or inefficient I/O for certain combinations of workload and storage devices, particularly SSD devices with small block sizes.
1. In MySQL 5.7.2, InnoDB buffer pool dump and load operations are enhanced. A new system variable, innodb_buffer_pool_dump_pct, allows you to specify the percentage of most recently used pages in each buffer pool to read out and dump. When there is other I/O activity being performed by InnoDB background tasks, InnoDB attempts to limit the number of buffer pool load operations per second using the innodb_io_capacity setting.
2. As of MySQL 5.7.4, InnoDB supports multiple page cleaner threads for flushing dirty pages from buffer pool instances. A new system variable, innodb_page_cleaners, is used to specify the number of page cleaner threads. The default value of 1 maintains the pre-MySQL 5.7.4 configuration in which there is a single page cleaner thread. This enhancement builds on work completed in MySQL 5.6, which introduced a single page cleaner thread to offload buffer pool flushing work from the InnoDB master thread.
You can find an exhaustive performance improvement list on:
This test won't take into consideration all the new possible optimizations provided through new variables and functionalities. The aim is simply to demonstrate the performance improvement with a non-optimized but consistent configuration. In this context, a limited set of variables available in all MySQL versions (since version 5.0) has been set up.
This test is obviously not representative of your own environment (hardware, queries, database schema, storage engine, data types, etc.). Therefore you probably won't observe the same performance behavior.
MySQL performance test
Hardware configuration
This test has been done with sysbench 0.5. It has been run on a laptop equipped with an Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz processor and 16GB of RAM. The data is stored on a Samsung SSD 840 PRO Series.
First step: Installation
The first step consists of installing several different MySQL versions. Thanks to mysqld_multi, I've been able to run the following versions in parallel:
These servers have been set up with the same settings. However, depending on the MySQL version, the default MySQL settings are different. For instance, on MySQL 5.0.15 the default value for the global variable innodb_buffer_pool_size is 8388608, whereas on MySQL 5.1.73 the default value is 134217728. The default MySQL version settings have not been changed.
The only variables which have been set up are the following:
- max_connections = 8000
- open_files_limit = 8192
max_connections: The maximum permitted number of simultaneous client connections
table_open_cache (or table_cache): The number of open tables for all threads
open_files_limit: The number of files that the operating system permits mysqld to open. The value of this variable at runtime is the real value permitted by the system and might be different from the value you specify at server startup.
The OFA (Optimal Flexible Architecture) directory structure has been used to install the MySQL Servers.
You can find below an example of this structure:
port = 33001
mysqladmin = /u00/app/mysql/product/mysql-5.0.15/bin/mysqladmin
mysqld = /u00/app/mysql/product/mysql-5.0.15/bin/mysqld
socket = /u00/app/mysql/admin/mysqld1/socket/mysqld1.sock
pid-file = /u00/app/mysql/admin/mysqld1/socket/mysqld1.pid
log-error = /u00/app/mysql/admin/mysqld1/log/mysqld1.err
datadir = /u01/mysqldata/mysqld1
basedir = /u00/app/mysql/product/mysql-5.0.15
Second step: Test preparation
Once all the MySQL servers are installed and running, the second step is to prepare the table containing the records against which the queries will be performed. In this test I decided to create only one table. It is automatically named sbtest1 by sysbench. Notice that it is possible to create several tables by using the “oltp-tables-count” parameter.
The number of rows in this table is specified by the parameter “oltp-table-size”. This test table will contain 20'000'000 rows. The test mode is OLTP. According to sysbench documentation, this test mode was written to benchmark a real database performance.
At the prepare stage the following table is created:
mysql> desc sbtest1;
| Field | Type | Null | Key | Default | Extra |
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| k | int(10) unsigned | NO | MUL | 0 | |
| c | char(120) | NO | | | |
| pad | char(60) | NO | | | |
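For reference, here is a hedged reconstruction of the DDL behind this structure (not the exact statement sysbench emits):

CREATE TABLE sbtest1 (
  id  INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  k   INT(10) UNSIGNED NOT NULL DEFAULT '0',
  c   CHAR(120) NOT NULL DEFAULT '',
  pad CHAR(60) NOT NULL DEFAULT '',
  PRIMARY KEY (id),
  KEY k_1 (k)
) ENGINE=InnoDB;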
Each record contains random strings in the c and pad fields and random integers between 1 and oltp-table-size in the k field, as presented in the following picture:
Sysbench prepare script:
In order to be sure to have the same set of data on each server a MySQL dump has been done on the server after the first load. This dump has been imported on each server.
Third step: Running the test
The test has been run with a different number of threads in order to understand how the different versions/editions and forks of MySQL scale depending on the number of threads. The max-requests parameter limits the total number of requests. The OLTP test mode (oltp.lua) has been written to benchmark database server performance by providing a realistic OLTP scenario.
In order to ensure correct results, avoiding any side effects due to external process and ensuring consistent results over time, the benchmark has been run twice.
Fourth step: Collecting results
All the results have been collected in an excel sheet and the following graph directly comes from these results:
1. InnoDB has been improved over time in terms of scalability, and the test results tend to prove it. The performance with 64 threads is radically different depending on the MySQL version:
MySQL 5.0.15 – 1237 tps
MySQL 5.1.73 – 1818 tps
MySQL 5.5.39 - 2978 tps
MySQL 5.6.20 – 2801 tps
MySQL 5.6.21 – 2830 tps
MySQL 5.7.4 – 2725 tps
Percona 5.6.21 – 2853 tps
Mariadb 10.0.15 – 2941 tps
2. For applications using only one thread, the performance between MySQL versions (with default settings) is more or less equivalent (+/-10%):
MySQL 5.0.15 – 163 tps
MySQL 5.1.73 – 158 tps
MySQL 5.5.39 - 150 tps
MySQL 5.6.20 – 145 tps
MySQL 5.6.21 – 149 tps
MySQL 5.7.4 – 145 tps
Percona 5.6.21 – 145 tps
Mariadb 10.0.15 – 143 tps
3. For a large number of threads it is definitely worth using the thread pool plugin from Percona. During these tests an improvement factor of x30 has been observed. Unfortunately I didn't see any performance improvement with MySQL 5.6.21 with the thread_pool plugin and the thread_pool_size parameter set to 36 (best performance with sysbench according to http://dev.mysql.com/doc/refman/5.6/en/thread-pool-tuning.html). Regarding Percona, I set the thread_pool_high_prio_mode parameter to transactions. You can find below the results with 4096 threads:
MySQL 5.0.15 – error
MySQL 5.1.73 – 3.97 tps
MySQL 5.5.39 - 9.05 tps
MySQL 5.6.20 – 9.29 tps
MySQL 5.6.21 – 9.07 tps
MySQL 5.6.21 pool of thread plugin – 8.75
MySQL 5.7.4 – 5.64 tps
Percona 5.6.21 – 9.83 tps
Percona 5.6.21 pool of thread plugin – 295.4 tps
Mariadb 10.0.15 – 8.04 tps
It is interesting to notice that a performance degradation can occur with the thread pool plugin activated, both for MySQL and for Percona. This degradation has been observed for a number of threads between 16 and 128 for Percona and between 32 and 512 for MySQL.
These results tend to prove that the latest MySQL releases perform better than older ones, especially with several threads (64 threads in this case). The only exception is MySQL 5.7.4, which is a development release.
Applications using only one thread won't benefit from a huge performance improvement with the latest MySQL versions. However, enhancements provided in the latest versions, such as online DDL, faster deadlock detection, the dynamic innodb_buffer_pool_size parameter, etc., will for sure save you lots of time.
MySQL forks such as Percona Server and MariaDB perform like MySQL Server. In addition, I didn't observe any performance difference between MySQL Enterprise Edition and MySQL Community Edition. It is interesting to notice that the thread pool plugin provided by Percona brings a huge performance improvement with a large number of threads compared to the standard behavior.
Regarding MySQL Enterprise Edition, I haven't been able to see any performance improvement with the MySQL Thread Pool plugin activated, even with a large number of threads. This is perhaps due to a misconfiguration on my side... however, I presented these results to an Oracle MySQL specialist present at the Oracle booth at UKOUG and he wasn't able to find any error in my configuration.
The end of the year is approaching and soon it will be time to celebrate Christmas with your family and friends. At dbi services, we wish you a merry Christmas via SQL Server with the following script, to execute in your SQL Server Management Studio for example (supported only by SQL Server 2012 and SQL Server 2014). You will find a surprise in the result pane :-)
SET NOCOUNT ON; IF EXISTS(SELECT * FROM tempdb.sys.objects WHERE name LIKE N'#SURPRISE%') DROP TABLE #SURPRISE; GO CREATE TABLE #SURPRISE ( col1 CHAR(290) ); GO ------------------------------ INSERT #SURPRISE VALUES (REPLICATE(CHAR(111),16)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1), 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(77) + CHAR(101) + CHAR(114) + CHAR(114) + CHAR(121)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(246)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(149)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(67) + CHAR(104) + CHAR(114) + CHAR(105) + CHAR(115) + CHAR(116) + CHAR(109) + CHAR(97) + CHAR(115) ); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, 
CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, + CHAR(92)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(182)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(2) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(91)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(93)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + REPLICATE(CHAR(42), 19)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(3) + CHAR(100) + CHAR(98) + CHAR(105)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(115) + CHAR(101) + CHAR(114) + CHAR(118) + CHAR(105) + CHAR(99) + CHAR(101) + CHAR(115)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(5) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, REPLICATE(CHAR(111), 17)); ------------------------------ DECLARE @SQL VARCHAR(100); SELECT @SQL = 'DBCC TRACEON(3604); DBCC PAGE (''tempdb'', 1, ' + CAST(allocated_page_page_id AS VARCHAR(20)) + ', 1)' FROM sys.dm_db_database_page_allocations(2, OBJECT_ID('tempdb..#SURPRISE'), 0, NULL, DEFAULT) WHERE is_iam_page = 0 AND is_allocated = 1; EXEC(@SQL);
Merry Christmas to all of you!
A new feature, introduced with SQL Server 2014, enables storing the data files of a SQL Server database in Windows Azure Storage. In this post, I will show how it works.
Accessing Azure Storage
The Azure Storage account is named “dbiservices”, and is composed of the “datafiles” container. This container does not contain blobs yet.
If you are not familiar with the Windows Azure Storage terms, you should read this Introduction to Microsoft Azure Storage.
In my example, I want to share access to my “datafiles” container without having to expose my account key. I therefore need to generate a Shared Access Signature.
Shared Access Signature Generation
For this part, I will use a Third Party tool called Azure Storage Explorer.
As soon as you have installed and launched this software, you must register your Storage Account:
You must enter your Storage account name and your Storage account key. The key has been voluntarily erased in this example.
As a best practice, I advise entering your Secondary Access Key and not your Primary Access Key. Indeed, the Secondary Access Key is commonly used as a temporary key and can be regenerated if necessary.
To generate my Shared Access Signature, I have to edit the Security of my container:
I select the permissions and the duration linked to my Shared Access Signature:
I generate a Shared Access Signature valid for one week with all permissions. The Shared Access Signature generated is a URI related to the container. I voluntarily deleted part of the URI.
You need to copy the URI from “sv=” to the end.
I need to create a new Credential in SQL Server Management Studio:
The name of my Credential is the URI location of my container, and the Password is the Shared Access Signature previously created.
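The statement itself was shown as a screenshot in the original post; a hedged equivalent (the container URL matches my example account, and the secret is the SAS token copied in the previous step) looks like this:

CREATE CREDENTIAL [https://dbiservices.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=...';  -- paste the Shared Access Signature generated above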
Creating a database with Data Files in Azure Storage
I will perform two examples: first I will create a new database directly in Azure Storage, then I will migrate an existing database to Azure Storage.
Hybrid database creation
I execute a script to create a database with its data files in Azure Storage:
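The creation script itself appeared as a screenshot in the original post; a hedged equivalent (the database and file names are illustrative) would be:

CREATE DATABASE hybriddb
ON (NAME = hybriddb_data,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_data.mdf')
LOG ON (NAME = hybriddb_log,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_log.ldf');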
If we refresh the “datafiles” container in Azure Storage explorer, we can see the datafiles previously created:
On-premise database creation
Now, I execute a script to create an on-premise database:
Then, I take the database offline:
I upload the data files in Azure Storage using Azure Storage Explorer tool:
Then, we need to alter the onpremisedb database to reference the datafiles moved to the Azure Storage:
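Again, the original statement was a screenshot; a hedged sketch of the idea (the logical file names are assumptions) is:

ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_data, FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_data.mdf');
ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_log, FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_log.ldf');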
And now, I bring the database online:
But the following error occurred:
To understand the origin of the problem, let’s see the datafiles in the Azure Storage Explorer:
This new feature offers some advantages such as high availability or easy migration.
But on the other hand, you cannot use it on an existing database, which is a serious drawback.
Furthermore, I do not believe that this feature will be used much with on-premise SQL Server databases, due to the latency. But I think it can be used with a virtual machine running in Azure.
I'll present here 3 ways to run a query for each result of another query. Let's take an example: get all execution plans (select from dbms_xplan.display_cursor) for each of my queries (identified from v$sql). The 90's way was to run a first query that generates the second queries into a spool file, and then execute that file. Here are easier ways, some of them coming from 12c new features: lateral join and implicit statement results.
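As a hedged sketch of the lateral join variant (one of the 12c options mentioned above; the selected columns and the filter are illustrative):

select s.sql_id, p.plan_table_output
from v$sql s,
     lateral (select plan_table_output
              from table(dbms_xplan.display_cursor(s.sql_id, s.child_number, 'BASIC'))) p
where s.parsing_schema_name = 'DEMO';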