
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

SQL Server: Online index rebuild & minimally logged operations

Fri, 2015-01-30 01:37

A couple of days ago, I encountered an interesting case with a customer concerning an index rebuild operation in a data warehouse environment. Let me introduce the situation: a “usual DBA day” with an almost usual error message found in your dedicated mailbox: “The transaction log for database 'xxx' is full”. After checking the concerned database, I noticed that its transaction log had grown and filled the entire volume. At the same time, I identified the root cause of our problem: an index rebuild operation performed the night before on a big index (approximately 20 GB in size) on a fact table. On top of that, the size of the transaction log before the error was raised was 60 GB.

As you know, in a data warehouse environment the database recovery model is usually set to either SIMPLE or BULK_LOGGED to minimize the writes generated by bulk activity, and of course the concerned database meets this requirement. According to the Microsoft documentation we could expect index rebuild operations (ALTER INDEX REBUILD) to be minimally logged, regardless of the offline / online mode used to rebuild the index. So why did the transaction log grow so heavily in this case?
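As a quick sanity check before digging further, the recovery model of the concerned database can be verified with a simple query (a minimal sketch using the AdventureWorks2012 database from the demo below):

select name, recovery_model_desc
from sys.databases
where name = N'AdventureWorks2012';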

To get an answer we first have to take a look at the index rebuild tool used by my customer: the Ola Hallengren maintenance scripts, with the INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE values for the FragmentationHigh parameter. Don't worry, the Ola scripts work perfectly and the truth is out there :-) In the context of my customer, rebuilding indexes online was permitted because the edition of the concerned SQL Server instance was Enterprise, and this is precisely where we have to investigate.

Let me demonstrate with a pretty simple example. In my lab environment I have a SQL Server 2014 instance with Enterprise edition. This instance hosts the well-known AdventureWorks2012 database with a dbo.bigTransactionHistory_rs1 table (this table is derived from the original script provided by Adam Machanic).

Here is the current size of the AdventureWorks2012 database:

 

select
    name as logical_name,
    physical_name,
    size / 128 as size_mb,
    type_desc,
    cast(FILEPROPERTY(name, 'SpaceUsed') * 100. / size as decimal(5, 2)) as [space_used_%]
from sys.database_files;

 

blog_28_1_database_situation

 

and here is the size of the dbo.bigTransactionHistory_rs1 table:

 

exec sp_spaceused @objname = N'dbo.bigTransactionHistory_rs1';
go

 

blog_28_2_bigTransactionHistory_rs1_situation

 

Total used size: 1.1 GB

Because we are in the SIMPLE recovery model, I will momentarily disable the checkpoint process by using trace flag 3505, in order to have time to look at the log records inside the transaction log.

 

dbcc traceon(3505, -1);
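To double-check that the trace flag is now active globally (just a verification step, not part of the original demo):

dbcc tracestatus(3505, -1);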


Case 1: ALTER INDEX REBUILD OFFLINE


alter index pk_bigTransactionHistory on dbo.bigTransactionHistory_rs1 rebuild with (online = off, maxdop = 1);
go

 

Let's check the size of the transaction log of the AdventureWorks2012 database:

 

blog_28_3_database_situation_after_rebuild_offline

 

Case 2: ALTER INDEX REBUILD ONLINE


-- initiate a tlog truncation before rebuilding the same index online
checkpoint;

alter index pk_bigTransactionHistory on dbo.bigTransactionHistory_rs1 rebuild with (online = on, maxdop = 1);
go

 

Let's check again the size of the transaction log of the AdventureWorks2012 database:

 

blog_28_3_database_situation_after_rebuild_online

 

There is an obvious difference in transaction log usage between the two operations:

  • offline: 4096 * 0.35% = 14 MB
  • online: 4096 * 5.63% = 230 MB

Let's stay curious and take a deeper look at the records written to the transaction log in each mode, by using the undocumented function sys.fn_dblog() as follows:

 

select
    COUNT(*) as nb_records,
    SUM([Log Record Length]) / 1024 as kbytes
from sys.fn_dblog(NULL, NULL);
go

 

Offline: blog_28_4_tlog_detail_offline_mode
Online: blog_28_4_tlog_detail_online_mode

 

As expected, we notice many more log records for the online index rebuild than for the offline one (roughly 21x).

Let's continue looking at the operations performed by SQL Server during the index rebuild operation in both cases:

 

select
    Operation,
    COUNT(*) as nb_records,
    SUM([Log Record Length]) / 1024 as kbytes
from sys.fn_dblog(NULL, NULL)
group by Operation
order by kbytes desc;
go

 

Offline: blog_28_5_tlog_detail_offline_mode_2
Online: blog_28_5_tlog_detail_online_mode_2

 

The above picture is very interesting because we again see an obvious difference between the two modes. For example, if we consider the operations performed in the second case (on the right), some of them do not concern bulk activity, such as LOP_MIGRATE_LOCKS, LOP_DELETE_ROWS, LOP_DELETE_SPLITS and LOP_MODIFY_COLUMNS against an unknown allocation unit, which probably concerns the new index structure. At this point I can't confirm it (I don't show all the details of these operations here; I let you check by yourself). Furthermore, in the first case (on the left), the majority of operations only concern LOP_MODIFY_OPERATION on the PFS page (context).

Does this mean that the online mode doesn't use the minimal logging mechanism for the whole rebuild process? I found an interesting answer in this Microsoft KB which confirms my suspicion:

Online Index Rebuild is a fully logged operation on SQL Server 2008 and later versions, whereas it is minimally-logged in SQL Server 2005. The change was made to ensure data integrity and more robust restore capabilities.

However, I guess we don't have the same behavior as with the FULL recovery model here. Indeed, there is still a difference between the SIMPLE / BULK_LOGGED and FULL recovery models in terms of the amount of log records generated. Here is a picture of the transaction log size after rebuilding the big index online in FULL recovery model in my case:

 

blog_28_3_database_situation_after_rebuild_online_full_recovery

 

Ouch! 230 MB (SIMPLE / BULK_LOGGED) vs 7 GB (FULL). It is clear that using the FULL recovery model with online index rebuild operations has a huge impact on the transaction log compared to the SIMPLE / BULK_LOGGED recovery model. So the solution in my case consisted of switching to offline mode, or at least reducing the online rebuilds for the concerned index.
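As an illustration of that change, here is a hedged sketch of how the maintenance job could be adjusted, assuming Ola Hallengren's IndexOptimize procedure with its documented FragmentationMedium, FragmentationHigh and FragmentationLevel2 parameters (database name and threshold are placeholders to adapt):

EXECUTE dbo.IndexOptimize
    @Databases = 'AdventureWorks2012',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_OFFLINE',
    @FragmentationLevel2 = 30;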

Happy maintenance!

Birst: an attractive "all-in-one" Business Intelligence solution

Thu, 2015-01-29 09:02

The review of the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms has revealed a new challenger that could become one of the leaders of this market: Birst - an interesting "all-in-one" BI solution.

 

Gartner_magic_quadrant_bi_2014_20150123-152555_1.jpg

 

Birst is an American company based in Silicon Valley near San Francisco. Its SaaS (Software as a Service) BI solution mixes simplicity and power.

Just a little reminder before going forward: a complete Business Intelligence solution has two kinds of tools:

 

b2ap3_thumbnail_BI_schema.jpg


  • Back-end BI tools: these tools are used to load and transform the data before using it for reporting
  • Front-end BI tools: these tools are used by end users to do the reporting (creating reports, dashboards, drill-down reports, …)

So what are the assets of Birst’s BI Solution?


Regarding the ergonomics of the solution

All in one: all the leading BI solutions on the market use more than one application for their back-end and front-end tools. Birst is the only one to provide all the tools in a single interface.

Online product: Birst is an online solution. No installation on a device is needed. You can save your data, reports, and dashboards in your own "space" (in the example above, the name of the space is "SupremEats DEV") located in the cloud.

 

Regarding the back-end tool (ETL, semantic layer, …)

Cloud connectivity. As a cloud-based solution, Birst can load data from the cloud using special connectors. So you can very easily mix your own business data with cloud data.

Task automation. The Birst admin application (used to design the data model) uses a lot of automated tasks, especially regarding the generation of the star schema or business model:

  • Automatic generation of the joins between the different fact and dimension tables
  • Automatic generation of the time dimension (year, quarter, month, ...)
  • Automatic generation of the measures with context (e.g. sum of revenue, max revenue, min revenue, ...)

Development time reduction. Data modelling tasks are very time consuming in a Business Intelligence project. Automating such tasks can greatly reduce development time. Of course, Birst also offers the possibility to create or transform its own schema like a traditional ETL tool.

 

Regarding the front-end tool (report, dashboard, navigation)

Intuitive interface. For the end user, the simplicity of the report / dashboard creation interface is another advantage of this solution. All the creation steps are guided by an assistant. If you compare it with other products like QlikView or Webi from SAP Business Objects, the interface is easy to learn:


b2ap3_thumbnail_Qlik_BO_Birst_Comparison.jpg


Powerful reporting tool. However, it remains a high-performance product with the possibility of creating professional and complex reports or dashboards:

 

b2ap3_thumbnail_Birst_reports_examples_1.jpgb2ap3_thumbnail_Birst_reports_examples_2.jpgb2ap3_thumbnail_Birst_reports_examples_3.jpg

Multiple saving format options: dashboards and reports can be saved in different formats (PDF, XLS export, Microsoft PowerPoint presentation, ...).

Scheduling option for the end user: the end user can publish his reports using scheduling options.

b2ap3_thumbnail_Birst_scheduling_options.jpg

Conclusion

Birst could become a future leader in BI solutions. Combining both simplicity and power, Birst can appeal to a lot of mid-sized enterprises, or business units within large enterprises with small or medium-sized BI budgets.

Documentum Multiple ADTS - Ratio of rendition creations between instances (Part 2)

Thu, 2015-01-29 02:15

This is the second part of my previous blog concerning the rendition creation ratio between two ADTS servers. In this part I will talk about another way of getting this ratio. In addition, this one doesn't require enabling auditing, which is preferable in terms of database space footprint.

DMR_CONTENT

This method uses objects that already exist in the docbase and are populated each time a rendition is done - because, in fact, they are the content of the rendition itself! Fortunately, the server where the content has been created is listed in the object.

You just have to adapt the following query and execute it:

select set_client,count(*) as val from dmr_content where full_format = 'pdf' and set_client like '%chbsmv_dmrsp%' group by set_client;

ADTS_dmr_content_ratio_blur.PNG

And now you have all your ADTS servers listed with the total number of renditions they performed.
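If you only care about recent activity, the same query can be restricted to a time window. This is a hedged sketch assuming the standard DQL DATEADD and DATE(TODAY) functions; adjust the set_client filter and the window to your environment:

select set_client, count(*) as val
from dmr_content
where full_format = 'pdf'
  and set_client like '%chbsmv_dmrsp%'
  and set_time > DATEADD(day, -30, DATE(TODAY))
group by set_client;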

Flush one SQL statement to hard parse it again

Wed, 2015-01-28 22:12

If you want a statement to be hard parsed on its next execution, you can flush the shared pool, but you don't want all the cursors to be hard parsed. Here is how to flush only one statement, illustrated with a case where it can be useful.
During the performance training, here is how I introduce Adaptive Cursor Sharing and how I show the bind variable peeking problem, which is well known by everyone who was a DBA at the time of 9iR2 upgrades.

I have a customer table with very few customers born before the 30's and a lot of them born in the 70's. The optimal plan is an index access for those born in 1913 and a full table scan for those born in 1971.

I have a histogram on that column so the optimizer can choose the optimal plan, whatever the value is. But I'm a good developer and I'm using bind variables in order to avoid parsing and wasting shared pool memory for each value.

Here is my first user that executes the query for the value 1913

SQL> execute :YEAR:=1913;
PL/SQL procedure successfully completed.

SQL> select cust_gender,count(*) from CUSTOMERS where cust_year_of_birth=:YEAR group by cust_gender;

C   COUNT(*)
- ----------
M          4
F          1

Here is the execution plan:
SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last +peeked_binds'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  dpxj8c5y81bdr, child number 0
-------------------------------------
select cust_gender,count(*) from CUSTOMERS where
cust_year_of_birth=:YEAR group by cust_gender

Plan hash value: 790974867

------------------------------- ------------ --------------------------------------
| Id  | Operation               | Name       | Starts | E-Rows | A-Rows | Buffers |
------------------------------- ------------ --------------------------------------
|   0 | SELECT STATEMENT        |            |      1 |        |      2 |       7 |
|   1 |  HASH GROUP BY          |            |      1 |      2 |      2 |       7 |
|   2 |   TABLE ACCESS BY INDEX | CUSTOMERS  |      1 |      5 |      5 |       7 |
|*  3 |    INDEX RANGE SCAN     | DEMO_CUST_ |      1 |      5 |      5 |       2 |
------------------------------- ------------ --------------------------------------
And thanks to the '+peeked_binds' format option, I know that it has been optimized for 1913:
Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): 1913

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("CUST_YEAR_OF_BIRTH"=:YEAR)
I have the right plan, optimal for my value.

But I've used bind variables in order to share my cursor. Others will execute the same statement with other values. They will only soft parse and share my cursor. Look at it:

SQL> execute :YEAR:=1971;
PL/SQL procedure successfully completed.

SQL> select cust_gender,count(*) from CUSTOMERS where cust_year_of_birth=:YEAR group by cust_gender;

C   COUNT(*)
- ----------
M        613
F        312
Look at the plan, it's the same:
SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last +peeked_binds'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
SQL_ID  dpxj8c5y81bdr, child number 0
-------------------------------------
select cust_gender,count(*) from CUSTOMERS where
cust_year_of_birth=:YEAR group by cust_gender

Plan hash value: 790974867

------------------------------- --------------- --------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows | Buffers |
------------------------------- --------------- --------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |      2 |     228 |
|   1 |  HASH GROUP BY          |               |      1 |      2 |      2 |     228 |
|   2 |   TABLE ACCESS BY INDEX | CUSTOMERS     |      1 |      5 |    925 |     228 |
|*  3 |    INDEX RANGE SCAN     | DEMO_CUST_YEA |      1 |      5 |    925 |       4 |
------------------------------- --------------- --------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): 1913

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("CUST_YEAR_OF_BIRTH"=:YEAR)
The plan is optimized for 1913, estimating 5 rows (E-Rows) but now returning 925 rows (A-Rows). That may be bad. Imagine a nested loop planned for a few rows but finally running on millions of rows...

The goal of this post is not to show Adaptive Cursor Sharing, which may solve the issue once the problem has occurred. And Adaptive Cursor Sharing does not work in all contexts (see Bug 8357294: ADAPTIVE CURSOR SHARING DOESN'T WORK FOR STATIC SQL CURSORS FROM PL/SQL).

The goal is to answer a question I had during the workshop: can we flush one cursor in order to have it hard parsed again? It's a good question, and it's a good idea to avoid flushing the whole shared pool!

This is not new (see here, here, here, here, here,...). But here is the query I use to quickly flush a statement with its sql_id.

I have the following cursor in memory:

SQL> select child_number,address,hash_value,last_load_time from v$sql where sql_id='dpxj8c5y81bdr';

CHILD_NUMBER ADDRESS          HASH_VALUE LAST_LOAD_TIME
------------ ---------------- ---------- -------------------
           0 00000000862A0E08 2088807863 2015-01-29/14:56:46
and I flush it with dbms_shared_pool.purge:
SQL> exec for c in (select address,hash_value,users_executing,sql_text from v$sqlarea where sql_id='dpxj8c5y81bdr') loop sys.dbms_shared_pool.purge(c.address||','||c.hash_value,'...'); end loop;

PL/SQL procedure successfully completed.
I've 3 remarks about it:

1. If the cursor is currently running, the procedure will wait.

2. In 10g you have to set the following event for your session:

alter session set events '5614566 trace name context forever';

3. The '...' is anything you want which is not P, Q, R or T, which are used for Procedures, seQuences, tRiggers and Types. Anything else is for cursors. Don't worry, this is in the doc.

Ok, the cursor is not there anymore:

SQL> select child_number,address,hash_value,last_load_time from v$sql where sql_id='dpxj8c5y81bdr';
no rows selected
And the next execution will optimize it for its peeked bind value:
SQL> execute :YEAR:=1971;
PL/SQL procedure successfully completed.

SQL> select cust_gender,count(*) from CUSTOMERS where cust_year_of_birth=:YEAR group by cust_gender;

C   COUNT(*)
- ----------
M          4
F          1

SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last +peeked_binds'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  dpxj8c5y81bdr, child number 0
-------------------------------------
select cust_gender,count(*) from CUSTOMERS where
cust_year_of_birth=:YEAR group by cust_gender

Plan hash value: 1577413243

-----------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows | Buffers |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      2 |     233 |
|   1 |  HASH GROUP BY     |           |      1 |      2 |      2 |     233 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |    925 |      5 |     233 |
-----------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): 1971

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("CUST_YEAR_OF_BIRTH"=:YEAR)

Here is the cursor that has been re-loaded, re-parsed, and re-optimized:
SQL> select child_number,address,hash_value,last_load_time from v$sql where sql_id='dpxj8c5y81bdr';

CHILD_NUMBER ADDRESS          HASH_VALUE LAST_LOAD_TIME
------------ ---------------- ---------- -------------------
           0 00000000862A0E08 2088807863 2015-01-29/14:56:49
That's the right plan: a full table scan when I want to read a lot of rows.

Don't take it wrong. This is not a solution. It's just a quick fix when a plan has gone wrong because the first execution was done with a special value. We flush the plan and expect that the following execution is done with a regular value.
You probably have the sql_id because you have seen a long-running query with a bad plan. Here is the way to flush all its children - ready to copy/paste in case of emergency:

set serveroutput on
begin
 for c in (select address,hash_value,users_executing,sql_text from v$sqlarea where sql_id='&sql_id') 
 loop 
  dbms_output.put_line(c.users_executing||' users executing '||c.sql_text);
  sys.dbms_shared_pool.purge(c.address||','||c.hash_value,'...'); 
  dbms_output.put_line('flushed.');
 end loop;
end;
/

Stay with non-CDB or go to CDB?

Wed, 2015-01-28 10:19

This is a question that is starting to be raised quite frequently. Oracle released the multitenant architecture 1.5 years ago and now says that the previous architecture - known as non-CDB - is deprecated. What does it mean? Do we have to go to the CDB architecture even if we don't want to use/pay for multitenant?

When to gather workload system statistics?

Fri, 2015-01-23 08:40

This month we started to give our Oracle Tuning Workshop. And with a new workshop come new questions. We advise giving the optimizer the most accurate statistics we can have. That suggests that WORKLOAD statistics are better than NOWORKLOAD ones, because they gather the average number of blocks read per multiblock read rather than using default values. But then the question is: which time period do you choose to gather workload statistics, and with which interval duration?
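For reference, here is a minimal sketch of the two documented ways to gather workload system statistics with DBMS_STATS; the start/stop window and the interval value are placeholders you would choose based on your own representative workload:

-- start/stop around a representative workload window
exec dbms_stats.gather_system_stats('START');
-- ... let the representative workload run ...
exec dbms_stats.gather_system_stats('STOP');

-- or gather over a fixed interval (in minutes)
exec dbms_stats.gather_system_stats('INTERVAL', interval => 60);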

Memory management, OOM issues with SQL Server 2014 In-memory OLTP

Thu, 2015-01-22 08:47

Last week I gave a workshop about SQL Server 2014 and its new features. On the first day we worked on the new In-Memory OLTP feature and different topics such as the new internal storage, the new transaction processing behavior, and the new checkpointing process. During this day, one of the attendees asked me about memory management with the In-Memory OLTP feature. It was a very interesting question, but unfortunately I didn't have the time to discuss it with him, so I decided to publish something on this topic. This subject may be extensive and time consuming, so I will only try to give a good overview of how memory management works for memory-optimized objects and of how important the monitoring aspect is in this particular context.

First of all, keep in mind that memory-optimized tables are a memory-oriented feature. It means that memory-optimized structures (indexes and data rows) reside exclusively in memory. This is by design, and this point is very important. Let me explain why later in this blog post.

For the moment, let's focus on the memory aspects of this new feature. In-Memory OLTP is not different from other memory consumers in SQL Server. Indeed, In-Memory OLTP objects have their own memory clerk, MEMORYCLERK_XTP. Let's have a look at the sys.dm_os_memory_clerks DMV to show information concerning In-Memory OLTP allocated memory.


select
    [type],
    name,
    memory_node_id,
    page_size_in_bytes,
    pages_kb / 1024 as size_MB
from sys.dm_os_memory_clerks
where type like '%xtp%';
go

 

blog_27_1_memory_clerk


In my case we may notice that the database dbi_hk (DB_ID = 24) contains memory-optimized objects with a dedicated memory clerk. The other xtp memory clerks are dedicated to system threads (first line) and the DAC (last line), but let's focus on my user database memory clerk, which has 2336 MB of page memory allocated.

In my lab environment, I have only one memory-optimized table, named bigTransactionHistory_xtp, inside the dbi_hk database. Let's have a look at the new DMV sys.dm_db_xtp_table_memory_stats to show memory information for this table:

 

SELECT
    object_name(object_id) AS table_name,
    memory_allocated_for_indexes_kb / 1024 as mem_alloc_index_mb,
    memory_allocated_for_table_kb / 1024 as mem_alloc_table_mb,
    memory_used_by_indexes_kb / 1024 as mem_used_index_mb,
    memory_used_by_table_kb / 1024 as mem_used_table_mb,
    (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024 as mem_alloc_total_mb,
    (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024 as mem_used_total_mb
FROM sys.dm_db_xtp_table_memory_stats
where object_id = object_id('bigTransactionHistory_xtp');
go

 

 

blog_27_2_xtp_table_memory_stats


We might expect to retrieve the same amount of allocated memory pages here and in the dedicated memory clerk of the dbi_hk database. This is approximately the case. The difference we found probably concerns memory allocated for internal system structures. We could have a look at the related DMV sys.dm_db_xtp_memory_consumers, but I will focus on it in a next blog post.

At this point we know where to find information concerning the memory consumption of memory-optimized objects, but I still have one question in mind: how does the SQL Server memory manager deal with concurrent memory activities between memory-optimized tables and their disk-based counterparts? Like any other memory consumer, the In-Memory OLTP engine responds to memory pressure, but only to a limited degree, because the memory consumed by data and indexes can't be released even under memory pressure.

To deal correctly with the In-Memory OLTP engine and other consumers, we have to turn to the resource governor (RG) side. Indeed, by default all databases are mapped to the default resource pool, regardless of whether RG is enabled. In the same way, workloads issued from both disk-based tables and memory-optimized tables will run concurrently on the default resource pool if no special configuration is performed. In such a case, RG uses an internal threshold for In-Memory OLTP to avoid conflicts over pool usage. The threshold depends on the memory size configured for SQL Server, and especially on the target commit memory for the SQL Server instance. You can refer to the Microsoft documentation here for more details.

So, in my case the max memory setting value is configured to 6144MB and the target committed memory is as follows:

 

select
    committed_target_kb / 1024 as committed_target_mb
from sys.dm_os_sys_info;

 

blog_27_3_target_commit_memory

 

According to the Microsoft documentation (cf. link above), the percentage available for in-memory tables will be 70%, or 0.7 * 4898 = 3429 MB. I can retrieve this information by using the DMVs related to the RG. You can find an original version of this script on the MSSQLTips.com website.

 

;with cte as
(
    select
        RP.pool_id,
        RP.Name,
        RP.min_memory_percent,
        RP.max_memory_percent,
        cast(RP.max_memory_kb / 1024. as numeric(12, 2)) AS max_memory_mb,
        cast(RP.used_memory_kb / 1024. as numeric(12, 2)) AS used_memory_mb,
        cast(RP.target_memory_kb / 1024. as numeric(12, 2)) AS target_memory_mb,
        cast(SI.committed_target_kb / 1024. as numeric(12, 2)) AS committed_target_mb
    from sys.dm_resource_governor_resource_pools RP
    cross join sys.dm_os_sys_info SI
)
select
    c.pool_id,
    c.Name,
    c.min_memory_percent,
    c.max_memory_percent,
    c.max_memory_mb,
    c.used_memory_mb,
    c.target_memory_mb,
    c.committed_target_mb,
    CAST(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                      when c.committed_target_mb <= 16384 then 0.75
                                      when c.committed_target_mb <= 32768 then 0.8
                                      when c.committed_target_mb <= 98304 then 0.85
                                      else 0.9
                                 end * c.max_memory_percent / 100 as numeric(12, 2)) as [Max_for_InMemory_Objects_mb],
    CAST(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                      when c.committed_target_mb <= 16384 then 0.75
                                      when c.committed_target_mb <= 32768 then 0.8
                                      when c.committed_target_mb <= 98304 then 0.85
                                      else 0.9
                                 end * c.max_memory_percent / 100 as numeric(12, 2)) - c.used_memory_mb as Free_for_InMemory_Objects_mb
FROM cte c;
go

 

blog_27_4_RG_memory_available_for_xtp_tables


Ok, I retrieve (approximately) this value by looking at the Max_for_InMemory_Objects_mb column on the default pool line. Notice that we already have 2008 MB used in the default resource pool.

At this point the In-Memory OLTP and the disk-based OLTP workloads run concurrently on the same resource pool, and of course this is not a recommended situation. Indeed, we may end up in a situation where In-Memory OLTP consumes all the available memory of this pool. In such a situation, SQL Server will be forced to flush data pages from disk-based tables, and you know the performance impact of this process.

Let's go ahead and create an issue you can face with In-Memory OLTP and misconfigured environments. First we decrease the max memory setting to 4096 MB, and then we load another bunch of data into the bigTransactionHistory_xtp table to consume an important part of the memory available to memory-optimized objects in the default resource pool. Finally, let's have another look at the RG memory configuration by using the previous script. We now have a good picture of the changes applied after our reconfiguration:

 

blog_27_5_RG_memory_available_for_xtp_tables 

As expected, several values have changed: the target memory, the memory available for memory-optimized tables, and the memory used in the default resource pool. The new available memory value for the resource pool is now 1605 MB (3891 MB – 2286 MB). I let you think about a bad situation where your memory-optimized table consumes all the available memory inside the default resource pool… the consequences are obvious (even if they depend on the context): probably a lot of memory pressure between the buffer pool consumer and the In-Memory OLTP consumer, and in the worst case a potential OOM issue like the following:

 

blog_27_6_RG_default_pool_oom_condition

 

After loading data into bigTransactionHistory_xtp, we can notice that we have consumed all the memory available for In-Memory objects in the default resource pool. However, as said earlier, RG guarantees a certain amount of memory for disk-based tables.

 

blog_27_6_RG_memory_available_oom_condition


Ok, now let's simulate a crash recovery scenario by restarting the SQL Server instance. In my case the SQL Server engine service didn't restart correctly… ouch... What's going on? Of course my first thought was to take a look directly at the error log of my SQL Server instance. The first error message I encountered was as follows:

 

blog_27_7_xtp_start_instance_error_fail_page_allocation

 

Ok... it seems there is an issue during the recovery process of the dbi_hk database. In fact, during the recovery process, one step consists of building the index structures and linking the data rows to these structures. But you can see that this step fails with an OOM (Out Of Memory) issue.

 

blog_27_8_xtp_start_instance_oom_1


blog_27_8_xtp_start_instance_oom_2


In this second part, we have interesting information concerning our OOM issue. First of all, in the “process / system counts” section we may notice that SQL Server had to deal with internal memory pressure (process physical memory low = 1), so we can exclude external memory pressure. Then, in the “memory manager” section we have two additional entries: Last OOM Factor and Page Alloc Potential. The former confirms an OOM (out of memory) issue in the memory manager. The latter shows a negative value, which indicates that the buffer pool does not have any free memory, so our assumption of internal memory pressure is correct. As a reminder, Page Alloc Potential is similar to Stolen Potential in previous versions of SQL Server.

Let's continue and point out the memory clerks responsible for the memory pressure. By digging further into the log file, I found two relevant memory clerks with a lot of pages allocated, as shown below:

 

blog_27_9_xtp_clerk_status


blog_27_9_xtp_clerk_status_2


As expected, the first memory clerk concerns In-Memory OLTP (XTP as eXtreme Transaction Processing) and the second is related to the log pool manager, which is heavily used during recovery processing. Both memory clerks, at the time of the OOM issue, had a total size of 3.7 GB. This does not leave much room for the remaining caches in the default resource pool. Finally, the end of the error log contains the following error messages, which confirm that SQL Server is missing memory for its default resource pool.

 

blog_27_9_xtp_clerk_status_3


According to the Microsoft documentation about resolving OOM issues in In-Memory OLTP scenarios, the number of solutions is very limited. In my case, I started the SQL Server engine with the –f parameter to load a minimal configuration, and then I increased the amount of memory available to In-Memory OLTP by increasing the max server memory option on the server side. This fix will prevent facing the same issue on the next restart of my SQL Server engine service.
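For reference, that last change boils down to something like this (the value is from my lab; adjust it to your own sizing):

exec sp_configure 'show advanced options', 1;
reconfigure;
go
exec sp_configure 'max server memory (MB)', 6144;
reconfigure;
go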

Is it possible to definitively fix this OOM condition? The answer is yes: we have to configure a resource pool with memory limits and bind it to our memory-optimized database. This is another story and I let you check the Microsoft documentation! My intention in this blog post is only to create awareness of the importance of good memory management with the new In-Memory OLTP feature.
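As a pointer only, here is a minimal sketch of that kind of binding, assuming a pool name of my own choosing (pool_xtp) and the dbi_hk database from this post; note that the database must be taken offline/online for the binding to take effect:

create resource pool pool_xtp with (min_memory_percent = 50, max_memory_percent = 50);
alter resource governor reconfigure;
go

exec sp_xtp_bind_db_resource_pool @database_name = N'dbi_hk', @pool_name = N'pool_xtp';
go

alter database dbi_hk set offline with rollback immediate;
alter database dbi_hk set online;
go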

Happy configuration!

Oracle OPatch and "Archive not applied" error

Mon, 2015-01-19 18:10

Recently, I installed an Oracle 11.2.0.4 RDBMS on a server and wanted to install the latest Patch Set Update (PSU), 11.2.0.4.4 from October 2014. During the execution, I ran into the error "Archive not applied" and the job failed. Here is how to quickly fix this issue.

While installing the PSU 11.2.0.4.4 from October 2014 on my Oracle 11.2.0.4 database server, I experienced the following error:

 

The following actions have failed:
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libserver11.a/kjb.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libserver11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libpls11.a/pevmexe.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libpls11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libclient11.a/knoggcap.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libclient11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libgeneric11.a/qcd.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libgeneric11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libpls11_pic.a/pevmexe_pic.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libpls11_pic.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libxml11.a/lpxm.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libxml11.a... ''

 

Do you want to proceed? [y|n]
n

 

Typically, this error appears when there are permission issues in the Oracle directories. Here, the /u00/app/oracle/product/11.2.0/db_4_0/lib directory seemed to be the cause of my error. So I checked my environment: the user invoking OPatch (the oracle user) should have access to that directory:

 

[oracle@srvora01 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)

 

I am connected as "oracle" user, member of the "oinstall" group.

 

[oracle@srvora01 ~]$ ls -ld /u00/app/oracle/product/11.2.0/db_4_0/lib
drwxr-xr-x 4 oracle oinstall 12288 Jan 16 13:10 /u00/app/oracle/product/11.2.0/db_4_0/lib

 

As you can see, my "lib" directory is owned by "oracle" with the "oinstall" group. So there are definitely no permission issues here :-(

I was very surprised for a few minutes! But then I discovered an issue in the path of the patch I wanted to deploy:

 

[oracle@srvora01 19121551]$ pwd
/oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/

 

Do you see my mistake? I used a space character in the name of the directory containing the patch to apply. And OPatch definitely does not like space characters...

 

Once my mistake was discovered, I just had to rename the directory and run OPatch again:

 

[oracle@srvora01 ~]$ mv "/oracle/soft/11.2.0.4/psu 11.2.0.4.4" "/oracle/soft/11.2.0.4/psu_11.2.0.4.4"

 

[oracle@vmreforadg01 ~]$ cd /oracle/soft/11.2.0.4/psu_11.2.0.4.4/19791364/19121551
[oracle@vmreforadg01 19121551]$ opatch apply
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
...

 

The patch was installed successfully:

 

[oracle@vmreforadg01 19121551]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
Oracle Home       : /u00/app/oracle/product/11.2.0/db_4_0
Central Inventory : /u00/app/oraInventory
   from           : /u00/app/oracle/product/11.2.0/db_4_0/oraInst.loc
OPatch version    : 11.2.0.3.6
OUI version       : 11.2.0.4.0
Log file location : /u00/app/oracle/product/11.2.0/db_4_0/cfgtoollogs/opatch/opatch2015-01-16_13-33-02PM_1.log
Lsinventory Output file location : /u00/app/oracle/product/11.2.0/db_4_0/cfgtoollogs/opatch/lsinv/lsinventory2015-01-16_13-33-02PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.4.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (1) :
Patch  19121551     : applied on Fri Jan 16 13:10:20 CET 2015
Unique Patch ID:  17949166
Patch description:  "Database Patch Set Update : 11.2.0.4.4 (19121551)"
   Created on 6 Oct 2014, 10:07:57 hrs PST8PDT
Sub-patch  18522509; "Database Patch Set Update : 11.2.0.4.3 (18522509)"
Sub-patch  18031668; "Database Patch Set Update : 11.2.0.4.2 (18031668)"
Sub-patch  17478514; "Database Patch Set Update : 11.2.0.4.1 (17478514)"
   Bugs fixed:
     17288409, 17205719, 17811429, 17754782, 17726838, 13364795, 17311728
     17284817, 17441661, 13645875, 18199537, 16992075, 16542886, 17446237
     14565184, 17071721, 17610798, 17375354, 17449815, 17397545, 19463897
     18230522, 17235750, 16360112, 13866822, 17982555, 17478514, 12905058
     14338435, 13944971, 16929165, 12747740, 17546973, 14054676, 17088068
     18264060, 17343514, 17016369, 17042658, 14602788, 14657740, 17332800
     19211724, 13951456, 16315398, 17186905, 18744139, 16850630, 17437634
     19049453, 18673304, 17883081, 18641419, 17296856, 18262334, 17006183
     18277454, 17232014, 16855292, 10136473, 17705023, 17865671, 18554871
     19121551, 17588480, 17551709, 17344412, 17842825, 18681862, 17390160
     13955826, 13609098, 18139690, 17501491, 17239687, 17752121, 17299889
     17602269, 18673325, 17313525, 17242746, 19544839, 17600719, 18191164
     17571306, 19466309, 17951233, 18094246, 17165204, 17011832, 17040527
     16785708, 16180763, 17477958, 17174582, 17465741, 18522509, 17323222
     19463893, 16875449, 16524926, 17237521, 17596908, 17811438, 17811447
     18031668, 16912439, 16494615, 18061914, 17545847, 17082359, 19554106
     17614134, 17341326, 17891946, 19458377, 17716305, 17752995, 16392068
     19271443, 17767676, 17614227, 17040764, 17381384, 18973907, 18673342
     14084247, 17389192, 17006570, 17612828, 17721717, 13853126, 18203837
     17390431, 17570240, 14245531, 16043574, 16863422, 19727057, 17468141
     17786518, 17037130, 17267114, 18203838, 16198143, 16956380, 17478145
     14829250, 17394950, 17027426, 16268425, 18247991, 19584068, 14458214
     18436307, 17265217, 13498382, 16692232, 17786278, 17227277, 16042673
     16314254, 17443671, 16228604, 16837842, 17393683, 17787259, 18009564
     15861775, 16399083, 18018515, 16472716, 17050888, 14010183, 17325413
     16613964, 17080436, 17036973, 17761775, 16721594, 18280813, 15979965
     18203835, 17297939, 16731148, 17811456, 14133975, 17385178, 17586955
     16450169, 17655634, 9756271, 17892268, 17648596, 16220077, 16069901
     11733603, 16285691, 17587063, 18180390, 17393915, 18096714, 17238511
     17824637, 14285317, 19289642, 14764829, 18328509, 17622427, 16943711
     17346671, 18996843, 14852021, 17783588, 16618694, 17672719, 17546761
--------------------------------------------------------------------------------
OPatch succeeded.

NLS_LANG on Windows in Europe: WE8MSWIN1252 or WE8PC850?

Wed, 2015-01-14 23:57

In Europe we have accents and non-US7ASCII characters, so we need a specific character set. I'm not talking about Unicode here, which solves all the problems. If you have a Java application, you have no problem: it's Unicode, and you can store all characters in one multi-byte character set. But for other applications on Windows, you have 2 possible character sets for Western Europe: WE8MSWIN1252 and WE8PC850. WE8MSWIN1252 is the one that is set by default in the registry, but is it the right one?
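As a starting point for that discussion, the character sets actually declared on the database side can be checked with a simple query (independent of the NLS_LANG client setting):

select parameter, value
from nls_database_parameters
where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');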

SQL Server 2014: FCIs, availability groups, and TCP port conflict issues

Mon, 2015-01-12 23:26

After giving my session about SQL Server AlwaysOn and availability groups at the last French event “Les journées SQL Server 2014”, I had several questions concerning port conflict issues, particularly the differences that exist between FCIs and availability groups (AAGs) on this subject.

In fact, in both cases we may have port conflicts depending on which components are installed on each cluster node. Fundamentally, FCIs and AAGs are both cluster-based features, but each of them uses the WSFC differently: SQL Server FCIs are “cluster-aware” services, while AAGs use standalone instances by default (using clustered instances with AAGs is possible, but this scenario is relatively uncommon and it doesn't change the story in any way).

First of all, my thinking is based on the following question: why does having an availability group listener on the same TCP port as a SQL Server instance (but on a different process) cause a conflict, whereas having two SQL Server FCIs with the same port works fine?

Let's begin with the SQL Server FCIs. When you install two SQL Server FCIs (on the same WSFC), you can configure the same listen port for both instances and it works perfectly, right? Why? The main reason is that each SQL Server FCI has its own dedicated virtual IP address and, as you know, a process can open a socket on a particular IP address and a specific port. However, two or more processes that attempt to open a socket on the same port and the same IP address will run into a conflict. For instance, in my case, I have two SQL Server FCIs - SQLCLUST-01\SQL01 and SQLCLUST-02\SQL02 – that listen on the same TCP port number: 1490. Here is a picture of the netstat –ano command output:

 

blog_26_-_netstat_ano_-_1

 

Notice that each SQL Server process listens on its own IP address and only on that one. We can confirm this by taking a look at each SQL Server error log.

 

blog_26_-_sqlclust01_sql01_error_log_-_2

 

...

 

blog_26_-_sqlclust02_sql02_error_log_-_3

 

Now let's continue with the availability groups. The story is not the same because, in most scenarios, we use standalone instances, and by default they listen on all available IP addresses. In my case, this time I have two standalone instances – MSSQLSERVER (default) and APP - that listen respectively on TCP ports 1433 and 1438. By looking at the netstat –ano output we can notice that each process listens on all available IP addresses (LocalAddress = 0.0.0.0):

 

blog_26_-_netstat_ano_-_4
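The same information can also be cross-checked from inside each instance with a DMV (a hedged alternative to netstat; run it on each instance and compare the ip_address and port columns):

select listener_id, ip_address, port, type_desc, state_desc
from sys.dm_tcp_listener_states
where type_desc = 'TSQL';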

 

We can also verify the SQL Server error log of each standalone instance (default and APP)

 

blog_26_-_sql141_error_log_-_5

 

...

 

blog_26_-_sql141_app_error_log_-_6

 

At this point I am sure you are beginning to understand the issue you may have with availability groups and listeners. Let's try to create a listener for an availability group hosted on the default instances (MSSQLSERVER). My default instances on each cluster node listen on port 1433, whereas the APP instances listen on port 1438, as shown in the above picture. If I attempt to create my listener LST-DUMMY on port 1433, it will succeed because my availability group and my default instance are on the same process.

 

blog_26_-_netstat_ano_-_7

 

Notice that the listener LST-DUMMY listens on the same port as the default instance, and both are on the same process (PID = 1416). Of course, if I try to change the TCP port number of my listener to 1438, SQL Server will raise the well-known error message with id 19486.

 

USE [master]
GO

ALTER AVAILABILITY GROUP [dummy] MODIFY LISTENER N'LST-DUMMY' (PORT = 1438);
GO

 

Msg 19486, Level 16, State 1, Line 3
The configuration changes to the availability group listener were completed, but the TCP provider of the instance of SQL Server failed to listen on the specified port [LST-DUMMY:1438]. This TCP port is already in use. Reconfigure the availability group listener, specifying an available TCP port. For information about altering an availability group listener, see the "ALTER AVAILABILITY GROUP (Transact-SQL)" topic in SQL Server Books Online.

 

The answer becomes obvious now. Indeed, the SQL Server instance APP listens on TCP port 1438 for all available IP addresses (including the IP address of the listener LST-DUMMY).

 

blog_26_-_netstat_ano_-_8

 

You don't trust me? Well, I can prove it by connecting directly to the SQL Server named instance APP using the IP address of the listener LST-DUMMY - 192.168.0.35 - and the TCP port of the named instance - 1438:

 

blog_26_-_sqlcmd_-_9

 

To summarize:

  • Having several SQL Server FCIs that listen on the same port is not a problem because each can open a socket on its own distinct IP address. However, you can face port conflicts if you also have a standalone instance installed on one of the cluster nodes.
  • Having an availability group with a listener that listens on the same TCP port as a standalone instance on the same process will not result in a TCP port conflict.
  • Having an availability group with a listener that listens on the same TCP port as a standalone instance on a different process will result in a TCP port conflict. In this case, each SQL Server process will attempt to open a socket on the same TCP port and the same IP address.

Hope it helps!

SOUG-Romand: performance day on May 21

Mon, 2015-01-12 10:19

(english below)

Good news for French speakers: the SOUG-R is more and more active.

On May 21, 2015, a performance day is being organized in Lausanne.

Flashback logging overhead: 'db file sequential read' on UNDO tablespace

Fri, 2015-01-09 13:09

In my previous post I reproduced an issue where a bulk insert into a GTT was generating too much undo. But the real case I encountered was worse: the insert was reading a lot of UNDO, and a lot of this undo was read from disk.

Insert into GTT: bulk with APPEND_VALUES

Fri, 2015-01-09 13:04

This post is about a case I had where a GTT was generating too much undo. The effects were amplified by the fact that the database was in flashback logging mode (see Jonathan Lewis' answer to my question on the OTN forum about that), but that's for the next post.

Don't forget to configure Power Management settings on Hyper-V

Tue, 2015-01-06 02:51

Recently I had the opportunity to audit a SQL Server database hosted on a Hyper-V 2012 cluster. I noticed that the guest operating system had the Power Plan configured to High performance. This is a great thing, but when I talked to the system administrator to verify whether the same option was turned on on the Hyper-V operating system, this was unfortunately not the case.

As a reminder, the power policy setting has no effect on the guest operating system in virtual environments, and we always have to verify that this option is configured correctly at the right level.

I performed a quick demonstration for my customer by using the SuperPI benchmark tool, which is pretty simple: it calculates pi to a specific number of digits using one thread, and for my purpose that's sufficient.

--> Let's start with the situation where Power Saver is enabled on the Hyper-V side and High performance is turned on on the guest side. Then let's run the SuperPI tool with 512K digits to compute:

 

blog_25_-_superpi_calculation_-_power_saving

 

Here is the time taken by the guest to calculate pi:

 

blog_25_-_superpi_calculation_-_power_saving_-_result

 

 

Now let's change the story by reversing the power settings: High performance on the Hyper-V side and Power Saver on the guest side. Then we can run the same benchmark test:

 

blog_25_-_superpi_calculation_-_high_perf_-_result

 

5.688 seconds for this test against 13.375 seconds for the first test - a 57% improvement... not so bad :-) But let's look at a more realistic situation. Indeed, in most configurations the power management setting is configured to Balanced by default, and my customer asked me if there is a noticeable difference if we leave the default configuration. In order to justify my recommendation we performed the same test, but this time I decided to change the number of digits to compute in order to simulate a more realistic OLTP transaction (short, and requiring all CPU resources during a short time). The table below lists and compares both results:

 

Settings                     Duration (s)
Hyper-V: Balanced            0.219
Hyper-V: High performance    0.141

 

We can notice a 64% CPU time improvement in the context of my customer! So after that, my customer was convinced to change this setting, and I hope it is the same for you! Of course, with long running queries that consume a lot of CPU resources over a long time, the difference may be less noticeable because the processor wake-up time is very small compared to the total worker time they consume.

Keep in mind that changing the power management state from the guest has no effect in a virtualized environment. You must take care of this setting directly on the hypervisor.

Happy virtualization !!

Managed Backup with SQL Server 2014

Mon, 2015-01-05 21:37

In a previous blog post called Backup a SQL Server database from On-Premise to Azure, I presented the different tools to back up your on-premise databases to Azure Storage. SQL Server Managed Backup to Windows Azure was one of these tools.

In my opinion, Managed Backup is a great tool. That is why I decided to dedicate an entire blog to this feature.

 

Understanding Managed Backup

Managed Backup is a new feature introduced in SQL Server 2014 that works with Windows Azure. This feature allows you to manage and automate SQL Server backups (from your on-premise or Azure SQL Server instance), and it is configurable by … script only (T-SQL or PowerShell)!

Microsoft recommends using Managed Backup for Windows Azure virtual machines.

Managed Backup only works with user databases in the Full or Bulk-logged Recovery Model, and it can only perform Full and Log backups.

SQL backups support point-in-time restore and are kept according to a retention period. This setting indicates the desired lifespan of a backup stored in Azure Storage. Once the period is reached, the backup is deleted.

SQL backups are scheduled according to the transaction workload of the database.

A full database backup is scheduled when:

  • The Managed backup feature is enabled for the first time
  • The log growth is 1 GB or larger
  • The last full database backup is older than 1 week
  • The log chain is broken

A transaction log backup is scheduled when:

  • No log backup history is available
  • The log space is 5 MB or larger
  • The last log backup is older than 2 hours
  • A full database backup has been performed
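To verify when these scheduled backups actually ran, the standard backup history in msdb can be queried (a generic check, not specific to Managed Backup):

select database_name, type, max(backup_finish_date) as last_backup_finish
from msdb.dbo.backupset
group by database_name, type
order by database_name, type;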

 

Configuring Managed Backup

First, you need to activate SQL Server Agent service in order to use the feature.

In this example, I have 3 user databases as follows:

 

Database Name        Recovery Model   Data Files Location
AdventureWorks2012   Simple           On-premise
AdventureWorks2014   Full             On-premise
Hybriddb             Bulk-logged      Azure Storage

 

Managed Backup can be enabled at the instance level or database level.

If you decide to activate the feature at the instance level, the configuration will be set for all user databases of your instance (even for databases added after the configuration).

On the other hand, you can activate the feature for specific user databases. If the feature is also configured at the instance level, it will be overridden by the configuration at the database level.

To configure the feature, you must provide a set of parameters:

  • The URL of the Azure Storage
  • The retention period in days
  • The credential name
  • The encryption algorithm

If the encryption algorithm is not set to ‘NO_ENCRYPTION’, you also need to provide these parameters:

  • The encryptor type
  • The encryptor name

Moreover, when you configure your Managed Backup, you need to specify if you want to activate your Managed Backup.

 

You can perform a database backup with COPY_ONLY. To do this, you need to use the 'smart_admin.sp_backup_on_demand' stored procedure, specifying the database name.

However, this stored procedure will use the Managed Backup configuration at the database level. That means you must configure and enable Managed Backup for your database.
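Here is a hedged example of such an on-demand backup, assuming the database has already been configured for Managed Backup (the @type parameter accepts 'Database' or 'Log'):

USE msdb;
GO

EXEC smart_admin.sp_backup_on_demand
    @database_name = 'hybriddb',
    @type = 'Database';
GO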

 

We need to create a credential in order to be able to connect to Azure Storage:

 

CREATE CREDENTIAL dbiservices

WITH IDENTITY = 'dbiservices',

SECRET = 'password'

 

Let’s configure our Managed Backup at the instance level:

 

USE msdb;

GO

EXEC smart_admin.sp_set_db_backup

    @enable_backup = 0,

    @storage_url = 'https://dbiservices.blob.core.windows.net',

    @retention_days = 15,

    @credential_name = 'dbiservices',

    @encryption_algorithm = 'NO_ENCRYPTION';

 

 

 

 

 

 

 

 

 

If I want to display the instance configuration:

 

USE msdb;

GO

SELECT * FROM smart_admin.fn_backup_instance_config();

 

Here is the result:

 

display-configuration.png

 

We will override the Managed Backup configuration for ‘hybriddb’ database:

 

USE msdb;

GO

EXEC smart_admin.sp_set_db_backup

    @database_name = 'hybriddb',

    @enable_backup = 0,

    @credential_name = 'dbiservices',

    @storage_url = 'https://dbiservices.blob.core.windows.net',

    @retention_days = 25,

    @encryption_algorithm = 'NO_ENCRYPTION';

 

If I want to display the database configuration of all databases of the instance:

 

USE msdb;

SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm

FROM smart_admin.fn_backup_db_config(NULL)

 

Here is the result:

 

diplay-databases-configuration.png

 

Notice that the 'AdventureWorks2012' database has 'is_managed_backup_enabled' set to 'NULL'. Indeed, this database is not affected by Managed Backup because its Recovery Model is set to Simple.

 

Now, I activate the Managed Backup at the instance level:

 

USE msdb;

GO

EXEC smart_admin.sp_set_db_backup

    @enable_backup = 1;

GO

 

Now, I activate the Managed Backup for ‘hybriddb’ database:

 

USE msdb;

GO

EXEC smart_admin.sp_set_db_backup

    @database_name = 'hybriddb',

    @enable_backup = 1;

 

If I explore Azure Storage, I can find my backups:

 

backups.png

 

 

Conclusion

As I said in the introduction, Managed Backup is a great feature. You can configure and enable backups for your user databases easily and quickly.

However, it has some serious limitations... We can expect Managed Backup to be extended to system databases. Moreover, we can also expect Managed Backup to allow backups of user databases in the Simple Recovery Model.

Furthermore, this feature only works with Azure Storage. I would like to choose my storage destination; I do not understand why we cannot back up to local disks, for example.

Oracle multitenant dictionary: upgrade

Mon, 2015-01-05 16:10

This is the second part of the previous post about metadata links. I've shown how a sharing=metadata function becomes a sharing=none function when it is changed in the PDB - i.e. when it no longer has the same DDL and thus has a different signature.

Here is another experiment doing the opposite: change the function in root and see what happens in the PDB. Again, I am playing with internals in order to understand the 'upgrade by unplug-plug' feature available in 12c multitenant (and single-tenant).

Oracle multitenant dictionary: metadata links

Mon, 2015-01-05 16:02

As in previous posts (1 and 2), I'm still playing with the multitenant dictionary. You may wonder why I'm spending time doing unsupported things that we will never be able to use in real life.

Of course, playing with internals is fun ;)

But it's not only that. We will implement CDB databases (multi-tenant, and single-tenant as I think non-CDB will be deprecated in future versions). We will operate pluggable databases (clone, upgrade, downgrade,...), encounter issues, etc. Before doing that on critical environments, I need to be confident about the multitenant implementation. I'ts new, it's undocumented, and - at the first sight - it looks like a 'hack' in the way the dictionary is working for years.

Happy New Year 27, 104, 2015 and 2558

Wed, 2014-12-31 06:25

calendar          today  tomorrow  message
----------------- ------ --------- --------------
Arabic Hijrah     1436   1436
English Hijrah    1436   1436
Gregorian         2014   2015      Happy New Year
Japanese Imperial 0026   0027      Happy New Year
Persian           1393   1393
ROC Official      0103   0104      Happy New Year
Thai Buddha       2557   2558      Happy New Year
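
The figures above can presumably be reproduced with TO_CHAR and the NLS_CALENDAR parameter; here is a sketch for two of the calendars (calendar names as documented for NLS_CALENDAR):

select to_char(sysdate  ,'YYYY','nls_calendar=''Thai Buddha''') today,
       to_char(sysdate+1,'YYYY','nls_calendar=''Thai Buddha''') tomorrow
from dual;

select to_char(sysdate  ,'YYYY','nls_calendar=''Japanese Imperial''') today,
       to_char(sysdate+1,'YYYY','nls_calendar=''Japanese Imperial''') tomorrow
from dual;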

Oracle multitenant dictionary: object links

Mon, 2014-12-29 15:28

I've described Oracle 12c metadata and object links internals in a previous post. But before that, the first time I investigated it, I made a wrong assumption because I was looking at AUDIT_ACTIONS, which is not correctly implemented. That investigation came from a question on dba-village. And recently Ivica Arsov (@IvicaArsov) made an interesting comment about the AUDIT_ACTIONS object link table, so I'll explain here what is special about it.

AUDIT_ACTIONS

Here is how AUDIT_ACTIONS is defined:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('DBA_AUDIT_TRAIL','AUDIT_ACTIONS') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
AUDIT_ACTIONS        SYNONYM         METADATA LINK
AUDIT_ACTIONS        TABLE           OBJECT LINK
DBA_AUDIT_TRAIL      SYNONYM         METADATA LINK
DBA_AUDIT_TRAIL      VIEW            METADATA LINK
It's a sharing=object table so you expect that the data is common to all containers. And we will also query a view that reads that table - DBA_AUDIT_TRAIL.

Then let's query the table from CDB$ROOT and from a PDB and check from ROWID if we read the same rows:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG7AABAAACo5AAD          3 SELECT           1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG5AABAAAA3pAAD          3 SELECT           8

The rows are not coming from the same file, but from the local SYSTEM tablespace of each container. This proves that this OBJECT LINK table is not common at all.
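
Another way to see this from CDB$ROOT is to count the rows per container with the CONTAINERS() clause (also used further down for AWR); a quick sketch - each container should report its own copy of the rows:

select con_id, count(*)
from containers(AUDIT_ACTIONS)
group by con_id
order by con_id;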

DBA_AUDIT_TRAIL

Now I want to check what happens when we query through the view. I don't have the ROWID so let's update the table in the PDB so that we can distinguish rows coming from CDB$ROOT and from PDB1:

SQL> update AUDIT_ACTIONS set name='select' where action=3;

1 row updated.

SQL> select rowid,action,name from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME
------------------ ---------- -------
AAABG5AABAAAA3pAAD          3 select

SQL> select distinct dbid,action,action_name from DBA_AUDIT_TRAIL;

      DBID     ACTION ACTION_NAME
---------- ---------- ----------------------------
 314687597          3 select

Ok. I've changed one 'ACTION_NAME' to lowercase - only in PDB1. And when I query through the view, I see the local row. This definitely proves that the implementation of AUDIT_ACTIONS does not achieve the goal of the multitenant dictionary: store common Oracle objects only in CDB$ROOT to avoid duplication and facilitate upgrades. Note that it is not a big problem anyway, as it is just a 200-row table.

DBA_CPOOL_INFO

In order to show the normal behaviour of object links, I'll do the same with DBA_CPOOL_INFO, which is a view over SYS.CPOOL$. I've described this behaviour previously by creating my own objects, but here I'll show how it is used to store the DRCP (Database Resident Connection Pooling) information, which is at CDB level. Here are the involved table and views:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('CPOOL$','INT$DBA_CPOOL_INFO','DBA_CPOOL_INFO') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
CPOOL$               TABLE           OBJECT LINK
DBA_CPOOL_INFO       SYNONYM         METADATA LINK
DBA_CPOOL_INFO       VIEW            METADATA LINK
INT$DBA_CPOOL_INFO   VIEW            OBJECT LINK
CPOOL$ is defined with sharing=object. An internal view INT$DBA_CPOOL_INFO is defined on it with sharing=object as well. And finally that view is exposed through DBA_CPOOL_INFO.

As before, I check the ROWID of the CPOOL$ row from CDB$ROOT and from PDB1:
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz5AABAAADb5AAA          4          1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          4          8

So this is the same as we have seen before: an OBJECT LINK has its data in each PDB.

But what is different here is the sharing of the view, which is sharing=object. Let's query that view after changing the value in PDB1:

SQL> update SYS.CPOOL$ set minsize=0;
1 row updated.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          0          8

SQL> select minsize from INT$DBA_CPOOL_INFO;

   MINSIZE
----------
         4

SQL> select minsize from DBA_CPOOL_INFO;

   MINSIZE
----------
         4

Now we have a view which will always show the CDB$ROOT rows, even when we are in a PDB container. We still have rows in the PDB containers, but they will not be used. Once again, this defeats the goal of deduplication, but this is a very small table.

AWR tables

The main advantage of the multitenant dictionary architecture shows up with the big tables storing data that is common to the whole CDB, such as the AWR data:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select con_id,count(*) from containers(WRH$_SQLTEXT) group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         1       5549

SQL> alter session set container=PDB1;

Session altered.

SQL> select count(*) from WRH$_SQLTEXT;

  COUNT(*)
----------
         0

This information - stored only in CDB$ROOT - is shared with all PDBs through the OBJECT LINK views.
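
From a PDB, the shared rows are still visible through the dictionary view built on top of that table (a sketch, assuming DBA_HIST_SQLTEXT is the object-linked view over WRH$_SQLTEXT in this release):

alter session set container=PDB1;
-- the view exposes the rows stored only in CDB$ROOT
select count(*) from DBA_HIST_SQLTEXT;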

Creating a schema synonym in Oracle - an unsupported feature

Sun, 2014-12-28 14:59

Ivica Arsov (@IvicaArsov) has made an interesting comment about the AUDIT_ACTIONS object link table. I'll blog about it soon, but in the meantime, when checking its definition in cataudit.sql, I came upon the following:

/* SCHEMA SYNONYMS will be added in 12g */
-- insert into audit_actions values (222, 'CREATE SCHEMA SYNONYM');
-- insert into audit_actions values (224, 'DROP SCHEMA SYNONYM');

which caught my attention.