Feed aggregator

Experiencing update statistics on a big table with circuitous ways

Yann Neuhaus - Wed, 2018-01-03 14:56

This is my first blog of this new year, and the first in a while for that matter. Indeed, last year I put all my energy into realigning my Linux skills with Microsoft’s new strategy that opens SQL Server to the Open Source world. But at the same time I did a lot of interesting work at customer sites, and I decided to write about one of those projects to begin this new year 2018.

blog 124 - 0 - stats

In this blog post, I will highlight what I consider a distinctive approach to optimizing an update statistics operation for one particular, big table. I already had to manage such maintenance tasks in one of my previous jobs as a DBA, and I continue to learn more about it, but from a different position now. As a consultant, I usually try to provide my customer with the best approach regarding both the issue and the context. In reality, from my experience, the latter is much more important than you may think, and sometimes we have to consider other ways to reach the expected outcome. I think this is not necessarily a bad thing, because following a different path (not the best one) may reveal other interesting options we can use in our final recipe.

Let’s go back to my customer case and set the context: one big database (1.7TB) on SQL Server 2014 SP2, and an update statistics operation that is part of a global database maintenance strategy and takes a very long time (roughly 30 hours in the best-case scenario). We identified that the biggest part of the execution time was related to one big, non-partitioned table (let’s say dbo.bigtable) with the following figures: 148 million rows / 222GB in size / 119 columns / 94 statistics / 14 indexes. Regarding the last two figures, the majority of the statistics were in fact auto-generated by the SQL Server engine over time and across different application release campaigns. Furthermore, it is worth mentioning that the DBA had to use a customized sample value (between 10 and 50 percent) to minimize the impact of the update statistics operation on this particular table, given the table size and the configured maintenance window timeframe.
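As a side note, figures like these are easy to gather from the catalog views (dbo.bigtable stands here for the real table name):

SELECT
	(SELECT COUNT(*) FROM sys.stats   WHERE [object_id] = OBJECT_ID(N'dbo.bigtable')) AS nb_statistics,
	(SELECT COUNT(*) FROM sys.stats   WHERE [object_id] = OBJECT_ID(N'dbo.bigtable') AND auto_created = 1) AS nb_auto_created_statistics,
	(SELECT COUNT(*) FROM sys.indexes WHERE [object_id] = OBJECT_ID(N'dbo.bigtable') AND index_id > 0) AS nb_indexes;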

 

My first and obvious approach

My first approach consisted of warning the database administrator about the number of statistics on this table (probably a lot of them are not in use anymore?) as well as the development team about the badly designed table. In addition, the number of rows in this table may also indicate that it contains a lot of archive data, and we could reduce the global size with an archiving process (why not the built-in partitioning features and incremental statistics, since we’re running Enterprise Edition?). However, reviewing the model was not an option for my customer because it would require a lot of work, and the DBA wasn’t keen on the idea of archiving data (business constraints) or removing auto-generated statistics on this specific table. So what I considered a good approach was not adopted, and the human aspect was definitely a big part of it. In a nutshell, at this stage: a standby situation …

 

When high-performance storage comes to the rescue …

In parallel, my customer was considering replacing their old storage, and Pure Storage was in the loop. Pure Storage is one of the flash storage providers on the market, and the good news is that I was already in touch with @scrimi78 (Systems Engineer at Pure Storage) in Switzerland. During this project we had a lot of interesting discussions and interactions about Pure Storage products, and I appreciated his availability to provide technical documentation and explanations. At the same time, they generously lent us a Pure Storage array to play with snapshot volumes that will be used for database refreshes between production and dev environments.

In the context of my customer, we were already aware of the poor performance capabilities of the old storage, and the replacement by a Pure Storage //M20 model was very beneficial for the database environments, as shown by the following figures:

I only included the IO-related wait statistics of the production environment for the last year, and we may notice a significant drop in average wait time after moving the production database files to the new storage layout. Figures are by year and month.

blog 124 - 1 - wait stats

We noticed the same from the file IO statistics figures for the concerned database.

blog 124 - 3 - file io stats
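As a side note, these file IO figures are the kind of numbers you can pull from sys.dm_io_virtual_file_stats with a query along these lines (a simplified sketch, not the exact monitoring query we used):

-- Average IO stall per database and file type (cumulative since instance startup)
SELECT
	DB_NAME(vfs.database_id) AS database_name,
	mf.type_desc,
	SUM(vfs.io_stall_read_ms)  * 1.0 / NULLIF(SUM(vfs.num_of_reads), 0)  AS avg_read_stall_ms,
	SUM(vfs.io_stall_write_ms) * 1.0 / NULLIF(SUM(vfs.num_of_writes), 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
GROUP BY DB_NAME(vfs.database_id), mf.type_desc;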

Very impressive, isn’t it? But what about our update statistics here? We naturally observed a significant drop in execution time, down to 6 hours (an 80% improvement), because generally speaking this is an IO-bound operation, especially in our case. The yellow columns represent operations we had to stop manually to avoid impacting the application workload (> 35 hours of execution time). You may also notice we changed the sample value to 100 percent after installing the Pure Storage //M20 model, compared to previous runs with a sample value of 50 percent.

blog 124 - 3 - update stats execution time

 

Changing the initial way to update statistics for this table …

The point here is that we know we may now rely on the storage performance to update statistics efficiently, so why not push the limits of the storage by changing the way we run our update statistics operation – by default it runs sequentially, one statistic at a time. So we decided to write a custom script to carry out the update operations in parallel to boost the overall execution performance. Since SQL Server 2014 SP1 CU6 we may benefit from improved support for parallel statistics updates by using trace flag 7471, which changes the locking behavior so that the SQL Server engine no longer acquires an X lock on the UPDSTATS resource of the table. The script consists of creating a pool of parallel SQL Agent jobs that each update one particular statistic on a single table. I put it below if you want to use it, but in the meantime, let’s mention that it is also possible to go through an interesting, smart alternative solution based on Service Broker capabilities here. What is certain is that we will integrate one or the other version – with some adjustments – into our DMK management kit tool. Here is my proposal based on concurrent SQL jobs (feel free to comment):

SET NOCOUNT ON;

DECLARE 
	@database_name sysname = N'AdventureWorks2012', -- Target database
	@table_name sysname = N'bigTransactionHistory', -- Target table
	@schema_name sysname = N'dbo',                  -- Target schema
	@batch_upd_stats INT = 3,                       -- Number of simultaneous jobs
	@stat_sample INT = 0, 					        -- 0 = default sample rate value | 100 = FULLSCAN | Otherwise WITH SAMPLE @stat_sample PERCENT
	@debug TINYINT = 1								-- 0 = debug mode disabled | 1 = debug mode | 99 = verbose mode
	

-- @stats_sample table variable
-- Will contain the statistics that need a custom sampling rate value
-- Otherwise the default SQL Server sample rate value will be used
-- You may also use a permanent user table for more flexibility; it would
-- then be read by this script
DECLARE @stats_sample TABLE
(
	column_name sysname,
	stats_name sysname,
	sample_stat tinyint
);

INSERT @stats_sample VALUES ('TransactionID', '_WA_Sys_00000001_4CC05EF3', 100),
						    ('ProductID', '_WA_Sys_00000002_4CC05EF3', 100);



-- working parameters
DECLARE
	@nb_jobs_running INT = 0,
	@count INT = 0,
	@i INT,
	@j INT,
	@max INT,
	@sqlcmd NVARCHAR(MAX) = N'',
	@job_name_pattern sysname,
	@start_date DATETIME2(0),
	@end_date DATETIME2(0),
	@stats_name sysname,
	@sample_stat int
	;


-- Creation of update stat jobs
IF OBJECT_ID('tempdb..#update_stats_tt', 'U') IS NOT NULL
	DROP TABLE #update_stats_tt;

SELECT 
	id = IDENTITY(INT, 1,1),
	s.name AS [schema_name],
	t.name AS table_name,
	st.name AS stats_name,
	sample_stat,
	'USE [msdb]
	
	DECLARE @jobId BINARY(16);
	
	EXEC  msdb.dbo.sp_add_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', @job_id = @jobId OUTPUT
	--select @jobId
	
	EXEC msdb.dbo.sp_add_jobserver @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @server_name = N''' + @@SERVERNAME + '''
	
	EXEC msdb.dbo.sp_add_jobstep @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @step_name=N''UPDATE STATS'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_fail_action=2, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ' + st.name + CASE COALESCE(sample_stat, @stat_sample, 0)
																											WHEN 0 THEN ' '
																											WHEN 100 THEN ' WITH FULLSCAN'
																											ELSE ' WITH SAMPLE ' + CAST(COALESCE(sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																										END + ''', 
		@database_name=N''' + @database_name + ''', 
		@flags=0

	EXEC msdb.dbo.sp_update_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@start_step_id=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@description=N'''', 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', 
			@notify_email_operator_name=N'''', 
			@notify_netsend_operator_name=N'''', 
			@notify_page_operator_name=N''''
	' AS upd_stats_cmd
INTO  #update_stats_tt
FROM 
	sys.stats AS st
JOIN 
	sys.tables AS t ON st.object_id = t.object_id
JOIN
	sys.schemas AS s ON s.schema_id = t.schema_id
LEFT JOIN 
	@stats_sample AS ss ON ss.stats_name = st.name
WHERE 
	t.name =  @table_name
	AND s.name = @schema_name
ORDER BY 
	stats_id;

IF @debug = 99
	SELECT * FROM #update_stats_tt;

-- Enable trace flag 7471 so concurrent UPDATE STATISTICS on the same table no longer block each other (U lock instead of X lock on the UPDSTATS resource)
PRINT '--> Enable trace flag 7471 during update stats operation';

SET @sqlcmd = N'DBCC TRACEON(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

PRINT '-----------------------------------';

SET @start_date = CURRENT_TIMESTAMP;

SET @max = (SELECT MAX(id) FROM #update_stats_tt);
SET @i = 0;

IF @debug = 99
	SELECT 'DEBUG -->  @max (nb stats) : ' + CAST(@max AS VARCHAR(15));

-- Entering to the loop ...
WHILE (@i <= @max OR @nb_jobs_running <> 0)
BEGIN

	SET @j = @i + 1;

	IF @debug = 99
	BEGIN
		SELECT 'DEBUG -->  @i : ' + CAST(@i AS VARCHAR(15));
		SELECT 'DEBUG --  @j = @i + 1 : ' + CAST(@j AS VARCHAR(15));
	END

	-- Computing number of update stats jobs to create
	-- regarding both the configured maximum of simultaneous jobs and the currently running jobs
	SET @count = @batch_upd_stats - @nb_jobs_running;

	IF @debug = 99
		SELECT 'DEBUG --  @count : ' + CAST(@count AS VARCHAR(15));

	-- Here we go ... creating update stats sql_jobs
	WHILE (@j <= @i + @count)
	BEGIN

		SET @sqlcmd = '';
		SET @stats_name = NULL;
		SET @sample_stat = NULL;
		SET @sqlcmd = NULL;

		SELECT 
			@stats_name = stats_name,
			@sample_stat = sample_stat,
			@sqlcmd = upd_stats_cmd + CHAR(13) + '---------------------' + CHAR(13)
		FROM 
			#update_stats_tt
		WHERE 
			id = @j;

		IF @debug = 99
		BEGIN
			SELECT 'DEBUG --  @j loop : ' + CAST(@j AS VARCHAR(15));
			SELECT @stats_name, @sample_stat
		END

		IF @debug = 1
			PRINT 'UPDATE STATISTICS ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' ' + @stats_name + CASE COALESCE(@sample_stat, @stat_sample, 0)
																															WHEN 0 THEN ' '
																															WHEN 100 THEN ' WITH FULLSCAN'
																															ELSE ' WITH SAMPLE ' + CAST(COALESCE(@sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																														  END + '';
		IF @debug IN (0,1) AND @sqlcmd IS NOT NULL -- guard against ids beyond the last statistic in the batch
		BEGIN
			PRINT '--> Create SQL job UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + @stats_name + '';
			
			EXEC sp_executesql @sqlcmd;
		END

		SET @j += 1;

	END

	-- We need to rewind by 1 to target the next stat to update
	SET @j -= 1;

	PRINT '-----------------------------------';

	-- Start all related update stats jobs 
	SET @sqlcmd = N'';

	SELECT @sqlcmd += 'EXEC msdb.dbo.sp_start_job @job_name = ''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + stats_name + ''';' + CHAR(13)
	FROM #update_stats_tt
	WHERE id BETWEEN (@i + 1) AND (@i + @count);

	IF @debug = 1
		PRINT @sqlcmd;
	
	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Starting UPDATE_STATS_' + @schema_name + '_' + @table_name + ' jobs';
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	-- Waiting 10 seconds before checking running jobs
	WAITFOR DELAY '00:00:10';

	-- Construct the job name pattern used by the next steps - check running jobs and delete terminated jobs
	SET @job_name_pattern = 'UPDATE_STATS_' + @schema_name + '_' + @table_name + '_';
	
	IF @debug = 99
		SELECT 'DEBUG - @job_name_pattern = ' + @job_name_pattern

	SELECT 
		@nb_jobs_running = COUNT(*)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	JOIN 
		msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
	WHERE 
		ja.session_id = (
							SELECT TOP 1 session_id 
							FROM msdb.dbo.syssessions  (NOLOCK) 
							ORDER BY agent_start_date DESC)
		AND start_execution_date is not null
		AND stop_execution_date is NULL
		AND j.name LIKE @job_name_pattern + '%';

	IF @debug = 99
		SELECT 'DEBUG --  @nb_jobs_running : ' + CAST(@nb_jobs_running AS VARCHAR(15));
	
	IF @nb_jobs_running = @batch_upd_stats
		PRINT '--> All SQL jobs are running. Waiting for 5s ...';

	-- Waiting until at least one job is terminated ...
	WHILE (@nb_jobs_running = @batch_upd_stats)
	BEGIN

		-- Count nb of running jobs only
		SELECT 
			@nb_jobs_running = COUNT(*)
		FROM 
			msdb.dbo.sysjobactivity ja (NOLOCK)
		LEFT JOIN 
			msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
		JOIN 
			msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
		JOIN 
			msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
		WHERE ja.session_id = (
								SELECT TOP 1 session_id 
								FROM msdb.dbo.syssessions  (NOLOCK) 
								ORDER BY agent_start_date DESC)
			AND start_execution_date is not null
			AND stop_execution_date is NULL
			AND j.name LIKE @job_name_pattern + '%'

		WAITFOR DELAY '00:00:05';

	END

	PRINT '-----------------------------------';

	-- Delete terminated SQL jobs 
	SET @sqlcmd = '';

	SELECT 
		@sqlcmd += 'EXEC msdb.dbo.sp_delete_job  @job_name = ''' + j.name + ''';' + CHAR(13)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	WHERE 
		j.name LIKE @job_name_pattern + '%'
		AND start_execution_date is not null
		AND stop_execution_date is not null

	IF @debug = 1
		PRINT @sqlcmd;

	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Removing terminated UPDATE_STATS_' + @schema_name + '_' + @table_name + '_XXX jobs'
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	SET @i = @j;

	IF @debug = 99
		SELECT 'DEBUG --  @i = @j : ' + CAST(@j AS VARCHAR(15));
	
END

-- Disable trace flag 7471 (it is only enabled for the duration of the update stats maintenance)
PRINT '-----------------------------------';
PRINT '--> Disable trace flag 7471';
SET @sqlcmd = N'DBCC TRACEOFF(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

SET @end_date = CURRENT_TIMESTAMP;

-- Display execution time in seconds
SELECT DATEDIFF(SECOND, @start_date, @end_date) AS duration_S;

 

We initially ran the test on a QA environment with 4 vCPUs and 22GB of RAM – pretty close to the production environment. We noticed that when we increased the number of parallel jobs above 3, we encountered RESOURCE_SEMAPHORE waits. This is because of the memory grants required by each update statistics command. Unfortunately, there was no chance here to increase the amount of memory to push the limit further, but we noticed an improvement factor of 1.5 on average (still with a sample of 100 percent).
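For reference, sessions still waiting on a memory grant can be spotted while the jobs are running with a quick look at the corresponding DMV (a simple check, not part of the script itself):

-- Sessions waiting for their memory grant (RESOURCE_SEMAPHORE)
SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms, resource_semaphore_id
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;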

blog 124 - 5 - update stats execution time

At this point I asked myself if we could rely only on the storage performance to update statistics. After all, we managed to reduce the execution time below the maximum maintenance window timeframe – fixed at 8 hours in our context.

Analyzing further the data distribution

Relying on the storage performance and the new SQL Server capabilities was a good thing for us, but I kept in mind that updating 94 statistics was probably not a smart idea, because I was convinced that a big part of them was pretty useless. There is no easy way to verify it, because we had a mix of stored procedures and ad-hoc statements from different applications that refer to this database (let’s say we also have super users who run queries directly from SSMS). So I decided to put the question differently: if we may not remove some auto-generated statistics, do we necessarily have to update all of them with FULLSCAN for this specific table? What about the data distribution of the columns involved in auto-generated statistics? In the context of dbo.bigtable, and given the number of rows, we may easily run into cardinality estimation issues if the data distribution is not correctly represented, especially in the case of skewed data. Nevertheless, manually analyzing the histogram steps of each statistic may be cumbersome, and we decided to go through the stored procedures provided by Kimberly Tripp here. However, the version we got did not support analyzing columns not involved in an index, as mentioned below:

-- Considering for v2
--	, @slowanalyze	char(5) = 'FALSE'		
-- No index that exists with this column as the 
-- high-order element...analyze anyway. 
-- NOTE: This might require MULTIPLE table scans. 
-- I would NOT RECOMMEND THIS. I'm not even sure why I'm allowing this...

 

We had to update the code a little bit to work around it, but don’t get me wrong, Kimberly’s recommendation still makes sense, because in the background SQL Server orders the data from the concerned column to get a picture of the data distribution. Without any index on the concerned column, analyzing the data distribution may be a very time- and resource-consuming task (including tempdb for sorting data), especially when the table gets big, as illustrated by the following sample of code executed by SQL Server while updating statistics.

SELECT StatMan([SC0]) 
FROM 
(
	SELECT TOP 100 PERCENT [TransactionID] AS [SC0] 
	FROM [dbo].[bigTransactionHistory] WITH (READUNCOMMITTED)  
	ORDER BY [SC0] 
) AS _MS_UPDSTATS_TBL  OPTION (MAXDOP 16)

This was the case for my dbo.bigtable, and I had to cut the operation into small pieces spread across several maintenance window timeframes.

So, we had to add to the sp_SQLskills_AnalyzeColumnSkew stored procedure the creation of a temporary index on the concerned column before analyzing the data. This compromise allowed us to gain a lot of time and was validated on the QA environment.
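To give an idea of the change (a simplified sketch against the demo table, not the customer’s code), the pattern looks like this:

-- Simplified sketch: helper index created before the skew analysis and dropped afterwards
CREATE NONCLUSTERED INDEX ix_tmp_skew_analysis
ON dbo.bigTransactionHistory (ProductID);

-- ... call sp_SQLskills_AnalyzeColumnSkew for the [ProductID] column here ...

DROP INDEX ix_tmp_skew_analysis ON dbo.bigTransactionHistory;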

We tried different combinations of parameters and finally settled on the following ones (@difference = 10000, @numofsteps = 5), based on our history of query performance and cardinality estimation issues on this specific table. We got an interesting output as shown below:

blog 124 - 6 - update stats execution time

The number of columns with skewed data is low compared to those with uniformly distributed data. A [Partial skew] value means we only got differences from the [Nb steps (factor)] column => 2.5 by default.

That was pretty good news, because for columns with no skewed data we were able to consider updating them either by using the default sampling rate used by SQL Server (a nonlinear algorithm under the control of the optimizer) or by specifying a custom sampling rate value to make sure we are not scanning too much data. In my context, mixing parallel jobs and a default sampling rate value for column statistics with no skewed data seems to be good enough (no query plan regression at this moment), but we will probably have to change it in the future. Anyway, we managed to reduce the execution time to one hour, as shown below:

blog 124 - 7 - update stats execution time

For columns with skewed data we are still keeping the FULLSCAN option, and we plan to investigate filtered statistics to further enhance cardinality estimation in the future.
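As an illustration of the kind of filtered statistic we have in mind (a hypothetical example on the demo table, not something already in place):

-- Hypothetical filtered statistic covering a skewed range of values
CREATE STATISTICS stats_bigTransactionHistory_ProductID_skewed
ON dbo.bigTransactionHistory (ProductID)
WHERE ProductID BETWEEN 1 AND 1000
WITH FULLSCAN;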

The bottom line of this story is that I would probably never have gone through all the aforementioned options if the customer had accepted my first proposal (who knows?). Technically and humanly speaking, it was a good learning experience. Obviously, you may think it was not the best or the simplest approach, and you would be right. Indeed, there are drawbacks here, such as the overhead and complexity of writing custom scripts and maintaining individual statistics over time. But from my experience in the consulting world, things are rarely black or white, and we also have to work with plenty of variations in each customer’s context to achieve not necessarily what we consider the best outcome, but a satisfactory one for the business.

Happy new year 2018!

The article Experiencing update statistics on a big table with circuitous ways first appeared on Blog dbi services.

Oracle Database 18c is NOT an Autonomous Database!

Tim Hall - Wed, 2018-01-03 04:01

RANT WARNING!

From the get-go Oracle has been talking about the Autonomous Database as a cloud service based on Oracle Database 18c, but I can’t remember them once saying Oracle Database 18c is an Autonomous Database. There is a reason for that. It’s because it isn’t. It’s not meant to be. It’s basically a big patchset on what we already have. We currently have 12.2.0.1 and Oracle 18c is 12.2.0.2, with a different name because of the new yearly release cycle.

I am not disappointed by this, but I am disappointed at how lazy the Oracle blog-sphere has been in reporting this. I keep reading posts where people mention that 18c is autonomous and how 18c will be the death of DBAs. Come on people, we can do better than this. I know what’s going to happen. Oracle 18c will be released and when it doesn’t contain all the autonomous goodness people will lose their minds and say, I told you it wouldn’t work!

Let me say this loud and clear!

  • Autonomous Database : It’s a cloud service that happens to use the Oracle 18c database.
  • Oracle Database 18c : It’s not autonomous!

This is not the first time I’ve said this. I mentioned it here when I wrote about the Autonomous Database announcement. I even mentioned using the cloud service here when I did the hands-on lab at OOW17.

Can we please have a little integrity and stop this junk reporting, or have we also entered the post-truth, fake-news world too? If you’ve written a blog post describing Oracle 18c as autonomous, please go back and correct it because you are making yourself look foolish and misleading people.

Sorry for the rant, but I felt it had to be said!

Cheers

Tim…

Update: At least one person seems not to have understood this post. I am not saying the Autonomous Database Cloud Service isn’t autonomous. I’ve used it briefly and from what I can see it seems pretty cool. This post is totally about the misreporting of the Oracle Database 18c product by bloggers (and some news outlets), which is not and doesn’t claim to be autonomous.

Update 2: So now I’m getting people asking me if I’m sure about this. As I said, people are being misled by rubbish reporting. Come on people, fix your stuff! If you don’t believe me, read what Maria Colgan said when she retweeted my post here.

Update 3: Some extra bullet points above to make things even more explicit.

  • If you buy an Autonomous Database Cloud Service on Oracle Public Cloud or Cloud@Customer, that’s a service that specifically includes the words “Autonomous Database” in the name of the service, you are getting an Autonomous Database.
  • If you buy regular 18c DBaaS on Oracle Public Cloud or Cloud@Customer you are not getting an Autonomous Database.
  • If you install 18c yourself on any cloud provider, including Oracle Public Cloud or Cloud@Customer, you are not getting an Autonomous Database.
  • If you install 18c yourself on-prem you are not getting an Autonomous Database.
Oracle Database 18c is NOT an Autonomous Database! was first posted on January 3, 2018 at 11:01 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

DBMCLI Utility

Syed Jaffar - Wed, 2018-01-03 03:45
Exadata comes with two types of servers: cell (storage) and compute (db). It is important for Database Machine Administrators (DMAs) to manage and configure both types of servers for various purposes. For cell server administration and server settings, we use the cellcli utility. With X6 and higher, the DBMCLI utility can be used to administer and configure server settings on the compute nodes.

The DBMCLI utility is the command-line administration tool for configuring database servers and managing the server environment. Like cellcli, DBMCLI runs on each compute server to enable you to configure an individual database server. DBMCLI can be used to start and stop the server, to manage server configuration information, and to enable or disable servers. The utility is preconfigured when Oracle Exadata Database Machine is shipped.

Note: If you have enabled Capacity on Demand (CoD) during the Exadata implementation, you can use this utility to increase the CPU count on demand.

To invoke/start the utility at the $ prompt, just execute the dbmcli command:

$ dbmcli [port_number] [-n] [-m] [-xml] [-v | -vv | -vvv] [-x] [-e command]

With the dbmcli utility, you can retrieve the server configuration details and metric information, configure SMTP server details, start/stop the server, etc.
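A few illustrative commands (attribute and option names are from memory and may vary slightly between Exadata software releases):

DBMCLI> LIST DBSERVER DETAIL
DBMCLI> LIST METRICCURRENT
DBMCLI> ALTER DBSERVER smtpServer='mail.example.com', smtpFromAddr='exadata-alerts@example.com'
DBMCLI> ALTER DBSERVER pendingCoreCount=16
DBMCLI> ALTER DBSERVER RESTART SERVICES ALL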




For more details, read the Oracle documentation:

https://docs.oracle.com/cd/E80920_01/DBMMN/exadata-dbmcli.htm#DBMMN22063

 

Oracle MOOC: Introduction to NodeJS Using Oracle Cloud (2018)

Oracle Application Container Cloud Service lets you deploy Java SE, Node.js, PHP, Python, Ruby and Go applications to the Oracle Cloud. You can also Java EE web...

We share our skills to maximize your revenue!
Categories: DBA Blogs

No More Content Chaos – Fishbowl Helps Manufacturer Integrate Oracle WebCenter with Agility for Enterprise Asset Management and Multichannel Delivery

This post is a recap of a recent Oracle WebCenter Content digital asset management project we completed for a manufacturing customer. They have thousands of products, and each has its own set of images, illustrations, videos, and manuals. Over 20,000 assets of these types support multiple processes and are distributed across a variety of mediums.

The Business Problem: Disparate Content Storage

The company was storing some of its assets within Oracle Universal Content Management (UCM). Others, however, were stored in employee laptops, file servers, and various other locations. The assets were also stored with multiple naming conventions. These factors made it difficult to find relevant assets for products and repurpose them across channels and teams, as well as collaborate on the assets internally or with partners. Employees involved in asset creation and management described it as “content chaos”.

Illustration 1: Content Creation Lifecycle
The company’s view on how content will be created and consumed. This includes the various audiences for the content and illustrates the necessary content types (electronic and print) – technical specifications, price and availability, manuals, etc.

This disparate content storage architecture and inconsistent file naming also meant that there was limited visibility into when and where purchased assets like image libraries were being used. This made it difficult to share such investments throughout the company.

Selecting Oracle WebCenter Content for Asset Storage and Management

With these issues identified, the company set out to develop a strategy to implement a permissions-based, centralized document and digital asset management system.  They had been using Oracle UCM for some time, but their initial implementation was not architected in a way to best suit the needs to search, access, use, and share assets across the company.

The company had also recently deployed the Agility product information management (PIM) system. Agility is used to manage the full lifecycle of the company’s products, including where and how they can be marketed and sold, as well as what they should cost. The company wanted to integrate Agility with Oracle UCM, so that they could relate products in Agility with associated assets in UCM. This would make it easier to support commerce channels, including the company’s website, print catalogs, and the future desire to enable the sales team to access the company’s full product line “binder” from their mobile devices. The company needed the content management system to be able to output multiple rendition types including videos and PDFs.

For these reasons, the company decided to upgrade Oracle UCM to WebCenter Content 12c. An added benefit of 12c was that it provided the ability to integrate with third-party audio and video transcoding systems. This enabled the company to leverage FFmpeg software, which records, converts, and streams audio and video data. This video rendition engine is an open source product and therefore free, which helped keep implementation costs down as no additional licenses needed to be purchased.

The company partnered with Fishbowl Solutions to perform the upgrade to WebCenter 12c and integrate the product with Agility. Fishbowl had engaged with this company previously to scope out the future use of Oracle UCM (WebCenter), as well as design how its digital asset management component could be architected to achieve the following:

  • Simplified Image Access and Control
    • Provide an easy-to-use system that minimizes non-value add tasks. Users should have simple input mechanisms, assets should be transformed transparently to the user, and end up distributed to the right channel based on product name and metadata values.
  • Simple Scalable Solution for More Effective Collateral Control Across All Brands
    • Simplified solution architecture that is scalable to the needs of the company’s brands and shared functions and clarifies/enforces the defined “fit for purpose” solutions.
  • Image and Referential Content are Appropriately Managed throughout the Lifecycle
    • Knowledge workers want a solution that manages data according to organizational policy and standards within the systems they use on a day-to-day basis and does not require significant additional effort.
Oracle WebCenter Content 12c Implementation and Agility Integration

The company created a “Portfolio Initiative Roadmap” identifying these 9 key initiatives required to build out their Enterprise Digital Asset Management Solution and Program:

  1. Establish Foundation
  2. Marketing and Creative Services Enablement
  3. Sales Enablement
  4. Training and Certification Enablement
  5. Engineering Enablement
  6. Archive and Access AP Images To/From Oracle WebCenter Content
  7. Creative Services Enablement
  8. Ecommerce Enablement
  9. Evolve Foundation

Fishbowl worked with the company to deliver roadmap item number one: Establish Foundation. With this initiative Fishbowl needed to first upgrade Oracle UCM to Oracle WebCenter 12c. This included migrating the creative assets into the new instance. In parallel with the content migration and upgrade, the Fishbowl team designed an integration between Agility and Oracle WebCenter. This integration would enable Agility’s product taxonomy and data model to be associated with WebCenter’s metadata fields. This essentially would enable metadata between the systems to be shared and synced, making it easier to standardize on how assets were named.

The Fishbowl team did an outstanding job researching, planning, troubleshooting and creating the migration schedule for the Oracle Universal Content Management to WebCenter Content 12c upgrade. We did encounter one issue the night of the release, but the team developed a resolution and was ready to attempt the upgrade within 3 days.  I had the utmost confidence their plan was solid and we would attempt the upgrade mid-week.  The next attempt went very smoothly and users were in WebCenter Content the next morning.

Manager, Digital Strategy, Manufacturing Company

This integration would also provide renditions of the assets stored in Oracle WebCenter to display in Agility. For example, photographs and images of the company’s products are rendered via Oracle WebCenter’s digital asset management component to produce various formats – high and low resolution, 1200 x 1200, etc. Since Agility would be used by many departments at the company, including marketing, creative services, sales, and engineering, it was important that various formats of the assets could be easily found. This would help accelerate the execution of campaigns through web and email channels, as well as when print-ready images were needed to create or update product installation manuals and catalogs that were stored as PDFs. Additionally, associating the assets with the products in Agility would enable them to be paired so that they could be part of the product’s lifecycle – when products were discontinued, associated assets could be removed from the system. The following graphic illustrates the Oracle WebCenter and Agility integration:

Illustration 2: WebCenter Content and Agility Integration

Fishbowl designed a filter to capture all of the IDs of content items checked into the PIM Asset profile and add them to the Send To PIM table. A system event was then added in WebCenter which runs about every 5 minutes; it checks for assets in the Send To PIM table and adds them to the PIM table. After an asset is added to the PIM table, an API call is made to the PIM system to pull the assets from that table and add them to the appropriate product with any associated relations (i.e. the high resolution image for a specific product). After the asset is added into the PIM system, an API call is made back to WebCenter with the updated path of the asset.

Results

This company first invested in Oracle WebCenter about five years ago. Although the system was being used to store some technical and business documents, the feeling amongst senior leadership was that the return on that investment was minimal. It hadn’t reached widespread adoption, and it was viewed as a system that wasn’t streamlining business processes and therefore wasn’t saving the company money.

This company, like most WebCenter Content customers, wasn’t fully aware of all that the system had to offer. Oracle WebCenter Content’s capabilities had to be reimagined, and in this case it was the Manager of Digital Strategy at the company that found an ideal use case. He was aware of the “content chaos” that the marketing and creative services teams were experiencing, which was part of the reason campaigns for specific channels took a long time to launch, as well as how the assets for such campaigns couldn’t be repurposed easily.

With the implementation of Oracle WebCenter Content 12c, and its integration with Agility, that has all changed. WebCenter has been identified as the system to store and manage assets, and as those assets are checked in they are automatically linked to products in Agility. This means that employees across divisions and geographic locations can easily locate the acceptable product assets for use in marketing, sales, engineering, training, or installation purposes. Outdated assets can easily be removed from the system, either manually or based on a content lifecycle schedule. Furthermore, having a standardized repository for the assets will better ensure they are reused across divisions and geographic locations, and no longer does the unknown location of assets impede internal or external collaboration. Additionally, the open-source FFmpeg video rendition engine didn’t require the purchase of additional licenses, reducing future license and maintenance costs. Overall, the WebCenter and Agility-based digital asset management system has provided the foundation to effectively and efficiently control and deliver assets to the company’s targeted commerce channels.

The Fishbowl consultants on this project have all proved to be a very valuable part of my digital content strategy. I am very happy with the level of support I have received and amazed by how quickly they are able to provide conceptual ideas, working POC’s and final deliverables. They have been instrumental in developing and implementing the integration between Oracle WebCenter’s digital asset management component and our new Agility PIM application. I view them as members of my team, and they were all key in the successful implementation of our digital content management system.

Manager, Digital Strategy, Manufacturing Company


The post No More Content Chaos – Fishbowl Helps Manufacturer Integrate Oracle WebCenter with Agility for Enterprise Asset Management and Multichannel Delivery appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Using an OpenLayers map to select countries in an Oracle JET application

Amis Blog - Tue, 2018-01-02 07:03

In a recent article I discussed how the open source geo, GIS and map library OpenLayers can be used in an Oracle JET application. That article shows how countries selected in a standard Oracle JET Checkbox Set component are represented on a world map. In this article, we take this one step further – or actually several steps. By adding interactivity to the map, we allow users to select a country on the world map and we notify JET components of this selection. The synchronization works both ways: if the user types the name of a country in a JET input text component, this country is highlighted on the world map. Note that the map has full zooming and panning capabilities.

Webp.net-gifmaker (2)

 

The Gif demonstrates how the map is first shown with France highlighted. Next, the user types Chile into the input-text component and when that change is complete, the map is synchronized: Chile is highlighted. The user then hovers over Morocco – and the informational DIV element shows that fact. No selection is made yet. Then the user mouse clicks on Morocco. The country is selected on the map and the JET input-text component is synchronized with the name of the country. Subsequently, the same is done with India: hover and then select. Note: the map can easily support multi-country selection; I had to turn off that option explicitly (default behavior is to allow it).

The challenges I faced when implementing this functionality:

  • add vector layer with countries (features)
  • highlight country when mouse is hovering over it
  • add select interaction to allow user to select a country
  • communicate country selection event in map to “regular” JET component
  • synchronize map with country name typed into JET component

The steps (assuming that the steps in that earlier article are performed first):

  1. Create mapArea.html with input-text, informational DIV and map container (DIV)
  2. Create mapArea.js for the mapArea module
  3. Add a div with data bind for mapArea module to index.html
  4. Download a file in GEOJSON format with annotated geo-json geometries for the countries of the world
  5. Initialize the map with two layers – raster OSM world map and vector country shapes
  6. Add overlay to highlight countries that are hovered over
  7. Add Select Interaction – to allow selection of a country – applying a bold style to the selected country
  8. Update JET component from country selection
  9. Set country selection on map based on value [change]in JET component

And here is the code used to implement this: https://github.com/lucasjellema/jet-and-openlayers .

 

Create mapArea.html with input-text, informational DIV and map container (DIV)
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.6.4/ol-debug.css" />
<h2>Select Country on Map</h2>
<div id="componentDemoContent" style="width: 1px; min-width: 100%;">
    <div id="div1">

        <oj-label for="text-input">Country</oj-label>
        <oj-input-text id="text-input" value="{{selectedCountry}}" on-value-changed="[[countryChangedListener]]"></oj-input-text>
    </div>
</div>
<div id="info"></div>
<div id="map2" class="map"></div>
Create ViewModel mapArea.js for the mapArea module

 

define(
    ['ojs/ojcore', 'knockout', 'jquery', 'ol', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojlabel'],
    function (oj, ko, $, ol) {
        'use strict';
        function MapAreaViewModel() {
            var self = this;

            self.selectedCountry = ko.observable("France");
            self.countryChangedListener = function(event) {
            }
...
      }
        return new MapAreaViewModel();
    }
);
Add a DIV with data bind for mapArea module to index.html
    ...</header>
    <div role="main" class="oj-web-applayout-max-width oj-web-applayout-content">
    <div data-bind="ojModule:'mapArea'" /> 
    </div>
    <footer class="oj-web-applayout-footer" role="contentinfo">
    ...
Download a file in GEOJSON format with annotated geo-json geometries for the countries of the world

I downloaded a GEOJSON file with country data from GitHub: https://github.com/johan/world.geo.json and placed the file in the directory src\js\viewModels of my JET application:

image

 

Initialize the map with two layers – raster OSM world map and vector country shapes
        function MapAreaViewModel() {
            var self = this;

            self.selectedCountry = ko.observable("France");
            self.countryChangedListener = function(event) {
                // self.selectInteraction.getFeatures().clear();
                // self.setSelectedCountry(self.selectedCountry())                
            }


            $(document).ready
                (
                // when the document is fully loaded and the DOM has been initialized
                // then instantiate the map
                function () {
                    initMap();
                })


            function initMap() {
                var style = new ol.style.Style({
                    fill: new ol.style.Fill({
                        color: 'rgba(255, 255, 255, 0.6)'
                    }),
                    stroke: new ol.style.Stroke({
                        color: '#319FD3',
                        width: 1
                    }),
                    text: new ol.style.Text()
                });

                self.countriesVector = new ol.source.Vector({
                    url: 'js/viewModels/countries.geo.json',
                    format: new ol.format.GeoJSON()
                });
               self.map2 = new ol.Map({
                    layers: [
                        new ol.layer.Vector({
                            id: "countries",
                            renderMode: 'image',
                            source: self.countriesVector,
                            style: function (feature) {
                                style.getText().setText(feature.get('name'));
                                return style;
                            }
                        })
                        , new ol.layer.Tile({
                            id: "world",
                            source: new ol.source.OSM()
                        })
                    ],
                    target: 'map2',
                    view: new ol.View({
                        center: [0, 0],
                        zoom: 2
                    })
                });
         }//initMap

 

Add overlay to highlight countries that are hovered over

Note: this code is added to the initMap function:

                // layer to hold (and highlight) currently selected feature(s) 
                var featureOverlay = new ol.layer.Vector({
                    source: new ol.source.Vector(),
                    map: self.map2,
                    style: new ol.style.Style({
                        stroke: new ol.style.Stroke({
                            color: '#f00',
                            width: 1
                        }),
                        fill: new ol.style.Fill({
                            color: 'rgba(255,0,0,0.1)'
                        })
                    })
                });

                var highlight;
                var displayFeatureInfo = function (pixel) {

                    var feature = self.map2.forEachFeatureAtPixel(pixel, function (feature) {
                        return feature;
                    });

                    var info = document.getElementById('info');
                    if (feature) {
                        info.innerHTML = feature.getId() + ': ' + feature.get('name');
                    } else {
                        info.innerHTML = '&nbsp;';
                    }

                    if (feature !== highlight) {
                        if (highlight) {
                            featureOverlay.getSource().removeFeature(highlight);
                        }
                        if (feature) {
                            featureOverlay.getSource().addFeature(feature);
                        }
                        highlight = feature;
                    }

                };

                self.map2.on('pointermove', function (evt) {
                    if (evt.dragging) {
                        return;
                    }
                    var pixel = self.map2.getEventPixel(evt.originalEvent);
                    displayFeatureInfo(pixel);
                });

 

Add Select Interaction – to allow selection of a country – applying a bold style to the selected country

This code is based on this example: http://openlayers.org/en/latest/examples/select-features.html .

                // define the style to apply to selected countries
                var selectCountryStyle = new ol.style.Style({
                    stroke: new ol.style.Stroke({
                        color: '#ff0000',
                        width: 2
                    })
                    , fill: new ol.style.Fill({
                        color: 'red'
                    })
                });
                self.selectInteraction = new ol.interaction.Select({
                    condition: ol.events.condition.singleClick,
                    toggleCondition: ol.events.condition.shiftKeyOnly,
                    layers: function (layer) {
                        return layer.get('id') == 'countries';
                    },
                    style: selectCountryStyle

                });
                // add an event handler to the interaction
                self.selectInteraction.on('select', function (e) {
                    //to ensure only a single country can be selected at any given time
                    // find the most recently selected feature, clear the set of selected features and add the selected the feature (as the only one)
                    var f = self.selectInteraction.getFeatures()
                    var selectedFeature = f.getArray()[f.getLength() - 1]
                    self.selectInteraction.getFeatures().clear();
                    self.selectInteraction.getFeatures().push(selectedFeature);
                });

and just after the declaration of self.map2:

...
    self.map2.getInteractions().extend([self.selectInteraction]);

Update JET component from country selection

Add to the end of the select event handler of the selectInteraction:

                    var selectedCountry = { "code": selectedFeature.id_, "name": selectedFeature.values_.name };
                    // set name of selected country on Knock Out Observable
                   self.selectedCountry(selectedCountry.name);

Create the following function to programmatically select a country by name:

                self.setSelectedCountry = function (country) {
                    //programmatic selection of a feature
                    var countryFeatures = self.countriesVector.getFeatures();
                    var c = self.countriesVector.getFeatures().filter(function (feature) { return feature.values_.name == country });
                    self.selectInteraction.getFeatures().push(c[0]);
                }
Set country selection on map based on value [change]in JET component

Implement the self.countryChangedListener that is referred to from the input-text component in the mapArea.html file:

            self.countryChangedListener = function(event) {
                self.selectInteraction.getFeatures().clear();
                self.setSelectedCountry(self.selectedCountry())                
            }

Create the following listener (for the end of loading the GeoJSON data in the countriesVector); when loading is ready, the current country value in the selectedCountry observable backing the input-text component is used to select the initial country:

                var listenerKey = self.countriesVector.on('change', function (e) {
                    if (self.countriesVector.getState() == 'ready') {
                        console.log("loading dione");
                        // and unregister the "change" listener 
                        ol.Observable.unByKey(listenerKey);
                        self.setSelectedCountry(self.selectedCountry())
                    }
                });

 

References

GitHub Repo with the code (JET Application) : https://github.com/lucasjellema/jet-and-openlayers .

Countries GeoJSON file – https://github.com/johan/world.geo.json

Open Layers Example of Select Interaction – http://openlayers.org/en/latest/examples/select-features.html

Open Layers API – Vector: http://openlayers.org/en/latest/apidoc/ol.source.Vector.html

Event Listener for OpenLayers Vector with GEOJSON source – https://gis.stackexchange.com/questions/123149/layer-loadstart-loadend-events-in-openlayers-3/123302#123302

Animated Gif maker – http://gifmaker.me/

OpenLayers 3 : Beginner’s Guideby Thomas Gratier; Erik Hazzard; Paul Spencer, Published by Packt Publishing, 2015

OpenLayers Book – Handling Selection Events –  http://openlayersbook.github.io/ch11-creating-web-map-apps/example-08.html

The post Using an OpenLayers map to select countries in an Oracle JET application appeared first on AMIS Oracle and Java Blog.

Wrong Cell Server name on X6-2 Elastic Rack - Bug 25317550

Syed Jaffar - Tue, 2018-01-02 05:55
Two X6-2 Elastic Full Capacity Exadata systems were deployed recently. Due to the following bug, the cell names were not properly updated with the client-provided names after executing applyElasticConfig.sh.

Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

Though this doesn't impact operations, it will certainly create confusion when multiple Exadata systems are deployed in the same data center, because of the exact names of the cells, cell disks, and grid disks.

Note: It is highly recommended to validate the cell names after executing applyElasticConfig.sh and before running onecommand. If you encounter a similar problem, simply change the cell name with alter cell name=[correctname] and proceed with the onecommand execution to avoid the bug.

The default names look like this:

# dcli -g cell_group -l root 'cellcli -e list cell attributes name'
                        celadm1: ru02
                        celadm1: ru04
                        celadm3: ru06
                        celadm4: ru08
                        celadm5: ru10
                        celadm6: ru12



To correct the cell name and the corresponding cell disk and grid disk names, you need to follow the procedure below:

The procedure below must be performed on each cell separately and sequentially (to avoid full downtime):

1) Change the cell name:
    cellcli> alter cell name=celadm5

 
2) Confirm griddisks can be taken offline.

    cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
            ASMDeactivationOutcome - Should be YES for all griddisks


3) Inactivate griddisk on that cell
    cellcli> alter griddisk all inactive

 
            Observation - if any voting disks are on this storage server, they will relocate to the surviving storage servers.


4) Change cell disk name
            alter celldisk CD_00_ru10 name=CD_00_celadm5;
            alter celldisk CD_01_ru10 name=CD_01_celadm5;
            alter celldisk CD_02_ru10 name=CD_02_celadm5;
            alter celldisk CD_03_ru10 name=CD_03_celadm5;
            alter celldisk CD_04_ru10 name=CD_04_celadm5;
            alter celldisk CD_05_ru10 name=CD_05_celadm5;
            alter celldisk CD_06_ru10 name=CD_06_celadm5;
            alter celldisk CD_07_ru10 name=CD_07_celadm5;
            alter celldisk CD_08_ru10 name=CD_08_celadm5;
            alter celldisk CD_09_ru10 name=CD_09_celadm5;
            alter celldisk CD_10_ru10 name=CD_10_celadm5;
            alter celldisk CD_11_ru10 name=CD_11_celadm5;

5) Change Griddisk name using the below examples (do it for all grid disks, DATAC1, DBFS & RECOC1)
            alter GRIDDISK DATAC1_CD_00_ru10  name=DATAC1_CD_00_celadm5;
            alter GRIDDISK DBFS_DG_CD_02_ru10 name=DBFS_DG_CD_02_celadm5;
            alter GRIDDISK RECOC1_CD_11_ru10  name=RECOC1_CD_11_celadm5;

6) Activate griddisk on that cell
            cellcli> ALTER GRIDDISK ALL ACTIVE;
       
        There are some important points to be noted after activating griddisks.


      a) ASM disk path and name
       * the griddisk name change is automatically reflected in the ASM disk path.
       * the ASM logical disk name still refers to the old name.
      b) failgroup
       * the failgroup name is not changed and still uses the old name.

7) Changing ASM logical name and failgroup name.

    * This can be achieved by dropping the ASM disks and adding them back with the correct name. The observation is that the failgroup name gets changed automatically when we add the ASM disks back with the correct name.
    * ASMCA is the best tool to drop and add back ASM disks with a high (250+) rebalance power limit.
   
   
    a) Drop asm disks and observations.
        * We need to make sure asmdisks can be dropped
             cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
                ASMDeactivationOutcome - Should be YES for all griddisks
        * Drop asmdisks using asmca or alter diskgroup 


        We can see the ASM disk state change to DROPPING and an ongoing rebalance operation.
   
        * ASM rebalance operation.
            We can monitor the ongoing ASM rebalance operation using the commands below and change the power to finish it faster.


                sqlplus / as sysasm


                sql> select * from v$asm_operation;
                sql> alter diskgroup DATAC1 rebalance power 256;


        * Once the rebalance operation has completed, we can see the ASM disk state changed back to normal, the disk name becomes empty, and the failgroup is also changed with the correct name.
       
b) Add back ASM disks and observations.

Adding the disks back can also be done using ASMCA or ALTER DISKGROUP commands.
We need to make sure we add them back with the correct name; in this case DATAC1_CD_00_RU10 should be added back as DATAC1_CD_00_celadm5.

We can monitor the ongoing ASM rebalance operation using the commands below and change the power to finish it faster.


        sqlplus / as sysasm
        sql> select * from v$asm_operation;
   
8) Remaining cells

We can continue the same operation for the remaining cells, and the entire operation can be completed without any downtime at the database level.
Once we have completed it, we can see that all voting disks have also been relocated or renamed with the new names.


References:
Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

I appreciate and thank my team member Khalid Kizhakkethil for doing this wonderful job and preparing the documentation.


 



New OA Framework 12.2.5 Update 18 Now Available

Steven Chan - Tue, 2018-01-02 05:33

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes (TBS) fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for following bugs/issues:

  • Tip type is not displayed in 'Screen Reader Optimized' accessibility mode.
  • Null Pointer Exception is thrown with a specific Configurator Developer flow that uses the query bean.


Categories: APPS Blogs

Quick SQL: from Packaged App to built-in feature in Oracle APEX 5.2

Dimitri Gielis - Tue, 2018-01-02 04:26
I blogged about Quick SQL already a few times as I saw not many developers knew about it.

In Oracle APEX 5.1 you can install Quick SQL by going to the Packaged Apps section and install it from there:



I really love Quick SQL as it allows me to build my data model very fast, but also shows me the structure of the tables in a very efficient way. That is why I created a script that can reverse engineer existing tables into the Quick SQL format.

From Oracle APEX 5.2 onwards you won't find Quick SQL in the packaged app section anymore... but no worries, it's not gone, it's now built into the APEX framework itself :)

Go to SQL Workshop - SQL Scripts:


Hit the Quick SQL button:


Here you have Quick SQL :)


You can run your script after you save, straight from this interface.

Note: the screenshots are taken from Oracle APEX 5.2 Early Adopter, so things might change in the final release of APEX 5.2.
Categories: Development

Defaults

Jonathan Lewis - Tue, 2018-01-02 02:43

Following on from a Twitter reference and an update to an old posting about a side effect of  constraints on the work done inserting data, I decided to have a closer look at the more general picture of default values and inserts. Here’s a script that I’ve tested against 11.2.0.4, 12.1.0.2, and 12.2.0.1 (original install, no patches applied in all cases):


rem
rem     Script:         defaults_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2017
rem

create table t1 (
        column1  varchar2(10),
        column2  varchar2(10),
        column3  varchar2(10),
        column4  varchar2(10),
        column32 varchar2(32)   default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        column33 varchar2(33)   default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        virtual1      generated always as (
                column4 || column3 || column2 || column1
        ) virtual
)
segment creation immediate
;

execute dbms_output.put_line(dbms_stats.create_extended_stats(user,'t1','(column1 || column2 || column3 || column4)'))
execute dbms_output.put_line(dbms_stats.create_extended_stats(user,'t1','(column1,column2,column3)'))

create or replace function plsqlfunction_with_30char_name(
        i_in varchar2
)
return varchar
deterministic
is
begin
        return initcap(i_in);
end;
/

create index t1_i1 on t1(substr(plsqlfunction_with_30char_name(column1),1,10));

When you create a function-based index you get a hidden, virtual column supporting the index expression; when you create extended stats (of either type) you get a hidden virtual column holding the extension definition; when you create any type of virtual column, including a “real” virtual column, you get a data dictionary entry holding the column name and the expression definition: all these options use the “data_default” column from user_tab_cols to display the defining information – as we can see when we run the following query:


select  column_name, data_default
from    user_tab_cols
where   table_name = 'T1'
order by
         column_id
;

COLUMN_NAME                      DATA_DEFAULT
-------------------------------- --------------------------------------------------------------------------------
COLUMN1
COLUMN2
COLUMN3
COLUMN4
COLUMN32                         'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
COLUMN33                         'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
VIRTUAL1                         "COLUMN4"||"COLUMN3"||"COLUMN2"||"COLUMN1"
SYS_STUDAN97VB_XDKRTR_NPFAB80P   "COLUMN1"||"COLUMN2"||"COLUMN3"||"COLUMN4"
SYS_STUTCBJ8OFFSY1D9ZXRYZ0W3W#   SYS_OP_COMBINED_HASH("COLUMN1","COLUMN2","COLUMN3")
SYS_NC00010$                     SUBSTR("TEST_USER"."PLSQLFUNCTION_WITH_30CHAR_NAME"("COLUMN1"),1,10)

Apart from the special cases I’ve just listed, you’ll also see the “default values” I specified for column32 and column33 – you’ll notice that I’ve supplied a 30 character string as the default for column32, and a 31 character string as the default for column33 – this is a convenience that means the used space in the data_default (which is a long column) corresponds to the name of the column once you include the single quotes in their character count.

Having set my data up I’m going to emulate a bad application that uses lots of literal string SQL and leaves Oracle to fill in the default values (and, of course, derive the various virtual values it might need).


alter session set events '10046 trace name context forever, level 4';

begin
        for i in 1..10 loop
                execute immediate '
                        insert into t1 (column1, column2, column3, column4)
                        values( ' || i || ', ' || i || ', ' || i || ', ' || i || ')'
                ;
                commit;
        end loop;
end;
/

alter session set events '10046 trace name context off';

This code generates 10 strings that populate column1 through to column4 only. But you’ll notice the call to enable SQL tracing – and here’s the interesting bit of the output from applying tkprof to the trace file:


  SQL ID: 47r1y8yn34jmj Plan Hash: 2191121161

select default$
from
 col$ where rowid=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       50      0.00       0.00          0          0          0           0
Execute     50      0.00       0.00          0          0          0           0
Fetch       50      0.00       0.00          0        100          0          50
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      150      0.00       0.00          0        100          0          50

The summary is the same for all three versions of Oracle I tested*** – we’ve queried for a default value from col$ 5 times for each row we insert! (Technically that’s 5 times for each insert statement we’ve had to (hard-)parse; this anomaly wouldn’t appear if we have been using a bind-variable method and reusing the insert statement.) *** There is one difference in 12.2 – the number of parse calls reported for this statement was 1 rather than 50 but, judging by the various cursor cache hit stats, that may be due to a change in accounting rather than a change in workload.

Check the table definition: there are two “real defaults” and 4 expressions due to the various virtual columns – so why 5 calls per insert and not 6 ? The answer lies in the length of the actual value involved – if the text that appears in the (long) data_default column is 32 characters or shorter it will be stored in the dictionary cache (rowcache), but only one of our 6 values is that short, so Oracle looks up the other five on each insert (hard parse).
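If you want to check which of the definitions break that limit, the default_length column reports the character count without having to manipulate the long column (this query is my addition, not part of the original test):


select  column_name, default_length
from    user_tab_cols
where   table_name = 'T1'
and     default_length is not null
order by
        column_id
;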

This is a little strange on two counts: first, given the enormous memories that have been available for years and the extreme usefulness of virtual columns and extended stats it seems something of an oversight to limit the memory available to the cache that holds such critical definitions; secondly, we don’t need to evaluate the virtual columns (unless they are indexed) or extended stats on inserts so why is Oracle querying the definitions anyway ? [Possible answer: perhaps Oracle is using generic code that allows for column check constraints – which might exist on users’ virtual columns – and hasn’t catered for bypassing system-generated virtual columns.]

A key point to remember before you start worrying too much about the impact of the execution count for this query against col$ is that it’s only relevant to “hard” parses – so you’re only likely to notice it if you have a large number of different “literal string” inserts that should be using bind variables; and that means you’ve probably got an important coding defect to address before you worry too much about the extra impact caused by this particular call. Nevertheless there are a couple of bug reports on MoS that have been raised against this query and, after writing this note, I did a quick Google search for the critical SQL_ID and found (among others) this production example from Franck Pachot.
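For comparison, here is a bind-variable version of the loop (my sketch, not part of the original test); it hard-parses the insert only once, so the col$ lookups happen just on that single hard parse rather than on every execution:


begin
        for i in 1..10 loop
                execute immediate '
                        insert into t1 (column1, column2, column3, column4)
                        values(:b1, :b2, :b3, :b4)'
                        using i, i, i, i
                ;
                commit;
        end loop;
end;
/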

 


Embedding OpenLayers in Oracle JET for Advanced Maps and GIS style User Interfaces

Amis Blog - Mon, 2018-01-01 10:49

Oracle JET is a toolkit for the creation of rich web applications. Many applications will have location-related aspects. Such applications can benefit from advanced map capabilities – for presenting data in maps, allowing users to interact with maps in order to formulate queries, navigate to relevant details or manipulate data. OpenLayers is one of the most prominent open source JavaScript libraries for working with maps in web applications. It provides an API for building rich web-based geographic applications similar to Google Maps and Bing Maps. One of the geographic data providers that OpenLayers works well with is Open Street Map (OSM) – also fully open source.

In this article, I will report on my first steps with OpenLayers and OSM integrated in Oracle JET. In a few simple steps, I will create the JET application illustrated below – a mix of a JET Checkbox Set where countries can be selected and an OpenLayers map that is manipulated from JavaScript to show (and hide) markers for the countries that are selected (and deselected).


This article should provide you with a starting point for working with OpenLayers in JET yourself. Source code for this article can be downloaded from GitHub: https://github.com/lucasjellema/jet-and-openlayers .

Steps:

  • create new JET application (for example with JET CLI)
  • download OpenLayers distribution and add to JET application’s folders
  • configure the JET application for the OpenLayers library
  • add a DIV as map container to the HTML file
  • add the JavaScript code to initialize and manipulate the map to the ViewModel JS file

In more detail:

1. Create a new Oracle JET application

Follow for example the steps described on the Oracle JET Web Pages: http://www.oracle.com/webfolder/technetwork/jet/globalGetStarted.html 

Use

ojet create projectname

to create the new JET application

2. Download OpenLayers Distribution and Add to the JET Application

Download the OpenLayers distribution – a zip-file with the combined JavaScript and CSS files for OpenLayers (4.x) from https://github.com/openlayers/openlayers/releases/ 

In the JET application’s js/libs directory, create a new directory openlayers and add the new library and any accompanying files to it.

image

3. Configure the JET application for the OpenLayers library

In the js directory update the js/main-release-paths.json file to include the new library.

  "ol": "libs/openlayers/ol-debug",
  "olcss": "libs/openlayers/ol.css"
}

In your RequireJS bootstrap file, typically main.js, add a link to the new file in the path mapping section and include the new library in the require() definition.

  paths:
  //injector:mainReleasePaths
  {
    ...
    'ol': 'libs/openlayers/ol-debug'
  }
  //endinjector

In the same file add a Shim Configuration for OpenLayers

// Shim configurations for modules that do not expose AMD
  shim:
  {
    'jquery':
    {
      exports: ['jQuery', '$']
    }
    ,'ol':
    {
      exports: ['ol']
    }
  }
}

Finally, add the module ‘ol’ to the call to require and as a parameter in the callback function (if you want to perform special initialization on this module):

require(['ojs/ojcore', 'knockout', 'appController','ol', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar', 'ojs/ojmenu','ojs/ojmodule'],
  function (oj, ko, app, ol) { // this callback gets executed when all required modules are loaded
    ...

Now to actually include a map in a View in the JET application:

4. Add a DIV as map container to the HTML file

The View contains a DIV that will act as the container for the map. It also contains a Checkbox Set with checkboxes for five different countries. The checkbox set is data bound to the ViewModel; any change in selection status will trigger an event listener. Additionally, the currentCountries variable in the ViewModel is updated with any change by the user.

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.6.4/ol-debug.css" />
<h2>Workarea with Map - Select Countries</h2>
<div id="div1">
        <oj-checkboxset id="countriesCheckboxSetId" labelled-by="mainlabelid" class="oj-choice-direction-row" value="{{currentCountries}}"
                on-value-changed="[[selectionListener]]">
                <oj-option id="uruopt" value="uy">Uruguay</oj-option>
                <oj-option id="romopt" value="ro">Romania</oj-option>
                <oj-option id="moropt" value="ma">Morocco</oj-option>
                <oj-option id="spaopt" value="es">Spain</oj-option>
                <oj-option id="indopt" value="in">India</oj-option>
        </oj-checkboxset>
        <br/>
</div>
<div id="map2" class="map"></div>

5. Add JavaScript code to initialize and manipulate the map to the ViewModel JS file

Add OpenLayers dependency in workArea.js:

define(
    ['ojs/ojcore', 'knockout', 'jquery', 'ol', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojbutton', 'ojs/ojlabel', 'ojs/ojcheckboxset'],
    function (oj, ko, $, ol) {
        'use strict';
        function WorkAreaViewModel() {
            var self = this;

The following code defines a countryMap – a collection of five elements (one for each of five countries) that hold longitude and latitude for each country, as well as a display name and country code (also the key in the map). Subsequently, an OpenLayers feature is created for each country and referenced from the countryMap element for later use.

            self.currentCountries = ko.observableArray([]);

            self.countryMap = {};
            self.countryMap['in'] = { "place_id": "177729185", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "304716", "boundingbox": ["6.5546079", "35.6745457", "68.1113787", "97.395561"], "lat": "22.3511148", "lon": "78.6677428", "display_name": "India", "class": "boundary", "type": "administrative", "importance": 0.3133568788165, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "India", "country_code": "in" } };
            self.countryMap['es'] = { "place_id": "179962651", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "1311341", "boundingbox": ["27.4335426", "43.9933088", "-18.3936845", "4.5918885"], "lat": "40.0028028", "lon": "-4.003104", "display_name": "Spain", "class": "boundary", "type": "administrative", "importance": 0.22447060272487, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Spain", "country_code": "es" } };
            self.countryMap['ma'] = { "place_id": "217466685", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "3630439", "boundingbox": ["21.3365321", "36.0505269", "-17.2551456", "-0.998429"], "lat": "31.1728192", "lon": "-7.3366043", "display_name": "Morocco", "class": "boundary", "type": "administrative", "importance": 0.19300832455819, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Morocco", "country_code": "ma" } }
            self.countryMap['ro'] = { "place_id": "177563889", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "90689", "boundingbox": ["43.618682", "48.2653964", "20.2619773", "30.0454257"], "lat": "45.9852129", "lon": "24.6859225", "display_name": "Romania", "class": "boundary", "type": "administrative", "importance": 0.30982735099944, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Romania", "country_code": "ro" } };
            self.countryMap['uy'] = { "place_id": "179428864", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "287072", "boundingbox": ["-35.7824481", "-30.0853962", "-58.4948438", "-53.0755833"], "lat": "-32.8755548", "lon": "-56.0201525", "display_name": "Uruguay", "class": "boundary", "type": "administrative", "importance": 0.18848351906936, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Uruguay", "country_code": "uy" } };

            for (const c in self.countryMap) {
                // create a feature for each country in the map, using its lon/lat
                var coordinates = ol.proj.transform([1 * self.countryMap[c].lon, 1 * self.countryMap[c].lat], 'EPSG:4326', 'EPSG:3857');
                var featurething = new ol.Feature({
                    name: self.countryMap[c].display_name,
                    geometry: new ol.geom.Point(coordinates)
                });
                // keep a reference to the feature on the country entry for later use
                self.countryMap[c].feature = featurething;
            }

Then add the code to do the initialization of the Map itself – to be performed when the DOM is ready and the DIV target container is available:

            $(document).ready
                (
                // when the document is fully loaded and the DOM has been initialized
                // then instantiate the map
                function () {
                    initMap();
                })

            function initMap() {
                self.elem = document.getElementById("text-input");
                self.map = new ol.Map({
                    target: 'map2',
                    layers: [
                        new ol.layer.Tile({
                            source: new ol.source.OSM()
                        })
                    ],
                    view: new ol.View({
                        center: ol.proj.fromLonLat([-2, -5]),
                        zoom: 3
                    })
                });
            }


Also add the code for the selectionListener to be executed whenever countries are selected or deselected.
This code adds OpenLayers features for each of the currently selected countries. Next, construct a layer which contains these features and has a specific style (red circle with big X) associated with it. Finally, add this layer to the map – to have the features displayed in the web page.

            // triggered whenever a checkbox is selected or deselected
            self.selectionListener = function (event) {
                console.log("Country Selection Changed");

                var vectorSource = new ol.source.Vector({}); // to hold features for currently selected countries
                for (var i = 0; i < self.currentCountries().length; i++) {
                    // add the feature to the map for each currently selected country
                    vectorSource.addFeature(self.countryMap[self.currentCountries()[i]].feature);
                }//for

                var layers = self.map.getLayers();
                // remove the feature layer from the map if it already was added
                if (layers.getLength() > 1) {
                    self.map.removeLayer(layers.item(1));
                }
                //Create and add the vector layer with features to the map
                // define the style to apply to these features: bright red, circle with radius 10 and a X as (text) content
                var vector_layer = new ol.layer.Vector({
                    source: vectorSource
                    ,style: function(feature) {
                        var style = new ol.style.Style({
                            image: new ol.style.Circle({
                              radius: 10,
                              stroke: new ol.style.Stroke({
                                color: '#fff'
                              }),
                              fill: new ol.style.Fill({
                                //color: '#3399CC' // light blue
                                color: 'red' // bright red
                            })
                            }),
                            text: new ol.style.Text({
                              text: "X",
                              fill: new ol.style.Fill({
                                color: '#fff'
                              })
                            })
                          });
                          return style;
                        }
                 } )
                self.map.addLayer(vector_layer);

            }//selectionListener
        }

References

Source code in GitHub Repo: https://github.com/lucasjellema/jet-and-openlayers 

Blog article by Enno Schulte (Virtual7) on adding Socket.io as a third party library to a JET 3.x application: http://www.virtual7.de/blog/2017/07/oracle-jet-3-add-third-party-libraries-example-socket-io/

Documentation on adding 3rd party libraries to JET 4.0: https://docs.oracle.com/middleware/jet410/jet/developer/GUID-EC40DF3C-57FB-4919-A066-73E573D66B67.htm#JETDG-GUID-EC40DF3C-57FB-4919-A066-73E573D66B67 

OJET Docs Checkbox Set – http://www.oracle.com/webfolder/technetwork/jet/jsdocs/oj.ojCheckboxset.html

The post Embedding OpenLayers in Oracle JET for Advanced Maps and GIS style User Interfaces appeared first on AMIS Oracle and Java Blog.

Happy New Year 2018

Senthil Rajendran - Sun, 2017-12-31 21:46

Happy New Year 2018

Data Access Layer vs Table APIs

Andrew Clarke - Sun, 2017-12-31 11:59
One of the underlying benefits of PL/SQL APIs is the enabling of data governance. Table owners can shield their tables behind a layer of PL/SQL. Other users have no access to the tables directly but only through stored procedures. This confers many benefits:
  • Calling programs code against a programmatic interface. This frees the table owner to change the table's structure whenever it's necessary without affecting its consumers.
  • Likewise the calling programs get access to the data they need without having to know the details of the table structure, such as technical keys.
  • The table owner can use code to enforce complicated business rules when data is changed.
  • The table owner can enforce sophisticated data access policies (especially for applications using Standard Edition without DBMS_RLS).
So naturally the question arises, is this the same as Table APIs?

Table APIs used to be a popular approach to encapsulating tables. The typical Table API comprised two packages per table; one package provided methods for inserting, updating and deleting records, and the other package provided query methods. The big attraction of Table APIs was that they could be 100% generated from the data dictionary - both Oracle Designer and Steven Feuerstein's QNXO library provided TAPI generators. And they felt like good practice because, y'know, access to the tables was shielded by a PL/SQL layer.
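To make that shape concrete, here is a minimal sketch of the kind of package specification such generators produced (the EMP table and column names are purely illustrative):

create or replace package emp_tapi as
    -- one record at a time: the classic generated TAPI shape
    procedure ins (p_rec   in emp%rowtype);
    procedure upd (p_rec   in emp%rowtype);
    procedure del (p_empno in emp.empno%type);
    function  get (p_empno in emp.empno%type) return emp%rowtype;
end emp_tapi;
/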

But there are several problems with Table APIs.

The first is that they entrench row-by-agonising-row processing. Table APIs have their roots in early versions of Oracle so the DML methods only worked with a single record. Even after Oracle 8 introduced PL/SQL collection types, TAPI code in the wild tended to be RBAR: there seems to be something in the brain of the average programmer which predisposes them to prefer loops executing procedural code rather than set operations.

The second is that they prevent SQL joins. Individual records have to be selected from one table to provide keys for looking up records in a second table. Quite often this leads to loops within loops. So-called PL/SQL joins prevent the optimizer from choosing good access paths when handling larger amounts of data.

The third issue is that it is pretty hard to generate methods for all conceivable access paths. Consequently the generated packages had a few standard access paths (primary key, indexed columns) and provided a dynamic SQL method which accepted a free text WHERE clause. Besides opening the package to SQL injection this also broke the Law of Demeter: in order to pass a dynamic WHERE clause the calling program needed to know the structure of the underlying table, which defeats the whole objective of encapsulation.

Which leads on to the fourth, more philosophical problem with Table APIs: there is minimal abstraction. Each package is generated so it fits very closely to the structure of the Table. If the table structure changes we have to regenerate the TAPI packages: the fact that this can be done automatically is scant recompense for the tight coupling between the Table and the API.

So although Table APIs could be mistaken for good practice in actuality they provide no real benefit. The interface is 1:1 with the table structure so it has no advantage over granting privileges on the table. Combined with the impact of RBAR processing and PL/SQL joins on performance and the net effect of Table APIs is disastrous.

We cannot generate good Data Access APIs: we need to write them. This is because the APIs should be built around business functions rather than tables. The API packages granted to other users should comprise procedures for executing transactions. A Unit Of Work is likely to touch more than one table. These have to be written by domain experts who understand the data model and the business rules.
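By way of contrast with the generated TAPI sketched earlier, a hand-written transactional API might look something like this (the ORDERS/ORDER_LINES tables and the sequence are invented purely for illustration):

create or replace package order_api as
    -- one procedure per business transaction, not per table
    procedure place_order (
        p_customer_id in  orders.customer_id%type,
        p_product_id  in  order_lines.product_id%type,
        p_quantity    in  order_lines.quantity%type,
        p_order_id    out orders.order_id%type
    );
end order_api;
/

create or replace package body order_api as
    procedure place_order (
        p_customer_id in  orders.customer_id%type,
        p_product_id  in  order_lines.product_id%type,
        p_quantity    in  order_lines.quantity%type,
        p_order_id    out orders.order_id%type
    ) is
    begin
        -- a single unit of work touching more than one table
        insert into orders (order_id, customer_id, order_date)
        values (orders_seq.nextval, p_customer_id, sysdate)
        returning order_id into p_order_id;

        insert into order_lines (order_id, product_id, quantity)
        values (p_order_id, p_product_id, p_quantity);
    end place_order;
end order_api;
/

The caller sees one business operation and never needs to know how many tables sit behind it.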

Part of the Designing PL/SQL Programs series

Been There

Jonathan Lewis - Sun, 2017-12-31 04:51

It’s the end of the year and time for a retrospective of some sort so I was thinking of listing the top 10 most popular pages on my blog, but Connor McDonald beat me to it, so I decided to see if I could remember all the countries I’d visited since starting to work with the Oracle software, and here’s the list in alphabetical order:

Antigua
Australia
Austria
Belgium
Bosnia
Brunei
Bulgaria
Canada
China
Croatia
Czech Republic
Denmark
Dubai
Egypt
Estonia
Finland
France
Germany
Greece
Hungary
Iceland
India
Indonesia
Ireland
Israel
Italy
Japan
Latvia
Lithuania
Malaysia

A few of these were holidays rather than work, and I may have forgotten a couple, so if you’ve seen me in your country and it’s not on the list let me know.

The list can be a bit of a nuisance, I had to list “all the countries you’ve visited in the last 10 years” for both the US and Russian visas: the US form only allowed for 5 countries and the Russian one for 40; and the US expected me to list EVERY visit, with dates and city!

 


12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS

Yann Neuhaus - Sat, 2017-12-30 13:54

In the previous post https://blog.dbi-services.com/12c-multitenant-internals-pdb-replay-ddl-for-common-users I’ve done some DDL on a common user to show how this is replayed later for PDBs that were not opened at that time. But what happens when one of the DDL fails on one PDB?

PDB$LASTREPLAY

In the last post, the C##USER1 common user was created and all pluggable databases (PDB1 with con_id=3 and PDB2 with con_id=4) were opened and synchronized:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

When REPLAY# in the PDB is equal to the CDB$ROOT one, this means that there are no additional statements to replicate on the PDB.

I have PDB1 opened read write and PDB2 in read only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ ONLY NO

For the demo my user’s default tablespace is SYSTEM:

SQL> select con_id,username,common,default_tablespace from cdb_users where username='C##USER1' order by 1;
 
CON_ID USERNAME COMMON DEFAULT_TABLESPACE
------ -------- ------ ------------------
1 C##USER1 YES SYSTEM
3 C##USER1 YES SYSTEM
4 C##USER1 YES SYSTEM

Failure in opened containers

I want to change the default tablespace for C##USER1 and I have a USERS tablespace in CDB$ROOT (but not in the PDBs):

SQL> alter user C##USER1 default tablespace USERS;
 
Error starting at line : 50 File @ common-users-pdb-sync.sql
In command -
alter user C##USER1 default tablespace USERS
Error report -
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

As we have seen in the last post, the DDL is executed on all containers that are opened read write. Here it is fine on CDB$ROOT but fails on PDB1.

Then I create the USERS tablespace in PDB1:

SQL> alter session set container=PDB1;
Session altered.
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB1/users.dbf' size 5M;
Tablespace USERS created.
SQL> alter session set container=CDB$ROOT;
Session altered.

And now, the statement is successful in CDB$ROOT, replicated on PDB1:

SQL> alter user C##USER1 default tablespace USERS;
User C##USER1 altered.

This is nice: the statement either succeeds in all containers or fails. When it is successful, the statements are recorded in PDB_SYNC$:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 1467010 29-dec-17 09:27:01 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@

Failure in replay at open for closed containers

But PDB2 is not synchronized because it was not opened read write:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

But I don’t have a USERS tablespace in PDB2, so the replay will fail:

SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
 
SQL> alter pluggable database PDB2 open;
ORA-24344: success with compilation error
Pluggable database PDB2 altered.

This is a warning only. The SQLcl feedback is a bit misleading, mentioning a compilation error because this is where we used to have warnings, but the SQL*Plus message is clearer:

SQL> alter pluggable database PDB2 open;
Warning: PDB altered with errors.

The PDB2 cannot be left closed, because you need to create a tablespace here. But it cannot be opened to everyone, because it is not in sync with CDB$ROOT. So what happens is that the PDB is opened in restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE YES

Note that if you look at PDB_SYNC$ in the PDB at that time, it looks like REPLAY# has already increased to 11, but you also see rows for the statement that still has to be run. You have to connect to the PDB directly because containers() does not query containers opened in restricted session mode:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 1469022 29-dec-17 09:27:02 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@
 
SQL> alter session set container=CDB$ROOT;
Session altered.

Actually, the attempt to sync has inserted the statements and pushed the last replay indicator. Now the PDB has all the information to do a sync without the need to go to CDB$ROOT. The DDL was not replayed, but has been stored locally. When the sync is successful, the statements are removed from the local PDB_SYNC$, leaving only the LASTREPLAY indicator.

PDB_PLUG_IN_VIOLATIONS

More info about the warning is stored in PDB_ALERT$, which you query from PDB_PLUG_IN_VIOLATIONS (the strange name is reminiscent of the TRANSPORT_SET_VIOLATIONS view used by DBMS_TTS):

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR PENDING Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.03.266780000 PM

Here you have the statement that failed and the error number, but no recommended ACTION. However, ORA-959 is “tablespace ‘%s’ does not exist” which gives a clue about the problem encountered.

As the PDB is opened a DBA (with RESTRICTED SESSION privilege) can add the tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB2/users.dbf' size 5M;
Tablespace USERS created.

But you cannot simply disable restricted session:

SQL> alter system disable restricted session;
 
Error starting at line : 74 File @ common-users-pdb-sync.sql
In command -
alter system disable restricted session
Error report -
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
65144. 00000 - "ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted"
*Cause: An attempt was made to disable a restricted session while an unresolved error existed in PDB_PLUG_IN_VIOLATIONS.
*Action: Resolve all of the errors before trying to disable a restricted session.

One solution is to close and open the PDB to get the DDL replay:
SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
SQL> alter pluggable database PDB2 open;
Pluggable database PDB2 altered.

The other solution is to call DBMS_PDB.SYNC_PDB and disable restricted mode:

SQL> exec dbms_pdb.sync_pdb; commit;
PL/SQL procedure successfully completed.
 
SQL> alter system disable restricted session;
System DISABLE altered.

In both case, no warning here, and no restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

and the PDB_PLUG_IN_VIOLATIONS is updated to flag the issue as resolved:

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR RESOLVED Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.04.093659000 PM

At that time, the local PDB_SYNC$ table in PDB2 contains only the PDB$LASTREPLAY row, with the same value as in the CDB$ROOT table. The rows with the statements have been deleted once the DDL has been successfully replayed:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11

So what?

The mechanism is simple: record what is done in CDB$ROOT, replicate it in the PDBs when possible (opened read-write), try to replay it, and mark the last replay step. For containers that were not writeable, at open time the DDL is replicated to the PDBs that lag behind CDB$ROOT and the replay step is updated. Then the DDL is replayed. When successful, the statement is removed from the replicated DDL. When it fails, you get a warning, a message in PDB_PLUG_IN_VIOLATIONS, and the PDB is opened in restricted session mode to let you solve the problem.
If you can fix the issue so that the DDL to be replayed is successful, then you can just sync and disable restricted session, or simply close and re-open the PDB.
If you can’t fix it, I suppose you need to hack the statements in the local PDB_SYNC$, with Oracle Support agreement of course, and make sure that you arrive at a state which is consistent with the other containers, especially CDB$ROOT.
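Based on the queries above, here is a quick sketch to see which opened containers lag behind CDB$ROOT (remember that containers() cannot reach closed or restricted PDBs):

SQL> select con_id, replay# from containers(pdb_sync$) where name='PDB$LASTREPLAY' and opcode=-1 order by con_id;

Any container showing a lower REPLAY# than CDB$ROOT still has DDL waiting to be replayed at its next read-write open.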

 

Cet article 12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS est apparu en premier sur Blog dbi services.

Docker-Swarm: Running with more than one manager-node / How to add a secondary manager or multiple managers

Dietrich Schroff - Sat, 2017-12-30 06:41
Adding additional masters to a docker swarm is very easy. I just followed the documentation:

On the manager i ran the following command:
alpine:~# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-5nrni6ksqnkljvqpp59m5gfh1 192.168.178.46:2377

And on the node:
node03:~# docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-5nrni6ksqnkljvqpp59m5gfh1 192.168.178.46:2377

This node joined a swarm as a manager.

The new state of the cluster shows:
alpine:~# docker  info | grep -A 23 Swarm
Swarm: active
 NodeID: wy1z8jxmr1cyupdqgkm6lxhe2
 Is Manager: true
 ClusterID: wkbjyxbcuohgdqc3amhl9umlq
 Managers: 2
 Nodes: 4

 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.178.46
 Manager Addresses:
  192.168.178.46:2377
  192.168.178.50:2377
If you want to use a new token just run:
alpine:~# docker  swarm join-token --rotate  manager
Successfully rotated manager join token.

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-cgy143mwghbfoozt0b8li2587 192.168.178.46:2377

nvarchar2

Jonathan Lewis - Sat, 2017-12-30 06:08

Here’s an odd little quirk that appeared when I was playing around with default values just recently. I think it’s one I’ve seen before, I may even have written about it many years ago but I can’t find any reference to it at present. Let’s start with a script that I’ll run on 12.2.0.1 (the effect does appear on earlier versions):


rem
rem     Script:         nvarchar2_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2017
rem

create table t1 (
        column1  varchar2(10),
        column2  nvarchar2(10)
);

create table t2 (
        column1  varchar2(10)
);

alter table t2 add column2 nvarchar2(10);

create table t3 (
        column1  varchar2(10),
        column2  nvarchar2(10) default 'xxxxxxxx'
);

create table t4 (
        column1  varchar2(10)
);

alter table t4 add column2 nvarchar2(10) default 'xxxxxxxx';

insert into t1(column1) values('a');
insert into t2(column1) values('a');
insert into t3(column1) values('a');
insert into t4(column1) values('a');

All I’ve done is create 4 tables which, when described, will all look the same:


 Name                    Null?    Type
 ----------------------- -------- ----------------
 COLUMN1                          VARCHAR2(10)
 COLUMN2                          NVARCHAR2(10)

There is a significant difference between the first two and the last two, of course, thanks to the specification of a default value which means that the inserts will produce two possible results: the first two tables will have nulls in column2; the last two will have the nvarchar2 equivalent of ‘xxxxxxxx’ which, in my instance, will be a string of 16 bytes: “0,78,0,78,0,78,0,78,0,78,0,78,0,78,0,78”.
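If you want to see that 16-byte value for yourself (my addition, not part of the original script), a quick hex dump shows it:


select dump(column2, 16) dump_hex from t4;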

Surprisingly, though, there is a dramatic oddity between t3 and t4 which shows up when I query user_tab_cols:

select
        table_name, column_id, column_name,  segment_column_id, data_default
from    user_tab_cols
where   table_name like 'T_'
order by
        table_name, column_id
;

TABLE_NAME            COLUMN_ID COLUMN_NAME          SEGMENT_COLUMN_ID DATA_DEFAULT
-------------------- ---------- -------------------- ----------------- --------------------
T1                            1 COLUMN1                              1
                              2 COLUMN2                              2

T2                            1 COLUMN1                              1
                              2 COLUMN2                              2

T3                            1 COLUMN1                              1
                              2 COLUMN2                              2 'xxxxxxxx'

T4                            1 COLUMN1                              1
                              2 COLUMN2                              3 'xxxxxxxx'
                                SYS_NC00002$                         2

Table t4 has acquired two columns – a hidden column (which physically exists as the second column in the stored data and is declared as raw(126)) and the column which I had specified. You’ll note that the test shows two differences that may be significant: comparing t3/t4 we see that adding, rather than initially defining, the nvarchar2() column introduces the extra column; comparing t2/t4 we see that adding a varchar2() rather than an nvarchar2() doesn’t produce the same effect. Tentative assumption, therefore, is that there is something special about adding nvarchar2() columns.

Casting my mind back to various customers who have gone through multiple upgrades of 3rd party applications that invariably seem to add columns to tables, I wondered whether this extra column appeared every time you added an nvarchar2(). I’d not noticed anything in the past that suggested this might be the case, but it’s obviously worth checking: and in my simple tests it looked as if Oracle created just one extra column and used it to capture a value that seemed to be determined by the number and location of columns that had been added.

It’s a curiosity, and leaves room for further investigation – so if anyone has links to related articles please feel free to add them in the comments.

 


Getting started with Oracle Database in a Docker container!

Amis Blog - Sat, 2017-12-30 05:33

One of the benefits of using Docker is quick and easy provisioning. I wanted to find out first-hand if this could help me get an Oracle Enterprise Edition database quickly up and running for use in a development environment. Oracle provides Docker images for its Standard and Enterprise Edition database in the Oracle Container Registry. Lucas Jellema has already provided two blogs on this (here and here) which have been a useful starting point. In this blog I’ll describe some of the choices to make and challenges I encountered. To summarize, I’m quite happy with the Docker images in the registry as they provide a very easy way to automate the install of an EE database. You can find a Vagrant provisioning shell script with the installation of Docker and Docker commands to execute here and a description on how to use it here.

Docker

Installing Docker on Oracle Linux 7

Why Docker

Preparing for this blog was my first real Docker experience outside of workshops. The benefits of Docker I mainly appreciated during this exercise is that

  • Docker uses OS-level virtualization (containers), which is more lightweight than full virtualization on for example VirtualBox or VMWare.
  • The installation of a product inside the container is already fully scripted if you have a Docker image or Dockerfile. There are a lot of images and Dockerfiles available. Also provided and supported by software vendors such as Oracle.
  • The Docker CLI is very user friendly. For example, you can just throw away your container and create a new one or stop it and start it again at a later time. Starting a shell within a container is also easy. Compare this to for example VBoxManage.

In order to install Docker on Oracle Linux 7, you need to do some things which are described below.

Preparing a filesystem

Docker images/containers are created in /var/lib/docker by default. You do not need to specifically create a filesystem for that, however, Docker runs well on a filesystem of type BTRFS. This is however not supported on Oracle Linux. Docker has two editions. Docker CE and Docker EE. Docker CE is not certified for Oracle Linux while EE is. For Docker CE, BTRFS is only recommended on Ubuntu or Debian and for Docker EE, BTRFS is only supported on SLES.

When you do want to use a BTRFS partition (at your own risk), and you want to automate the installation of your OS using Kickstart, you can do this like:

part btrfs.01 --size=1000 --grow
btrfs /var/lib/docker --label=docker btrfs.01

See a complete Kickstart example here for Oracle Linux 7 and the blog on how to use the Kickstart file with Packer here.

Enable repositories

Docker is not present in a repository which is enabled by default in Oracle Linux 7. You can automate enabling them by:

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum-config-manager --enable ol7_UEKR4
Install Docker

Installing Docker can be done with a single command:

yum install docker-engine btrfs-progs btrfs-progs-devel -y

If you’re not using BTRFS, you can leave those packages out.

Start the Docker daemon

The Docker CLI talks to a daemon which needs to be running. Starting the daemon and making it start on boot can be done with:

systemctl start docker
systemctl enable docker
Allow a user to use Docker

You can add a user to the docker group in order to allow it to use docker. This is however a bad practice since the user can obtain root access to the system. The way to allow a non-root user to execute docker is described here. You allow the user to execute the docker command using sudo and create an alias for the docker command to instead perform sudo docker. You can also tune the docker commands to only allow access to specific containers.

Add to /etc/sudoers

oracle ALL=(ALL) NOPASSWD: /usr/bin/docker

Create the following alias

alias docker="sudo /usr/bin/docker"
Oracle database

Choosing an edition

Why not XE?

My purpose is to automate the complete install of SOA Suite from scratch. In a previous blog I described how to get started with Kickstart, Vagrant, Packer to get the OS ready. I ended in that blog post with the installation of the XE database. After the installation of the XE database, the Repository Creation Utility (RCU) needs to be run to create tablespaces, schemas, tables, etc for SOA Suite. Here I could not continue with my automated install since the RCU wants to create materialized views. The Advanced Replication option however is not part of the current version of the XE database. There was no automated way to let the installer skip over the error and continue as you would normally do with a manual install. I needed a non-XE edition of the database! The other editions of the database however were more complex to install and thus automate. For example, you need to install the database software, configure the listener, create a database, create scripts to start the database when the OS starts. Not being a DBA (or having any ambitions to become one), this was not something I wanted to invest much time in.

Enter Oracle Container Registry!

The Oracle Container Registry contains preconfigured images for Enterprise Edition and Standard Edition database. The Container Registry also contains useful information on how to use these images. The Standard Edition database uses a minimum of 4Gb of RAM. The Enterprise Edition database has a slim variant with less features but which only uses 2Gb of RAM. The slim image also is a lot smaller. Only about 2Gb to download instead of nearly 5Gb. The Standard Edition can be configured with a password from a configuration file while the Enterprise Edition has the default password ‘Oradoc_db1’. The Docker images can use a mounted share for their datafiles.

Create an account and accept the license

In order to use the Container Registry, you have to create an account first. Next you have to login and accept the license for a specific image. This has been described here and is pretty easy.

After you have done that and you have installed Docker as described above, you can start using the image and create containers!

Start using the image and create a container

First you have to login to the container registry from your OS. This can be done using a command like:

docker login -u maarten.smeets@amis.nl -p XXX container-registry.oracle.com

XXX is not my real password and I also did not accidentally commit it to GitHub. You should use the account here which you have created for the Container Registry.

I created a small configuration file (db_env.dat) with some settings. These are all the configuration options which are currently possible from a separate configuration file. The file contains the below 4 lines:

DB_SID=ORCLCDB
DB_PDB=ORCLPDB1
DB_DOMAIN=localdomain
DB_MEMORY=2GB

Next you can pull the image and run a container with it:

docker run -d --env-file db_env.dat -p 1521:1521 -p 5500:5500 -it --name dockerDB container-registry.oracle.com/database/enterprise:12.2.0.1-slim

The -p options specify port mappings. I want port 1521 and port 5500 mapped to my host (VirtualBox, Oracle Linux 7) OS.

You can see if the container is up and running with:

docker ps

You can start a shell inside the container:

docker exec -it dockerDB /bin/bash

I can easily stop the database with:

docker stop dockerDB

And start it again with

docker start dockerDB

If you want to connect to the database inside the container, you can do so by using service name ORCLPDB1.localdomain, user SYS, password Oradoc_db1, hostname localhost (when running on the VirtualBox machine) and port 1521. For the RCU, I created an Oracle Wallet file from the RCU configuration wizard and used that to automate the RCU and install the SOA Suite required artifacts in the container database. See here.
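For example, from the VirtualBox machine (a sketch based on the settings above; adjust if you changed db_env.dat):

sqlplus sys/Oradoc_db1@//localhost:1521/ORCLPDB1.localdomain as sysdba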

Finally

I was surprised at how easy it was to use the Docker image from the registry. Getting Docker itself installed and ready was more work. After a container is created based on the image, managing it with the Docker CLI is also very easy. As a developer this makes me very happy and I recommend other developers to try it out! There are some challenges though if you want to use the images on larger scale.

Limited configuration options

Many customers use different standards. The Docker image comes with a certain configuration and can be configured only in a (very) limited way by means of a configuration file (as shown above). You can mount an external directory to store data files.

Limitations in features

Also, the Docker container database can only run one instance, cannot be patched and does not support Dataguard. I can imagine that in production, not being able to patch the database might be an issue. You can however replace the entire image with a new version and hope the new version can still use the old datafiles. You have to check this though.

Running multiple containers on the same machine is inefficient

If you have multiple Oracle Database containers running at the same time on the same machine, you will not benefit from the multitenancy features since every container runs its own container database and pluggable database. Every container also runs its own listener.

The post Getting started with Oracle Database in a Docker container! appeared first on AMIS Oracle and Java Blog.

12c Multitenant internals: PDB replay DDL for common users

Yann Neuhaus - Fri, 2017-12-29 16:05

In multitenant, you can create common Users, Roles, and Profiles. You create them in CDB$ROOT, with the CONTAINER=ALL clause (which is optional because it is the only possible value when connected to CDB$ROOT) but they are visible to all containers. As the goal of multitenant is to avoid to duplicate common metadata to all containers, You may think that they are visible through those magic metadata links. But that’s actually wrong: they are simply replicated with a very simple mechanism: the DDL for common objects is replayed into each user PDB.

I’m connected to CDB2’s CDB$ROOT and I have two pluggable databases:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED

PDB1 is opened and PDB2 is closed.

PDB_SYNC$

In this example, I’ll query PDB_SYNC$ which is the table where Oracle stores all DDL for common users, roles, or profiles in order to be able to replay it later:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

I excluded the bitand(flags,8)=8 rows because they concern application containers. I query with the containers() clause to show the con_id, but here only for con_id=1, which is CDB$ROOT.

You can see some DDL for CTXSYS recorded on January 26th, which is the day when this release (12.2.0.1) was built: I used a template with datafiles to create the CDB with DBCA. And you see the DDL to unlock SYS and SYSTEM on December 23rd, when I created the database. You can also see that the statements are ordered in sequence by REPLAY#.

More interesting is the row with OPCODE=-1, named PDB$LASTREPLAY, which holds the last value of REPLAY#. This means that in this container, CDB$ROOT, all statements with REPLAY#<=5 have been run.

With a similar query, I query the opened PDBs:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

There is only one row here, in CON_ID=3, which is PDB1: the PDB$LASTREPLAY row mentioning that all statements up to REPLAY#=5 have also been run in this container.
I don’t see PDB2 (CON_ID=4) here because the containers() clause cannot query closed containers.
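If you only want to compare the watermarks of CDB$ROOT and the opened PDBs, a more compact variant of the query (my addition, using the same columns as above) is:

SQL> -- one row per queryable container, showing how far it has been synchronized
SQL> select con_id, replay# from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id;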

CONTAINER=ALL DDL

I’ll run some common DDL to create a profile, a role and a user:

SQL> create profile C##PROFILE1 limit inactive_account_time 15 container=all;
Profile C##PROFILE1 created.
 
SQL> create role C##ROLE1 container=all;
Role C##ROLE1 created.
 
SQL> create user C##USER1 identified by oracle container=all;
User C##USER1 created.
 
SQL> alter user C##USER1 profile C##PROFILE1;
User C##USER1 altered.
 
SQL> grant C##ROLE1 to C##USER1 container=all;
Grant succeeded.

The C## prefix is mandatory to isolate the common user namespace. You can change it with the COMMON_USER_PREFIX parameter. You can even set it to the empty string, but then you risk namespace collisions when you plug a PDB between CDBs having different common profiles or roles.
CONTAINER=ALL is the default and the only possibility when connected to CDB$ROOT, so it is optional. I recommend mentioning it explicitly in order to avoid problems when running the same DDL in CDB$ROOT and in PDBs.
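As a side note, and not something from the original demo, here is one way to check and change this prefix; I assume an spfile is in use, since the parameter is not dynamic:

SQL> -- display the current common user prefix (the default is C##)
SQL> show parameter common_user_prefix
SQL> -- change it; the new value is taken into account at the next instance restart
SQL> alter system set common_user_prefix='C##' scope=spfile;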

All those DDL have been recorded into PDB_SYNC$ and the REPLAY# has been increased:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

PDB1 (CON_ID=3) was opened read write, and then has been synchronized (the DDL has been run in the container to create the same profile, role and user) and the PDB$LASTREPLAY has been updated in this container to show that all has been done:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
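As a quick sanity check, which is not part of the original demo, you can also verify that the replicated user is visible in the local dictionary of every opened container:

SQL> -- one row per opened container where C##USER1 exists; COMMON should be YES
SQL> select con_id, username, common from cdb_users where username='C##USER1' order by con_id;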

MOUNTED or READ ONLY

I open PDB2 read only because I want to see what is in PDB_SYNC$ there. But READ ONLY means that the DDL cannot be replayed, because no write is allowed in the local dictionary.

SQL> alter pluggable database PDB2 open read only;
Pluggable database PDB2 altered.

Running the same query as above, I can see that PDB2 (CON_ID=4) is synchronized only up to the statements with REPLAY#=5, because my DDL was not replicated there:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

Sync at OPEN

When I open PDB2 in read write mode, the DDL can be synchronized:

SQL> alter pluggable database PDB2 open read write force;
Pluggable database PDB2 altered.

At open, the DDL from REPLAY#>5 has been replayed and once opened the PDB is in sync with CDB$ROOT:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

So what?

The common users, roles and profiles are not stored only in CDB$ROOT to be shared, but rather replicated to all PDBs. The DDL is replicated synchronously to all pluggable databases that are opened read write, and stored in the CDB$ROOT PDB_SYNC$ table to be replayed later, when non-synced PDBs are opened. I’ll show in the next post what happens when the DDL is in error.

Note that even when all pluggable databases are opened read write, the DDL statements remain stored rather than being purged as soon as they have been replayed on all PDBs, because they are still needed when you create a new PDB and open it: PDB$SEED has REPLAY#=0 for PDB$LASTREPLAY, which means that all statements will be replayed.
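One way to verify this, not shown in the original post and assuming you are connected as SYS, is to read the watermark row directly in the seed, which is open read only but can still be queried:

SQL> -- switch to the seed and look at its replay watermark (expected: REPLAY#=0)
SQL> alter session set container=PDB$SEED;
SQL> select name, opcode, replay# from pdb_sync$ where opcode=-1;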

This is 12.2.0.1, where this mechanism covers only common users, roles and profiles whose DDL is run in CDB$ROOT. With application containers, more than that is recorded: all DML and DDL run between the ‘begin install/upgrade/patch’ and ‘end install/upgrade/patch’ in the application root. Then, the statements can be replayed into the application PDBs with a simple SYNC command. In the next release (18c) we expect this application root behavior to be ported to CDB$ROOT, so that we don’t have to run catupgrd.sql in all containers. Then the PDB will probably be patched or upgraded when it is opened.
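For illustration only, and not taken from the original post (the application name MYAPP is a placeholder), the application container mechanism mentioned above looks roughly like this:

SQL> -- in the application root: record all statements of an application release
SQL> alter pluggable database application MYAPP begin install '1.0';
SQL> -- ... run the application DDL and DML here ...
SQL> alter pluggable database application MYAPP end install '1.0';
SQL> -- in an application PDB: replay the recorded statements
SQL> alter pluggable database application MYAPP sync;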

 

The post 12c Multitenant internals: PDB replay DDL for common users appeared first on Blog dbi services.

January PeopleTools Training Courses Posted

Jim Marion - Fri, 2017-12-29 13:56

We posted our January PeopleTools training agenda. This month includes our extremely popular Fluid course as well as our new PeopleTools Delta course. Register online at www.jsmpros.com/training-live-virtual/. Our first course is January 8th, which is coming up quickly.

Besides our regular mid-week agenda, we added weekend courses for those who are committed mid-week and can't take time away for training.

The time zone for these courses is designed to catch as many US attendees as possible. If you would prefer another time zone, let me know and we will consider scheduling a future course in a more favorable time zone.

Why Fluid and why PeopleTools Delta? Fluid first: any PeopleSoft 2017 year-in-review post must include Fluid Campus Solutions. Oracle's Campus Solutions team made significant progress in Fluid student self-service, and honestly, I am extremely impressed with it. Because of this progress, many of our customers are currently implementing Fluid student self-service. Likewise, PeopleSoft HCM has published retirement dates for key manager and employee self-service components. Support for most Classic manager self-service components, for example, retires in just a couple of days, and Classic employee self-service retires one year later, on December 31, 2018 (for more details on Classic retirement dates, see My Oracle Support document 1348959.1). If there was ever a time to think about Fluid, that time has come. Now is a great time to learn Fluid so that you are ready for those change requests and implementation details. While there are obvious similarities between Classic and Fluid development, they are very different.

As customers implement Fluid, they will undoubtedly revisit existing customizations. This is where a PeopleTools Delta course becomes important. You could continue down the same path, customizing delivered definitions, or you could investigate new PeopleTools features that allow you to configure (instead of customize) business logic and tailor the user experience. I could recount story after story of customers saving tens to hundreds of thousands of dollars in implementation, customization, and lifecycle management costs because they learned new PeopleTools features.

Does your organization have 8 or more people interested in a training course? If not, do you know 8 or more people from various organizations you could bring together for a class (with virtual training, all attendees can be remote)? If so, we have group rates available. Besides significant savings (quantity discounts), large groups get flexibility and control over the training agenda. Feel free to contact us for more information.
