Feed aggregator

Monitoring AWS Costs

Jeff Kemp - Tue, 2016-06-21 00:38

I’ve been running my Apex sites on Amazon EC2 for many years now, and I’ve gone through a number of infrastructure upgrades and price changes over time. I have some alerts set up, e.g. if a server starts getting very busy or if my estimated charges go over a threshold. Today I got an alert saying my estimated monthly bill will be over $100 which is unusual.

One of the most useful reports in AWS is the EC2 Instance Usage Report (found under Dashboard > Reports > EC2 Instance Usage Report). I tell it to report Cost, grouped by Instance Type, which gives me the following:

aws_instance_usage_report

As you can see, my daily cost was about $1.58, and this shot up on the 16th (note: these rates are for the Sydney region). I was running Oracle on an m1.medium SUSE Linux instance until June 16, when I upgraded it to an m3.medium instance. I have a Reserved Instance (RI) for m1.medium, but not for m3.medium, which is why the cost has shot up. That RI will expire soon; I will purchase an m3.medium RI, which will bring the cost of that instance back down to about $1 per day. Until I do that, I will be charged the “On Demand” rate of $4.63 per day.

I’m also running two t2.nano Amazon Linux instances as my frontend Apache servers. Even though they are the smallest available instance type (nano), they barely register over 1% CPU most of the time. I’ve moved all the DNS entries across to one of those nano instances now, so I will soon decommission one which will save me a few extra dollars per month.

As an Apex developer, outsourcing the hardware-related worries to AWS has been the best decision I’ve made. I’ve only suffered a couple of significant outages to date, and in both instances all my servers were still running without issue when connectivity was restored. I can spin up new instances whenever I want, e.g. to test upgrades (you might notice from the graph that I did a test upgrade on an m3.medium instance on June 14).

In case you’re wondering, the total time I needed to take down my Apex instance, take a snapshot, spin up the new instance, and swap the IP address across to it, was about 30 minutes. And that included about 10 minutes lost because I accidentally picked an incorrect option at one point. Not only that, but my upgrade also included changing from magnetic disk to SSD, which seems a bit faster. Overall I’m pretty happy with all that.


Filed under: Oracle

[EM-01901] Internal error in EM Patch Advisor for Patching using EM12cR5

Arun Bavera - Mon, 2016-06-20 23:33

I am trying to patch the DB 12.1.0.2.160419 (APR2016_PSU) on OEL 6 with Patch 20807398 from Note 2034610.1 using EM12cR5.

OMS-side patches:

OMS PSU APR Patch: 22570344
OMS System Side Bundle Patch: 23218275
Patch: BUG:21364690

Agent-side patches:

Agent-Side 12.1.0.5.160531 (May 31, 2016): Patch 23124338
Agent-Side Monitoring 12.1.0.8.160531 (May 31, 2016): Patch 23217639
Agent-Side Discovery 12.1.0.8.5 (November 30, 2015): Patch 22135601

I get the error below, which points to the OMS home, even though I am analyzing Patch 20807398 against a DB home!

image

The EM logs show the following exception:

2016-06-21 02:21:29,930 [RJob Step 115760] FATAL patch.InitSADB logp.251 - [EM-01901] Internal error in EM Patch Advisor

java.lang.NumberFormatException: For input string: "6000000000093016041902"

at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)

at java.lang.Long.parseLong(Long.java:422)

at java.lang.Long.valueOf(Long.java:525)

I have reviewed these notes:

ERROR: CHK-012: An Unexpected Exception Occurred when Patching Database (Doc ID 1951435.1)

EM 12c: Enterprise Manager 12c Cloud Control Patch Plan Fails when Checking Target Properties with Error: [EM-01901] Internal error in EM Patch A (Doc ID 1946028.1)

Solution:

Download the patch manually to the Software Library; the issue seems to be with downloading the patch automatically from EM.

Categories: Development

Go Mobile with Oracle JET As Easy As 1 2 3

Andrejus Baranovski - Mon, 2016-06-20 18:38
Oracle JET allows you to build and run mobile hybrid applications. It uses Cordova to run on mobile devices and to provide access to device services. This is cool, and what is great about it is that it gets you started with mobile development in minutes. Besides all this, it is free.

I will describe the steps I followed to generate a JET mobile hybrid app and run it in a local browser (in the next posts I will describe how to run it on a simulator and on an actual device).

First of all you must install Node.js and npm on your machine. This gives you a shell environment for the various commands related to JET app generation, setup, build and deployment. Read this article to learn how to do it - Installing Node.js and updating npm.

Next, follow Step 1 and install Cordova (Step 5) from the JET Get Started list.

You are ready to generate the JET mobile hybrid app at this point. If you are on Mac OS, don't forget to use sudo, otherwise there will be permission-related errors. Run Yeoman to generate the app:

yo oraclejet:hybrid JETMobileApp --template=navBar --platforms=ios

At this stage you can choose a predefined JET template; here I'm generating it with the navBar template for the iOS platform. It must complete with Done, without error messages:


Make sure to navigate to the app folder with cd AppName (JETMobileApp in this case):


Build the generated app with grunt. This is the command I used to build it for iOS (you can see from the log that it produces a *.app file, which can be deployed to a mobile device as an application):

grunt build:dev --platform=ios

It must complete with Done, without errors:


Run the application to test it in a local web browser. See more options (deploy to a simulator or device) here - Serve a Hybrid Mobile Application with Grunt. Use this grunt command:

grunt serve --platform=ios --web=true --disableLiveReload=true

I'm running it with disableLiveReload=true for a reason: live reload initialization seems to take a long time to start. The template-based JET mobile hybrid app starts (in a web browser, for testing):


The generated project can be opened in NetBeans; simply select the project from the folder:


Under the src folder you will find the JET content. Try changing the text in any of the generated pages:


Rebuild and serve the application; you should see the changes deployed (for the customers page in this case):

HOWTO solve any problem recursively, PL/SQL edition…

RDBMS Insight - Mon, 2016-06-20 17:47
PROCEDURE solve (my_problem IN varchar2) IS
  my_idea     varchar2(4000);
  my_code     varchar2(4000);
  new_problem varchar2(4000);
BEGIN
  my_idea := have_great_idea (my_problem) ;
  my_code := start_coding (my_idea) ;
  IF i_hit_complications (my_idea)
  THEN 
    new_problem := the_complications (my_idea);
    solve (new_problem);
  ELSE
    NULL; --we will never get here
  END IF;
END solve;

This abuse of recursion was inspired by @ThePracticalDev !

Categories: DBA Blogs

Cloud Integration made easy (in Cloud)

Peeyush Tugnawat - Mon, 2016-06-20 16:32

Integration has always been a critical (but challenging) requirement for an enterprise given the diverse systems and applications in a typical IT landscape.

Today, Software as a Service (SaaS) adoption is driving the need for a cloud based integration solution. When businesses run their applications in the cloud, connecting information and processes between them and to on-premise systems is critical.

Integration is also expected to match the SaaS criteria – instant provisioning, rapid development, maintenance-free and scalable. Oracle Integration Cloud Service is a pure cloud based platform for highly productive, simplified integration in the cloud. 

Oracle Integration Cloud Service (ICS) makes cloud integration extremely easy, not only for Oracle SaaS applications but also for various third-party SaaS applications.

See what ICS can do for you at https://cloud.oracle.com/integration

When changing CURSOR_SHARING takes effect?

Yann Neuhaus - Mon, 2016-06-20 13:15

I usually don’t advise setting CURSOR_SHARING=FORCE, but imagine your application requires it, you forgot it (or tried to do without it) on migration, and then everything is slow. You want to change it, but when does it take effect? New execution? New parse? New session?

EXACT

I have the default value, where the parent cursor is shared only when the sql_text is the same:

SQL> show parameter cursor_sharing
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string EXACT

And I check with a query that the predicate is not changed:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

FORCE

I change it at the system (=instance) level:

SQL> alter system set cursor_sharing=force;
System altered.
 
SQL> show parameter cursor_sharing
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string FORCE

I tested without session cached cursors:

SQL> alter session set session_cached_cursors=0;
Session altered.

and even from another session

SQL> connect / as sysdba
Connected.

But the statement still shows its literal predicate:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

No invalidation, no new cursor. Same old statement.

FLUSH SHARED_POOL

Only when I flush the shared_pool can I execute the statement with the literals replaced:

SQL> alter system flush shared_pool;
System altered.
 
SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"=:SYS_B_0)

If you fear a hard parse fest, you can flush specific cursors. I’ve documented the procedure in a previous post.
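For reference, here is a minimal sketch of that approach (the address and hash_value passed to the purge call are placeholders, to be taken from the first query):

-- locate the cursor to purge
select address, hash_value, sql_text
from v$sqlarea
where sql_text like 'select * from dual where dummy=%';

-- purge just that cursor ('C' = cursor) instead of flushing the whole shared pool
-- (DBMS_SHARED_POOL is created by ?/rdbms/admin/dbmspool.sql if not already installed)
exec dbms_shared_pool.purge('000000008A7B3D58, 1234567890', 'C');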

Autotrace

As a side note, do not rely on autotrace for that

SQL> set autotrace on explain
SQL> select * from dual where dummy='X';
 
D
-
X
 
Execution Plan
----------------------------------------------------------
Plan hash value: 272002086
 
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DUMMY"='X')

Just one more thing that is special with autotrace…

Conclusion

I don’t know exactly how cursor_sharing=force is managed internally. I thought that the literal replacement occurred before searching for the parent cursor. Don’t hesitate to comment here if you know the ‘why’ behind this behavior. My goal here was just to test what has to be done for a cursor_sharing change to take effect immediately.

 

The article When changing CURSOR_SHARING takes effect? appeared first on Blog dbi services.

ASM iostats

Yann Neuhaus - Mon, 2016-06-20 12:15

A few screenshots and a link here. Sysadmins do not like ASM because they don’t have the tools they like to manage the disks. For example, they don’t want to run SQL queries to check performance, and asmcmd iostat is quite limited. Here is a nice way to get I/O statistics easily from the command line.

The perl script is from Bertrand Drouvot (do not miss his twitter profile picture) and is easily downloadable from his blog:
https://bdrouvot.wordpress.com/2013/10/04/asm-metrics-are-a-gold-mine-welcome-to-asm_metrics-pl-a-new-utility-to-extract-and-to-manipulate-them-in-real-time/

It only runs queries on the ASM instance, so there is no risk.
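If you are curious about the kind of query involved, here is a quick sketch of my own (not the script itself) against V$ASM_DISK_IOSTAT, showing cumulative I/O per disk group since instance startup; the script computes deltas over short intervals:

-- run on the ASM instance
select dg.name                                as disk_group,
       sum(io.reads)                          as reads,
       sum(io.writes)                         as writes,
       round(sum(io.bytes_read)/1024/1024)    as mb_read,
       round(sum(io.bytes_written)/1024/1024) as mb_written
from   v$asm_disk_iostat io
join   v$asm_diskgroup dg on dg.group_number = io.group_number
group by dg.name
order by dg.name;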

In order to show the relevance, I took screenshots from this script and from the XtremIO console on a system where all ASM disks, and only those, are on the XtremIO brick, so you can compare statistics from the storage array and from the ASM instance.

Bandwidth

ASMIOScreenshot 2016-06-20 15.18.55

IOPS

ASMIOScreenshot 2016-06-20 15.19.08

Latency

ASMIOScreenshot 2016-06-20 15.19.17

 

The article ASM iostats appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: distributed availability groups

Yann Neuhaus - Mon, 2016-06-20 10:49

This time, I will talk about distributed availability groups. What are they? In short, a group of availability groups. Sounds good, right? But in which cases might we need such an architecture? Firstly, let’s say that distributed availability groups run on top of two distinct availability groups, meaning that they reside on two distinct WSFCs, each with its own quorum and voting management. Referring to the Microsoft documentation here, we may think that this new feature will mainly be used in DR scenarios, but I’m not sure I understand Microsoft’s point in this sentence:

You want the data to continually replicate to the DR site, but you don’t want a potential network problem or issue at the DR site to bring down your primary site

Indeed, we don’t need this new feature to prevent a DR site or a network failure from bringing down the primary site. We may simply exclude the cluster nodes at the DR site from voting. Maybe I missed something here; I will probably come back to this when I get more information.

Moreover, I cannot imagine my customers using an additional cluster just for DR purposes. Even supposing that the DR site is costless from a SQL Server licensing perspective, we would need to maintain a “complex” architecture (a WSFC) only for that.

After discussing with another French MVP, Christophe Laporte (@Conseilit), we began to draw some pictures of potential scenarios where using a DAG may be a good solution. Firstly, let’s say a customer has many applications. Some of them run on the primary site and others on the DR site because there is high latency between the two datacenters. In this specific context, you may implement one availability group in each datacenter for HA purposes and add a distributed availability group for DR.

blog 96 - 7 - DAG scenario

Now let’s say a big company buys another business that includes an existing datacenter infrastructure with availability groups used by the old company. They may want to introduce HA + DR by using a distributed availability group on both sides without making any big changes. Of course, we have to keep in mind the cost of such an architecture …

Probably another advantage of distributed availability groups is that the primary replica only has to send log blocks to one primary replica in the other availability group. Let’s imagine a traditional implementation with 2 synchronous replicas at the primary site and 4 or maybe 6 asynchronous replicas at the DR site used for scale-out reporting. In this case, even if all the read-only replicas are asynchronous, the failure of one may impact the primary replica, because the transaction log file won’t be truncated by log backup operations until we fix the issue.

blog 96 - 8 - DAG scenario

We may potentially have up to 6 replicas that can lead to transaction log management issues in this specific context. Now let’s change the game by including all the read-only replicas in one specific availability group at the DR site, that availability group itself being included in a DAG. The failure of one read-only replica at the DR site then only impacts the primary of the DR availability group.

 

blog 96 - 9 - DAG scenario

I believe that other scenarios are possible and we will discover some of them through experience. Please feel free to comment or add your thoughts!

Ok, it’s time to implement our first DAG. In my lab environment I added two extra virtual machines and then configured another WSFC that includes them. So I finally have an environment with two WSFCs of two nodes each. The first one is already installed and used for direct seeding (see the direct seeding post). We will also leverage direct seeding when implementing the DAG, in order to replicate the WideWorldImporters database between the two availability groups.

But before installing the DAG itself, let’s install the second availability group.

:CONNECT WIN20123SQL16\SQL16

Use master;

-- primary replica
CREATE AVAILABILITY GROUP [testGrp2]   
FOR    
REPLICA ON 
N'WIN20123SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20123SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = MANUAL,  
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20124SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE =MANUAL,   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC);   
GO  

ALTER AVAILABILITY GROUP [testGrp2]    
ADD LISTENER 'lst-testgrp2' 
( 
	WITH IP ( ('192.168.5.121', '255.255.255.0') ) , 
	PORT = 1433);    
GO 

:CONNECT WIN20124SQL16\SQL16

USE master;

-- secondary replica
ALTER AVAILABILITY GROUP [testGrp2] JOIN   
ALTER AVAILABILITY GROUP [testGrp2] GRANT CREATE ANY DATABASE  
GO

And finally the distributed availability group

:CONNECT WIN20121SQL16\SQL16

USE master;

-- Primary cluster 
--DROP AVAILABILITY GROUP [distributedtestGrp]  
CREATE AVAILABILITY GROUP [distributedtestGrp]  
WITH (DISTRIBUTED)   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
	AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
	FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO   

:CONNECT WIN20123SQL16\SQL16

USE master;

-- secondary cluster
ALTER AVAILABILITY GROUP [distributedtestGrp]   
JOIN   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO

 You may notice the special syntax DISTRIBUTED. In addition, the replicas have been replaced by the listeners of each availability group.

blog 96 - 13 - DAG SSMS

One interesting thing here is that we can’t manage the DAG from SSMS: all options are greyed out. So let’s have a look at some DMVs.

select 
	ag.name as group_name,
	ag.is_distributed,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

 

blog 96 - 12 - DMV config

For a DAG, the replica names are the names of the member availability groups.
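To check which availability group currently holds the primary role inside the distributed availability group, a query along these lines (a quick sketch, to be run on the global primary) can help:

select ag.name                        as dag_name,
       ar.replica_server_name         as ag_name,   -- for a DAG this is an AG name, not a server
       ars.role_desc,
       ars.connected_state_desc,
       ars.synchronization_health_desc
from sys.availability_groups as ag
join sys.availability_replicas as ar
    on ar.group_id = ag.group_id
join sys.dm_hadr_availability_replica_states as ars
    on ars.replica_id = ar.replica_id
where ag.is_distributed = 1;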

We may get information about direct seeding between the two availability groups in the same manner as previously:

 

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

blog 96 - 11 - WideWorldImporters

 

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

blog 96 - 10 - WideWorldImporters

Ok, let’s perform a basic test that consists of creating a table and inserting data into the WideWorldImporters database from each listener.

:CONNECT lst-testgrp

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

:CONNECT lst-testgrp2

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

 

Connecting to lst-testgrp…
(1 row(s) affected)
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
Msg 3906, Level 16, State 2, Line 14
Failed to update database “WideWorldImporters” because the database is read-only.
Msg 3906, Level 16, State 2, Line 18
Failed to update database “WideWorldImporters” because the database is read-only.
Disconnecting connection from lst-testgrp2…

Ok, as expected the availability group in DR is in READ ONLY mode. I’m not able to create or update anything from there.

Let’s perform a last test after switching over the DAG from testGrp to testGrp2. From the primary availability group:

ALTER AVAILABILITY GROUP [distributedtestGrp] FORCE_FAILOVER_ALLOW_DATA_LOSS

 

blog 96 - 13 - DAG SSMS after failover

 

This time I cannot insert data from the first availability group testGrp

Connecting to lst-testgrp…
Msg 976, Level 14, State 1, Line 2
The target database, ‘WideWorldImporters’, is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group.  For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
(1 row(s) affected)
Disconnecting connection from lst-testgrp2…

In this blog we have introduced two new features shipped with SQL Server 2016 availability groups. At first glance, they seem to be pretty cool features that will extend the scope of availability group capabilities. I think we will see the pros and cons over time during our other tests and implementations at customer shops. If so, other blog posts will be coming soon.

Stay tuned!

The article SQL Server 2016 AlwaysOn: distributed availability groups appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding

Yann Neuhaus - Mon, 2016-06-20 10:26

In this blog I would like to talk about two new features shipped with SQL Server 2016 for AlwaysOn availability groups. The first one concerns the new direct seeding capability and the second one introduces distributed availability groups (DAG). I will talk about the second one in the next blog post.

First of all, let’s talk about direct seeding. At first glance, this feature seems very interesting because it greatly simplifies the process of adding a highly available database. Indeed, with previous versions, when an application added a database to an availability group, the database administrator had to perform some extra steps to make it highly available, like a backup/restore or initializing the replication process. Let’s say a software vendor wants to install or add a database (SharePoint, for instance).

Enabling automatic seeding for the concerned replicas reduces the amount of work needed to add databases by automating the initialization process a little more. We just have to add the database to the availability group and that’s it: no extra backup/restore tasks, no need to configure a file share for backup/restore, and no manual initialization. Of course, the CREATE and ALTER AVAILABILITY GROUP syntax includes a new per-replica parameter named SEEDING_MODE that has two settings: MANUAL and AUTOMATIC. The former means that we will use the legacy method to initialize databases on secondaries (by using backup/restore) and the latter uses the new automated initialization method, which consists of seeding the database data across the replication network.
Let’s have a quick demo of this new feature. I will configure a basic availability group with two replicas (WIN20121SQL16\SQL16 and WIN20122SQL16\SQL16). You may notice the new parameter SEEDING_MODE = AUTOMATIC, meaning that we will use the new automated method for initializing databases for this availability group. You may also notice that we no longer need to create a “dummy” database before creating the availability group.

On the primary:

-- primary replica
CREATE AVAILABILITY GROUP [testGrp]   
FOR   
REPLICA ON 
N'WIN20121SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20122SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20122SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC);   
GO

System views and DMVs have been updated accordingly. For example, sys.availability_replicas provides a new seeding_mode column.

select 
	ag.name as group_name,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

blog 96 - 1 - availability group and replicas state

Let’s complete the configuration of the availability group by granting the availability group itself the extra permission to create databases on the secondaries.

ALTER AVAILABILITY GROUP [testGrp] JOIN   
ALTER AVAILABILITY GROUP [testGrp] GRANT CREATE ANY DATABASE  
GO

At this point we just have to add the new WideWorldImporters database to the testGrp availability group and our job is over!

-- primary replica add database WideWorldImporters to testGrp
ALTER AVAILABILITY GROUP [testGrp] ADD DATABASE [WideWorldImporters];
GO

The system will then seed the database data to the secondary. We may get information on the seeding progress from two new DMVs: sys.dm_hadr_automatic_seeding and sys.dm_hadr_physical_seeding_stats.

The sys.dm_hadr_automatic_seeding DMV gives information about successful or failed database seedings and the corresponding error messages. Furthermore, the sys.dm_hadr_physical_seeding_stats DMV provides information about currently running seeding operations, like the estimated completion time and related statistics about I/O and network usage.

Here is what I found after adding the database to the availability group:

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

blog 96 - 2 - seeding sessions info

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

 

blog 96 - 3 - seeding progress info

The WideWorldImporters database is not so big by default (roughly 455MB). The estimated duration to seed the database is approximately 1'14''.

Let’s check this with a little math using the transfer rate, estimated at ~42MB/s (the transfer_rate_bytes_per_second column value), and the total size of data to transfer. We may deduce 455 / 42 ~= 10s to transfer all the data.

Let’s now have a look at the SQL error log on the secondary. We may see information related to the seeding data operation.

blog 96 - 6 - import error log secondary 2


The seeding operation occurred between 19:15:53 and 19:16:08, so approximately 15 seconds, which is not far from our previous math. The estimated duration to seed data based on the start and estimated completion times seems to be less accurate than the transfer throughput, but I think I need to perform further tests and investigations before coming to any conclusion.

In short, this is a pretty cool feature, but my guess is that direct seeding may not be suitable for all scenarios. One customer example off the top of my head is big databases running inside availability groups. I had the opportunity to migrate SAP databases (~1TB) to SQL Server 2014 and availability groups: direct seeding would not have been used in this case. Indeed, direct seeding does not use compression by default (you can turn compression on with trace flag 9567), and we can easily imagine that seeding all the data across the network may take a long time. We may change this default behaviour with that trace flag, but at the cost of high CPU consumption, which is probably not a good idea if we plan to use the application immediately after adding the SAP database to the concerned availability group. In addition, using log stream compression may hurt performance with a heavy OLTP workload because it introduces latency by design. There is a tradeoff to find here … Therefore the backup/restore process seems to be the best option, playing with optimization techniques like compression, increasing the number of backup media files, or changing the MAXTRANSFERSIZE and BUFFERCOUNT parameter values; see the example below.
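As an illustration of that last point, here is a hedged example of the kind of backup command I have in mind (the database name, file paths and values are arbitrary and would have to be tuned for the environment):

-- striped, compressed backup with tuned transfer size and buffer count
BACKUP DATABASE [SAPDB]
TO  DISK = N'G:\backup\SAPDB_1.bak',
    DISK = N'G:\backup\SAPDB_2.bak',
    DISK = N'G:\backup\SAPDB_3.bak',
    DISK = N'G:\backup\SAPDB_4.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,   -- 4MB per transfer
     BUFFERCOUNT = 64,
     STATS = 5;
GO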

Another scenario that comes to mind concerns migrations involving thousands of databases at the same time. Which is the fastest way to migrate all these databases? Adding them to the availability group and letting the system seed the database data across the network to each secondary, or copying the database files to each secondary and choosing to skip initial data synchronization? At this point I need to investigate further to answer.

Another important thing is that direct seeding is not part of the availability group wizard GUI. You have probably noticed that I don’t use the wizard in this blog post, and this is intentional. A Connect item has already been created by Brent Ozar here. Please feel free to vote!

See you!

The article SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding appeared first on Blog dbi services.

Services -- 3 : Monitoring Usage of Custom Services

Hemant K Chitale - Mon, 2016-06-20 10:04
In my previous blog post, I had demonstrated a few custom services created and started with DBMS_SERVICE.

Let's look at a couple of examples of monitoring usage of these services.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:51:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:23:50 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> execute dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant select_Catalog_role to hemant;

Grant succeeded.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:52:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:28:01 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t1 tablespace hemant as select * from dba_objects;

Table created.

SQL> insert into obj_t1 select * from obj_t1;

90935 rows created.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:53:54 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Jun 20 2016 22:52:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t2_small tablespace hemant as select * from obj_T1 where rownum < 11;

Table created.

SQL>

SQL> show user
USER is "SYSTEM"
SQL> select sid,serial#, to_char(logon_time,'DD-MON HH24:MI:SS'), service_name
2 from v$session
3 where username = 'HEMANT'
4 order by logon_time
5 /

SID SERIAL# TO_CHAR(LOGON_TIME,'DD-M
---------- ---------- ------------------------
SERVICE_NAME
----------------------------------------------------------------
61 50587 20-JUN 22:52:27
NEW_APP1

76 43919 20-JUN 22:53:54
FINANCE


SQL>


Thus, we can see that V$SESSION tracks the SERVICE_NAME in use --- even though the USERNAME is the same in both sessions, the SERVICE_NAME is different.

SQL> col svc_name format a10
SQL> col stat_name format a25 trunc
SQL> select
2 con_id, service_name SVC_NAME, stat_name, value
3 from v$service_stats
4 where service_name in ('NEW_APP1','FINANCE')
5 and
6 (stat_name like 'DB%' or stat_name like '%block%' or stat_name like 'redo%')
7 order by 1,2,3
8 /

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 FINANCE DB CPU 168973
3 FINANCE DB time 771742
3 FINANCE db block changes 653
3 FINANCE gc cr block receive time 0
3 FINANCE gc cr blocks received 0
3 FINANCE gc current block receive 0
3 FINANCE gc current blocks receive 0
3 FINANCE redo size 100484

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 NEW_APP1 DB CPU 869867
3 NEW_APP1 DB time 17415363
3 NEW_APP1 db block changes 11101
3 NEW_APP1 gc cr block receive time 0
3 NEW_APP1 gc cr blocks received 0
3 NEW_APP1 gc current block receive 0
3 NEW_APP1 gc current blocks receive 0
3 NEW_APP1 redo size 25057520

16 rows selected.

SQL>


So, even some statistics (unfortunately, not all -- the last time I checked in 11.2) are reported at the Service level.  Thus, I can see that the users of NEW_APP1 consumed more CPU and DB Time and generated more changes and redo than users of FINANCE!  (Obviously, V$SERVICE_STATS reports statistics from the beginning of the instance, so you should either use StatsPack (I haven't verified StatsPack reporting of statistics by individual service) or AWR (if you have the Diagnostic Pack licence) or your own collection scripts to report statistics for a specific window of time).
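For example, here is a very simple collection-script sketch (the snapshot table name and the services listed are my own choices): capture V$SERVICE_STATS at the start and at the end of the window of interest and report the delta.

create table svc_stats_snap as
select systimestamp snap_time, service_name, stat_name, value
from   v$service_stats
where  1 = 0;

-- run once at the start and once at the end of the window
insert into svc_stats_snap
select systimestamp, service_name, stat_name, value from v$service_stats;
commit;

-- delta between the first and last snapshots for the custom services
select s2.service_name, s2.stat_name, s2.value - s1.value as delta
from   svc_stats_snap s1, svc_stats_snap s2
where  s1.service_name = s2.service_name
and    s1.stat_name    = s2.stat_name
and    s1.snap_time = (select min(snap_time) from svc_stats_snap)
and    s2.snap_time = (select max(snap_time) from svc_stats_snap)
and    s2.service_name in ('NEW_APP1','FINANCE')
order  by s2.service_name, s2.stat_name;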
.
.
.


Categories: DBA Blogs

What’s in a name? – “Brittany” edition

RDBMS Insight - Mon, 2016-06-20 07:46

In my last post, I loaded US SSA names data into my dev instance to play with. In this post, I’ll play around with it a bit and take a look at the name “Brittany” and all its variant spellings.

I found nearly 100 different spellings of “Brittany” in the US SSA data thanks to a handy regexp:

SELECT name nm, SUM(freq) FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'
GROUP BY name
ORDER BY SUM(freq) DESC;
NM				SUM(FREQ)
------------------------------ ----------
Brittany			   357159
Brittney			    81648
Britney 			    34182
Brittani			    11703
Britany 			     6291
Brittni 			     5985
Brittanie			     4725
Britni				     4315
Brittny 			     3584
Brittaney			     3280
...
Bryttnee			       10
Britttany				7
Brytanie				7
Brittanae				6
Bryttnii				6
...
Brittanii				5
Brittiana				5
 
91 rows selected.

The regexp isn’t perfect. It returns a few uncommon names which aren’t pronounced “Brittany”: “Brittiana”, “Brittiani”, “Britane”, “Brittina”, “Britanya”, “Brittine” – and one I’m not sure about, “Brittnae”. But on the other hand, it did let me discover that 7 “Britttany”s applied for SSNs in 1990. Yes, that’s “Britttany” with 3 “T”s.
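If you did want to drop those false positives, one simple option (rather than trying to perfect the regexp) is to exclude them explicitly:

SELECT name nm, SUM(freq) FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'
 AND UPPER(name) NOT IN
     ('BRITTIANA','BRITTIANI','BRITANE','BRITTINA','BRITANYA','BRITTINE')
GROUP BY name
ORDER BY SUM(freq) DESC;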

Fortunately, all the “non-Brittanys” the regexp returns are quite uncommon and not even in the top 20. So the regexp will do for a graph of the top spellings. Let’s get the data by year and look at the percentage of girls in each year named Brittany/Brittney/Britney/Brittani:

WITH n AS (SELECT name nm, YEAR yr, sex, freq FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'),
y AS (SELECT  YEAR yr, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT y.yr, 
decode(n.nm,'Brittany','Brittany', -- like Brittany Furlan
'Brittney','Brittney', -- like Brittney Griner
'Britney','Britney', -- like Britney Spears
'Brittani','Brittani', -- like Brittani Fulfer
'Other Brits') AS thename,
nvl(100*freq/tot,0) pct  FROM n, y 
WHERE  n.sex(+)=y.sex AND n.yr(+)=y.yr AND y.yr >= 1968
ORDER BY y.yr, nvl(n.nm,' ')

I graphed this in SQL Developer:
britts
From the graph it’s clear that “Brittany” is by far the most popular spelling, followed by “Brittney”. The sum of all Brittany-spellings peaked in 1989, but “Britney” has a sharp peak in 2000 – the year that singer Britney Spears released Oops I Did It Again, “one of the best-selling albums of all time” per Wikipedia.

This makes Brittany, however you spell it, a very early-90s-baby kind of name. “Brittany” was the #3 girls’ name in 1989, behind Jessica and Ashley, and was not nearly as popular in decades before or since. In subsequent posts I’ll look some more at names we can identify with specific decades.

Categories: DBA Blogs

APEX UT - Display Image in a List

Denes Kubicek - Mon, 2016-06-20 07:34
If using the Universal Theme, we can switch from a classic navigation bar to a list and display it where the navigation bar is usually displayed - at the top right of the page. The problem with the list is that it will escape any HTML code in the list entry name. If we want to display an image, we will get the HTML code instead. To get around that problem, we can use the "user defined attributes" of the list and put the image into the "Badge Value". The list will then display the image almost as expected. We need to apply some additional CSS to make the background of the "Badge Value" transparent and to adjust the font to match the rest of the navigation bar list:

.t-Button--navBar .t-Button-badge {background-color: transparent !important; font-weight: 400 !important; font-size: 1.2rem !important}

This is then how it appears:



Here, we can add the "Badge Value":



Finally, here we can change our navigation bar from classic to list or the opposite:

Categories: Development

Oracle Announces Pfizer’s Selection of Oracle Cloud for Clinical Data Management and Trial Management across its Clinical Trial Portfolio

Oracle Press Releases - Mon, 2016-06-20 07:00
Press Release
Oracle Announces Pfizer’s Selection of Oracle Cloud for Clinical Data Management and Trial Management across its Clinical Trial Portfolio

Redwood Shores, Calif.—Jun 20, 2016

Oracle today announced that after a detailed review and selection process, Pfizer has selected Oracle Health Sciences InForm Cloud Service and the Oracle Siebel Clinical Trial Management and Monitoring Cloud Service to help manage and monitor its more than 300 clinical trials a year and continue to provide best-in-class solutions.
 
“Oracle Health Sciences InForm Cloud Service and Oracle Siebel Clinical Trial Management and Monitoring Cloud Service can simplify how we manage both operational and study data for in-house as well as outsourced studies,” said Rob Goodwin, vice president of Pfizer Global Product Development, Center of Excellence. “Clinical teams will be able to access study data through Oracle’s single platform cloud service, eliminating the need to send data back and forth to CROs, saving us time and reducing the cost of our clinical studies.”
 
By using Oracle Health Sciences InForm Cloud Service, Pfizer will be able to take advantage of over 100,000 investigator sites already trained in InForm, while increasing site data satisfaction.
 
The Oracle Health Sciences InForm Cloud Service will enable Pfizer to have greater control over its data and provide measurable efficiency and productivity gains in data management and remote monitoring. In addition, study templates and library management capabilities within Oracle Health Sciences InForm will enable Pfizer to accelerate the study-build process and result in faster trial implementation.
 
“Today, pharma companies are challenged by increasingly complex global clinical trials,” said Steve Rosenberg, senior vice president and general manager, Oracle Health Sciences. “With more than 15 years of electronic data capture experience in the clinical industry, Oracle Health Sciences helps leading pharmas — such as Pfizer — not only to standardize and optimize their clinical study processes, but also to be prepared with value-based clinical innovation solutions that enable them for the future of data collection.”
 
Oracle Health Sciences InForm Cloud Service includes both Oracle Health Sciences InForm and Oracle Health Sciences Data Management Workbench (DMW) to advance standardized end-to-end clinical data collection and management processes that help drive greater efficiencies for both internal and outsourced trials. Utilizing the platform, efficiencies can be gained through the seamless integration of Oracle Health Sciences InForm, Oracle Health Sciences Central Designer, and Oracle Health Sciences Data Management Workbench, enabling data standardization, streamlining collection, consolidation, cleaning, and transformation.
 
Pfizer will also use Oracle Siebel Clinical Trial Management and Monitoring Cloud Service, including Oracle Siebel Clinical Trial Management System, Oracle Health Sciences Clinical Development Analytics, and risk-based monitoring (RBM) functionality to provide new insights into trial management and help increase efficiency of the monitoring workforce. A single set of tools for internal and outsourced trials can simplify data exchange and the interface between investigator sites, CRO partners, and Pfizer. With the selection of Oracle’s platform and RBM solution, Pfizer will be able to execute risk-based monitoring through an integrated approach.
 
The Oracle Siebel Clinical Trial Management and Monitoring Cloud Service, which utilizes Oracle Siebel Clinical Trial Management System and Oracle Health Sciences Clinical Development Analytics, is a solution that combines standardized comprehensive clinical trial capabilities, integrated analytic tools, and data security with a cloud service that enables organizations to manage clinical trials effectively and economically, as well as improve CRO and sponsor relationships from early to late stage clinical trials.
Contact Info
Samantha Thai
Oracle
+1.510.917.5564
samantha.thai@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Samantha Thai

  • +1.510.917.5564

Oracle Gives Partners a Fast Path to Cloud

Oracle Press Releases - Mon, 2016-06-20 07:00
Press Release
Oracle Gives Partners a Fast Path to Cloud Independent Software Vendors Gain New Revenue Opportunities through Oracle Cloud Marketplace

Redwood Shores, Calif.—Jun 20, 2016

The conversion to the cloud is one of the most significant IT transformations in history. IDC predicts that by 2018, at least half of IT spending will be cloud based. To help customers transition to the cloud with proven enterprise applications, while accelerating time-to-market for partners, Oracle PartnerNetwork (OPN) today unveiled Oracle Cloud Platform Ready for Independent Software Vendors (ISVs).

The new offering enables ISV partners to quickly register and publish their Oracle Cloud-compatible applications on the Oracle Cloud Marketplace, allowing them to fast-track new business opportunities in the cloud.  Partners can register here.

“Any new product we develop will go to the Oracle Cloud Marketplace first. We envision in the future that most of our customers will research, validate and request our solutions directly from the Oracle Cloud Marketplace,” said Charles Farnell, CEO, Ventureforth.

Oracle Cloud Platform Ready provides OPN members at Silver level or higher, a simple way to create an initial application listing on Oracle Cloud Marketplace in the Platform and Infrastructure sections in mere minutes. Any partner application that supports an Oracle Cloud compatible operating system or technology product can run on Oracle Cloud without the need for rewrite, and at no additional cost. To speed time-to-market, partners can quickly request access to Oracle Cloud Specialists and other technical resources directly from the Oracle Cloud Platform Ready registration. Additionally, ISVs with an Oracle Cloud Marketplace listing are eligible for the Oracle Cloud ISV Partner of the Year award and may apply for OPN Cloud Standard program benefits, including easy access to Oracle Cloud development and test environments.

“Oracle's Cloud Specialist team provided in-depth technical support during our proof of concept testing that allowed us to create automated deployment scenarios quickly and efficiently,” said Brent Rhymes, Executive Vice President Enterprise Sales & Marketing, Synacor/Zimbra. “Managed Service Providers can create new business opportunities by becoming a Zimbra Hosting Provider in the Oracle Cloud, while customers can deploy and manage their own collaboration solution within minutes. In each case, the flexibility of Zimbra plus the Oracle Cloud means users can start small and grow, or start big and get bigger.” 

In addition to quick access to Oracle Cloud Marketplace and development resources, partners engaging in the new offering will have the benefit of being able to reach Oracle’s expansive customer base, field sales, 25,000 global partners and 15 million developers through enhanced marketing and promotion efforts.

“The cloud represents a huge opportunity for our ISV partner community,” said Dan Miller, Senior Vice President of ISV, OEM and Java Sales, Oracle. “Through the Oracle Cloud Marketplace, and joint initiatives like Oracle Cloud Platform Ready, we are enabling ISV partners to focus more on their unique innovation, while leveraging the Oracle Cloud to get their solutions in front of a broad market of potential users quickly and at scale.”

Contact Info
Candice van der Laan
Oracle
+1.650.506.0814
candice.van.der.laan@oracle.com
About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) Specialized is the latest version of Oracle's partner program that provides partners with tools to better develop, sell and implement Oracle solutions. OPN Specialized offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to differentiate through Specializations. Specializations are achieved through competency development, business results, expertise and proven success. To find out more visit: http://www.oracle.com/partners.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Candice van der Laan

  • +1.650.506.0814

Plan Shapes

Jonathan Lewis - Mon, 2016-06-20 06:58

There are a number of articles, webinars, and blogs online about how to read execution plans, but many of them seem to stop after the minimum description of the simplest type of plan, so I thought I’d throw out a brief comment on a couple of the slightly more complicated things that are likely to appear fairly commonly, because you sometimes find plans with very similar shapes but extremely different interpretations.

First: select with scalar subqueries in the select list (there’s no need to worry about what the table definitions look like):


rem
rem     Script:         plan_shapes.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016
rem

select
        id, n1,
        (select /*+  no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+  no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;


--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |     20 |00:00:00.01 |       8 |
|   1 |  SORT AGGREGATE                      |       |     20 |      1 |     20 |00:00:00.01 |      63 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     20 |      2 |     40 |00:00:00.01 |      63 |
|*  3 |    INDEX RANGE SCAN                  | T2_I1 |     20 |      2 |     40 |00:00:00.01 |      23 |
|   4 |  SORT AGGREGATE                      |       |     20 |      1 |     20 |00:00:00.01 |      83 |
|   5 |   TABLE ACCESS BY INDEX ROWID BATCHED| T3    |     20 |      3 |     60 |00:00:00.01 |      83 |
|*  6 |    INDEX RANGE SCAN                  | T3_I1 |     20 |      3 |     60 |00:00:00.01 |      23 |
|   7 |  TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |     21 |     20 |00:00:00.01 |       8 |
|*  8 |   INDEX RANGE SCAN                   | T1_I1 |      1 |     21 |     20 |00:00:00.01 |       4 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."ID"=:B1)
   6 - access("T3"."ID"=:B1)
   8 - access("ID">=100 AND "ID"<=119)

We have a select statement, with two columns in the select list generated by (correlated) scalar subqueries.

The overall shape of the plan shows the driving query as the last child plan for the SELECT (operations 7-8). The first and second child plans are the plans for the two scalar subqueries in turn (and the order the sub-plans appear is the order of the scalar subqueries in the select list). In this case the main query returned 20 rows (A-Rows=20), and the scalar subqueries executed 20 times each. There are a few other details we could mention, but the key feature of the plan is that the driver is the last sub-plan.

Second: update with scalar subqueries:


update t1
set
        n1 = (select max(n1) from t2 where t2.id = t1.n1),
        v1 = (select max(v1) from t3 where t3.id = t1.n1)
where
        id between 1000 and 1999
;

---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                      |       |      1 |        |      0 |00:00:00.13 |   10361 |
|   1 |  UPDATE                               | T1    |      1 |        |      0 |00:00:00.13 |   10361 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |   1001 |   1000 |00:00:00.01 |      22 |
|*  3 |    INDEX RANGE SCAN                   | T1_I1 |      1 |   1001 |   1000 |00:00:00.01 |       4 |
|   4 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.04 |    3672 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    916 |      2 |   1832 |00:00:00.03 |    3672 |
|*  6 |     INDEX RANGE SCAN                  | T2_I1 |    916 |      2 |   1832 |00:00:00.01 |    1840 |
|   7 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.05 |    4588 |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED| T3    |    916 |      3 |   2748 |00:00:00.04 |    4588 |
|*  9 |     INDEX RANGE SCAN                  | T3_I1 |    916 |      3 |   2748 |00:00:00.01 |    1840 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1000 AND "ID"<=1999)
   6 - access("T2"."ID"=:B1)
   9 - access("T3"."ID"=:B1)


In this statement we update two columns by executing (correlated) scalar subqueries. The most important feature of interpreting this plan is that it is the exact opposite of the earlier select statement. In this plan the first subplan is the driving part of the statement – it’s the part of the plan that tells us how to find rows to be updated (and we find 1,000 of them); the 2nd and 3rd sub-plans correspond to the subqueries in the two columns whose value we set. In this case we happen to get some benefit from scalar subquery caching so the two subqueries each run 916 times. (The fact that both subqueries run the same number of times is not a coincidence – the caching benefit is dependent on the value(s) used for driving the subqueries, and that’s the t1.n1 column in both cases.)

Finally: because people do all sorts of complicated things, and it’s easy to get deceived if you read the plan without seeing the SQL, one more example (starting with the plan):


---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                      |       |      1 |        |      0 |00:00:00.14 |   11257 |
|   1 |  UPDATE                               | T1    |      1 |        |      0 |00:00:00.14 |   11257 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |   1001 |   1000 |00:00:00.01 |      22 |
|*  3 |    INDEX RANGE SCAN                   | T1_I1 |      1 |   1001 |   1000 |00:00:00.01 |       4 |
|   4 |   SORT AGGREGATE                      |       |   1103 |      1 |   1103 |00:00:00.06 |    5519 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T3    |   1103 |      3 |   3306 |00:00:00.04 |    5519 |
|*  6 |     INDEX RANGE SCAN                  | T3_I1 |   1103 |      3 |   3306 |00:00:00.01 |    2213 |
|   7 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.11 |    9191 |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    916 |      2 |   1832 |00:00:00.03 |    3672 |
|*  9 |     INDEX RANGE SCAN                  | T2_I1 |    916 |      2 |   1832 |00:00:00.01 |    1840 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1000 AND "ID"<=1999)
   6 - access("T3"."ID"=:B1)
   9 - access("T2"."ID"=:B1)


This plan looks very like the previous plan – it’s an update so the first sub-plan will be identifying the rows to be updated, but then what ?

The easiest assumption – always one worth considering, provided you remember that there are other possibilities – is that we have two subqueries executing for each row we update – but in this example there’s a little clue in the numbers that this isn’t what the statement does. The driving query (operations 2 – 3) identifies 1,000 rows, so how can operations 4 – 6 start 1,103 times each? Something more complicated is going on.

I’ve combined the features of the first two examples. I have a single subquery in the set clause, but it contains a scalar subquery in its select list – so operations 4 – 9 are the sub-plan for a single select statement – and a select statement with a scalar subquery in the select list puts the main plan last (operations 7 – 9) and its scalar subquery sub-plan first (operations 4 – 6). Here’s the SQL:


update t1
set
        (n1,v1) = (
                select
                        max(n1),
                        max((select /*+  no_unnest */ max(v1) from t3 where t3.id = t2.n1))
                from    t2
                where   t2.id = t1.n1
        )
where
        id between 1000 and 1999
;

What happens here is that the driving body of the update statement identifies 1,000 rows, so the scalar subquery against t2 should execute 1,000 times; thanks to scalar subquery caching, though, it only executes 916 times. Each time it executes it finds 2 rows, and for each of those rows it executes the scalar subquery against t3 which, fortunately, also benefits from its own scalar subquery caching and so runs only 1,103 times in total.

I could go on, of course, with increasingly complex examples – for example scalar subqueries that contain decode() calls with scalar subqueries as their inputs, or where clauses which compare the results of scalar subqueries, and so on. It can get quite difficult to see, even with the SQL in front of you, what the plan is trying to tell you, so I’ll leave you with one thought: always use the qb_name() hint to name every single subquery so that, if interpretation of the plan gets a bit difficult, you can use the ‘alias’ formatting option in the call to dbms_xplan to connect each table reference in the plan with the query block it came from in the query.
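
As a rough illustration of that advice (the query block names and the small driving range below are purely illustrative), a query of the same shape as the earlier select might be written and reported along these lines:


select
        /*+ qb_name(main) gather_plan_statistics */
        t1.id,
        (select /*+ qb_name(get_n1) */ max(n1) from t2 where t2.id = t1.n1)  n1_max,
        (select /*+ qb_name(get_v1) */ max(v1) from t3 where t3.id = t1.n1)  v1_max
from    t1
where   t1.id between 1000 and 1019
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last alias'));


The ‘alias’ option adds a “Query Block Name / Object Alias” section to the output, so each operation in the plan can be tied back to the named query block it belongs to.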

 

 


Oracle and Mellanox to Collaborate on High Speed Cloud Networking Standards

Oracle Press Releases - Mon, 2016-06-20 06:00
Press Release
Oracle and Mellanox to Collaborate on High Speed Cloud Networking Standards Companies to Work on Open Standards, Interoperability, and Backward Compatibility for InfiniBand products

Frankfurt – ISC—Jun 20, 2016

Oracle and Mellanox today announced a partnership to drive interoperability between their products and to jointly develop further industry standards for InfiniBand—a high speed Cloud networking technology.

Products supporting the InfiniBand standard not only provide extreme bandwidth and ultra-low latency for fast, agile, and secure cloud infrastructure, but they are also based upon industry standards and available from multiple vendors. Oracle and Mellanox Enhanced Data Rate (EDR) 100G InfiniBand products are engineered to the InfiniBand™ Architecture Specification Release 1.3 and allow customers to deploy a 100Gb/s fabric that is backwards compatible with the previous generations of InfiniBand-based systems (FDR, QDR, DDR, and SDR), enabling customers to protect their investments in InfiniBand-enabled applications.

Oracle’s core strategy is to enable its cloud solutions and Engineered Systems with open technologies that deliver the highest performance, efficiency, and security. Due to its superior performance, efficiency, and scalability, InfiniBand is a dominant fabric for building Super Computers, which rapidly process massive amounts of data. Oracle has chosen InfiniBand as the foundational technology for Engineered Systems to provide unparalleled performance and scale to Oracle’s suite of business critical applications.

“Customers using InfiniBand as the interconnect for clouds and high performance computing can be certain that as new speeds and features are incorporated into the InfiniBand architecture that they will be able to continue to work with the supplier of their choice and that new products will seamlessly integrate with their existing InfiniBand infrastructures,” said Raju Penumatcha, senior vice president, Netra and Networking, Oracle. “This continued partnership represents an enduring commitment to provide customers with the technologies and standards needed for true enterprise-class cloud infrastructures that enable them to accelerate application performance and quickly respond to changing business needs.”

“With a standards-based architecture, and the most advanced roadmap that is guided by the InfiniBand Trade Association, and a mature, open source software stack, InfiniBand offers customers the best of performance and interoperability with the support of a large community of suppliers, developers, and users,” said Gilad Shainer, vice president, marketing at Mellanox Technologies. “This partnership enables us to continue to optimize and expand InfiniBand’s role as a key technology for any cloud environment.”

Oracle EDR InfiniBand Fabric is on display this week at: ISC High Performance (Booth 910). For more information, please visit: www.oracle.com/goto/InfiniBand.

Mellanox EDR InfiniBand solutions are on display this week at the International Supercomputing Conference (booth 905). For more information, please visit: www.mellanox.com.

Contact Info
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage, telecom and financial services. More information is available at: http://www.mellanox.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Sarah Fraser

  • +1.650.743.0660

PeopleTools October 2014 CPU Security Patch

The prior blog post (PeopleSoft Security Patches) reviewed PeopleSoft CPU patching. Worthy of its own post is the October 2014 CPU. A show of hands back in April at our PeopleSoft database security presentation at Collaborate 2016 (PeopleSoft Database Security) further confirmed Integrigy’s research that a surprising number of PeopleSoft installations have not applied this patch.

The PeopleTools October 2014 CPU (8.52.24, 8.53.17, 8.54.04) fixes a critical issue with the security of the database passwords for the Connect and Access Ids. This patch MUST be applied in order to safeguard the password for the Access Id (e.g. SYSADM) – regardless of how complex you have made it. The details of the specific vulnerability are best not given further explanation on the Internet.

That said, if you have not already applied the October 2014 CPU, or any CPU since (they are cumulative), and you have questions and/or concerns, please contact us at info@integrigy.com.

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Oracle PeopleSoft, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

database option - multi tenant

Pat Shuff - Mon, 2016-06-20 02:07
Before we dive into what multi-tenant databases are, let's take a step back and define a few terms. With an on premise system we have a computer loaded with a database or a series of databases. Historically the way this was done was by booting the hardware with an operating system and loading the database onto the operating system. We load the OS onto the root file system or "/" in Unix/Solaris/Linux. We create a /u01 directory to hold the ORACLE_HOME, the binaries for the database. Traditionally we load the data into /u02 or keep everything in /u01. Best practice has shown that splitting the database installation into four parts is a good idea. Keeping everything in the root partition is a bad idea because you can fill up your database and lock up the operating system at the same time. We can put the binaries into /u01 and use a RAID-5 or RAID-10 stripe for these binaries. We can then put all of our data into /u02 and place the /u02 file system on flash or other high speed disk to improve performance, since this area has high read and write requirements. We can RAID-5 or RAID-10 this data to ensure that we don't lose it, or use a more advanced striping technology provided by a hardware disk vendor. We then put our backups into /u03 and do a simple mirror for this partition. We can go with lower performing disk to save money on the installation and only keep data for a few days/weeks/months, then delete it, since we have multiple copies of this data. We might replicate it to another data center or copy it to tape and put it into cold storage for compliance requirements as well as disaster recovery fallback. If we are going to replicate the data to another data center we create a /u04 area for change logs and redo logs that will be shipped to our secondary system and applied there to reduce recovery time. Backups give us recovery to the last backup. A live system running Data Guard or Active Data Guard gets us back to within a few seconds, or a transaction or two, rather than hours or days.

The biggest problem with this solution is that purchasing a single system to run a single database is costly and difficult to manage. We might be running at 10% processor utilization the majority of the time but hit 90% utilization for a few hours a week or a few days a month. The system is idle most of the time and we are paying for the high water mark rather than the average usage. Many administrators therefore overload a system, running multiple database instances with different peak usage times on the same box. If, for example, our accounting system peaks on the 25th through the 30th and our sales system peaks on the 5th through the 10th, we can run these two systems on the same box, resource limit each instance during the peak periods, and let them run at 20% the rest of the month. This is typically done by installing two ORACLE_HOMEs in the /u01 directory. The accounting system goes into /u01/app/oracle/production/12.1.0/accounting and the sales system goes into /u01/app/oracle/production/12.1.0/sales. Both share the /u02 file system and put their data into /u02/app/oracle/oradata/12.1.0/accounting and /u02/app/oracle/oradata/12.1.0/sales. Backups are done to two different locations, and the replication and redo logs are similarly shipped to different locations.

Having multiple ORACLE_HOMEs has been a way of solving this problem for years. The key drawback is that patching can get troublesome if specific options are used or installed. If, for example, both use ASM (Automatic Storage Management), you can't patch one database without patching ASM for both. This makes patch testing difficult on production systems because suddenly sales and accounting are tied together and upgrades have to be done at the same time.

Virtualization introduced a solution to this by allowing you to install different operating systems on the same computer and sub-license the software based on the virtual processors assigned to the application. You are suddenly able to separate the storage interfaces and operating system patches and treat these two systems as two separate systems running on the same box. Unfortunately, the way that the Oracle database is licensed has caused problems and tension with customers. The software does not contain a license key or hardware limit and will run on whatever is available. Virtualization engines like VMware and Hyper-V allow you to soft partition the hardware and dynamically grow with demand. This is both good and bad. It is good because it makes it simpler to respond to increased workloads. It is bad because licensing is suddenly flexible and Oracle treats the maximum number of cores in the cluster as the high water mark that needs to be licensed. This is called soft partitioning. Operating systems like Solaris and AIX have hard partitions, and virtualization engines like Oracle VM and Xen provide hard partitions. Customers have traditionally solved this by running an Oracle instance on a single socket or dual socket system to limit the core count. This typically means that the most critical data is running on the oldest and slowest hardware to limit price. Alternatively they run the database on a full blade and license all cores in that blade. This typically causes a system to be over-licensed and under-utilized. The admin might limit the core count to 8 cores but there could be 32 cores in the blade, and all 32 cores must be licensed. Using a virtualization engine to limit the resources between database instances is not necessarily practical and does not offer fine enough resolution. Going with multiple ORACLE_HOME locations has therefore been a growing trend, since you have to license all of the cores under current licensing policies.

Another big problem with the multiple ORACLE_HOME or multiple operating system approach is that you have multiple systems to manage and patch. If we use the 32 core system to run four application database instances in separate virtual machines, we have four of everything to patch: the virtualization engine, the operating systems, and the databases. A better solution would be to run one operating system on all 32 cores, run the four databases from a single ORACLE_HOME, and resource limit each instance so that it doesn't become a noisy neighbor for the other three. We can then use Resource Manager to assign shares to each instance and limit processor, memory, and network bandwidth based on rules so that noisy neighbors don't stop us from getting our job done. We get our shares, and can be the noisy neighbor ourselves if no one else is using resources.

With the 12c version of the database, Oracle introduced an option called multi-tenant. Let's think of a company like SalesForce.com. They don't spin up a new instance for each company that they do business with. They don't install a new ORACLE_HOME for each company. They don't spin up a new operating system and install a new database instance for each company. That would not make economic sense. A five person company would have to spend about $3K/month with SalesForce just to cover the cost of the database license. On the flip side, custom code must be written to stop a user from company A from reading customer contact information belonging to company B. A much simpler way would be to spin up a pluggable database for company A and another for company B. No custom code is required since the records for the two companies are stored in different directories and potentially different disk locations. If we go back and look at our partitioning blog entry we notice that we have our data stored in /u02/app/oracle/oradata/ORCL/PDB1. The ORCL directory is the location of our container database. This contains all of the configuration information for our database. We define our listener at this location. We create our RMAN backup scripts here. We define our security and do auditing at this level. Note that we have a PDB1 subdirectory under this. This is our pluggable database for company A. We would have a PDB2 for company B, and the system01.dbf file in that directory is different from the system01.dbf file located in the PDB1 directory. This allows us to create users with the same name in both pluggables and not have a global name issue. With SalesForce all usernames must be unique because users are stored in a single master database. I cannot, for example, create a user called backupadmin that allows users to log in to company A and back up the data set if there is a user defined with that same name for any other company worldwide. This creates scripting issues and problems. We can't create a single backup script that works across all companies and must create a unique user and script for each company.
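
To make that concrete, here is a minimal sketch (the pluggable names and passwords are hypothetical) showing the same local user being created independently in two pluggable databases of one container:

-- connected to the root of the container database as a suitably privileged common user
ALTER SESSION SET CONTAINER = PDB1;
CREATE USER backupadmin IDENTIFIED BY companyA_pwd1;
GRANT CREATE SESSION TO backupadmin;

ALTER SESSION SET CONTAINER = PDB2;
CREATE USER backupadmin IDENTIFIED BY companyB_pwd2;
GRANT CREATE SESSION TO backupadmin;

Each CREATE USER statement makes a local user that exists only inside its own pluggable database, so a single backup script can log in as backupadmin against whichever pluggable it is pointed at.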

The main concept behind the multi-tenant option is to allow you to run more databases on the same box and reduce the amount of work required to support them. By putting common tasks like backup and restore at the container level, all pluggables on the system are backed up in a central location but separated by pluggable container so that there is no data mingling. Data can be replicated quickly and easily without having to resort to a backup and restore onto a new instance. The system global area (SGA) is common to the container database and shared by all of the pluggables; each session gets its own program global area (PGA) for private SQL areas, sort work areas, and session data.

Note that we have one redo log stream and one undo area for the container. As changes are made they are copied to a secondary system; we don't have to configure Data Guard for each pluggable, only for the container database. When we plug an instance into a container it inherits the properties of the container. If we had a container configured to be RAC enabled, all pluggables in that database would be RAC enabled. We can use Resource Manager in the container database to limit the shares that each pluggable gets and reduce the noisy neighbor overlap that happens in a virtual machine configuration. We also reduce the patching, backup, and overall maintenance required to administer the database.
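
As a rough sketch of what a CDB resource plan looks like (the plan name, pluggable names, share counts, and limits below are made up for illustration), something along these lines can be created in the root container:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- a resource plan that lives in the root of the container database
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'tenant_plan',
    comment => 'Share resources between pluggable databases');

  -- salespdb gets three shares and at most 70% of the CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'tenant_plan',
    pluggable_database => 'salespdb',
    shares             => 3,
    utilization_limit  => 70);

  -- accountingpdb gets one share and at most 40% of the CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'tenant_plan',
    pluggable_database => 'accountingpdb',
    shares             => 1,
    utilization_limit  => 40);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'tenant_plan';

The shares determine the relative CPU each pluggable is guaranteed when the box is busy, while the utilization limit caps how much a pluggable can take even when the others are idle.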

To create a pluggable instance in the Oracle Cloud we need to make sure that we have requested the High Performance or Extreme Performance Edition of the database; the Standard Edition and Enterprise Edition do not include multi-tenant. It is important to note that to get this same feature on Amazon you cannot use RDS, because RDS prohibits the use of this option; you must go with infrastructure as a service on Amazon EC2. Microsoft Azure does not offer the Oracle database at the platform level, so your only option there is Azure Compute.

Creating a pluggable database is simple and can be done from the command line through SQL*Plus. The 12c Database Documentation details this process.

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password;

or

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password ROLES=(DBA);

or more complex

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password
  STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
  DEFAULT TABLESPACE sales 
    DATAFILE '/disk1/oracle/dbs/salespdb/sales01.dbf' SIZE 250M AUTOEXTEND ON
  PATH_PREFIX = '/disk1/oracle/dbs/salespdb/'
  FILE_NAME_CONVERT = ('/disk1/oracle/dbs/pdbseed/', '/disk1/oracle/dbs/salespdb/');

Note that we can make the creation simple or spell out all of the options and file locations. In the last example we create the pluggable database by cloning the existing pdbseed; in our example this seed is located under /u02/app/oracle/oradata/ORCL. We pull from the pdbseed directory and push into the salespdb directory. All three examples do this, but the third details all of the options and configurations.

When we create the pluggable from the SQL*Plus command line without spelling out the file locations, the database chooses a directory name for us. When we executed the simple form we got a long string of numbers (the pluggable's GUID) as the directory name for our new pluggable database called salespdb, which is why we might prefer the more complex form.
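
If you want to confirm which GUID-named directory a new pluggable actually landed in, a couple of dictionary queries along these lines (a sketch, run from the root container) will show it:

-- the GUID that shows up as the directory name for each pluggable
SELECT pdb_name, guid FROM dba_pdbs;

-- the datafiles for every container, including the new salespdb
SELECT con_id, name FROM v$datafile ORDER BY con_id;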

We could do the same thing through SQL Developer and have it guide us through the file renaming steps. It prompts us for the new file names, showing where the seed files are coming from. We could just as easily have cloned salespdb and used it as our foundation rather than creating one from the pdbseed. We right click on the container database in the connection tree and it offers options to create, clone, or unplug a pluggable; selecting create walks us through a short sequence of dialogs.
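
For reference, the same clone (and a later unplug and plug into another container) can also be done from the command line; the statements below are a sketch with made-up paths rather than the exact steps used here:

-- in 12.1 the source pluggable must be open read only before it can be cloned
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb OPEN READ ONLY;

CREATE PLUGGABLE DATABASE devpdb FROM salespdb
  FILE_NAME_CONVERT = ('/disk1/oracle/dbs/salespdb/', '/disk1/oracle/dbs/devpdb/');
ALTER PLUGGABLE DATABASE devpdb OPEN;

-- unplugging writes an XML manifest that another container database can plug back in
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/tmp/salespdb.xml';

-- on the target container database (which must be able to see the unplugged datafiles)
CREATE PLUGGABLE DATABASE salespdb USING '/tmp/salespdb.xml' NOCOPY;
ALTER PLUGGABLE DATABASE salespdb OPEN;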

One thing that we have not talked about yet is economics. If you want to run multi-tenant on premise you need to purchase a database license at $47.5K per two processors plus the multi-tenant option at $23K per two processors. This comes in at $60.5K for the license and $13,310 per year for support. Using our four year cost of ownership this comes in at roughly $2.5K per month for the database license. The High Performance Edition comes in at $4K per month. Along with this you get about $5K worth of additional features like diagnostics, tuning, partitioning, compression, and a few others that we have not covered yet. If you are going to run these options on Amazon or Azure you will need to budget that $2.5K for the database license, and more if you want the other features, on top of the processor and storage costs for those cloud services. You should also budget the outgoing data charges that you do not have to pay with the non-metered database service in the Oracle Cloud. Going with the multi-tenant option is cheaper than running the database on two servers and easier than running two ORACLE_HOME instances on the same machine. Going with the High Performance Edition gets you all of these options and offloads things like scale up, backup, initial configuration, and restart of services if a process fails.

In summary, multi-tenant is a good way of consolidating several databases onto a single server. The resource management features of the container allow us to dynamically change the allocation to a pluggable database, giving more resources to instances that need them and limiting noisy neighbors. With the High Performance Edition and Extreme Performance Edition we get multi-tenant as a foundation of the service. Our primary interfaces for creating a pluggable database are SQL Developer, Enterprise Manager, and SQL*Plus. We can easily clone an existing instance for a dev/test replica or unplug an instance and plug it into another system. We will look at this more in depth tomorrow.

Next Round Of ANZ “Let’s Talk Database” Events (July/August 2016)

Richard Foote - Mon, 2016-06-20 00:51
I’ll be presenting the next round of “Let’s Talk Database” events around Australia and NZ this winter in July/August 2016. These are free events but due to limited places have often “sold out” in the past, so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. We always have […]
Categories: DBA Blogs

Links for 2016-06-18 [del.icio.us]

Categories: DBA Blogs
