Feed aggregator

Pester: Cannot bind argument to parameter ‘Actual’ because it is an empty string.

Matt Penny - Tue, 2016-06-21 03:23

I’m just getting started with Pester and I got this error

   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1

The code I’m testing is very simple – it just separates a ‘Property Name’ and a ‘Property Value’.
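
The function itself isn’t shown in the post; a minimal PowerShell sketch consistent with the debug output below might look like this (the names and logic are my reconstruction, not necessarily the real code):

function get-HugoNameAndValue {
    [CmdletBinding()]
    param ([string]$FrontMatterLine)
    # split the line around the first colon
    $PositionOfFirstColon = $FrontMatterLine.IndexOf(':')
    $PropertyName  = $FrontMatterLine.Substring(0, $PositionOfFirstColon)
    $PropertyValue = $FrontMatterLine.Substring($PositionOfFirstColon + 1).Trim()
    [pscustomobject]@{
        PropertyName  = $PropertyName
        PropertyValue = $PropertyValue
    }
}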

So, when it’s working it does this:

get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
DEBUG: 09:15:37.6806 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}

PropertyName PropertyValue
------------ -------------
Weighting    103          

When I ran it from Pester I got this

GetHugoNameAndValue 06/21/2016 08:45:19 $ invoke-pester
Describing get-HugoNameAndValue
DEBUG: 08:45:56.3377 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [-] returns name and value 189ms
   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1
Tests completed in 189ms
Passed: 0 Failed: 1 Skipped: 0 Pending: 0

My Pester code was:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.Value
        $value | Should Be '103'
    }
}

The problem here was simply that I’d got the name of the property wrong: it was ‘PropertyValue’, not just ‘Value’, so $Hugo.Value was empty and Pester’s Be assertion couldn’t bind it.

So I changed the Pester test:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.PropertyValue
        $value | Should Be '103'
    }
}

…and then it worked:

invoke-pester
Describing get-HugoNameAndValue
DEBUG: 09:22:21.2291 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [+] returns name and value 99ms
Tests completed in 99ms
Passed: 1 Failed: 0 Skipped: 0 Pending: 0

Categories: DBA Blogs

database option - multi tenant part 2

Pat Shuff - Tue, 2016-06-21 02:07
Yesterday we looked at the concept behind multi-tenant and talked about the economics and operational benefits of using the option. Today we are going to look at examples and technical details. Fortunately, this is a hot topic and there are a ton of interviews, tutorials, and videos showing you how to do this. The best place to start is Oracle Technology Network - multitenant. This site lists seven offerings from Oracle Education, six online tutorials, and four video demos. Unfortunately, most books are a little light on this topic, covering it briefly in a chapter buried near the end. Only the most recent books cover this topic directly; two of these are pre-order and the third is only a few months old. The other books talk about it as an abstract term with little or no examples. Safari Books does not have many titles that cover this subject because the topic is so new and few books have been published on it.

The Oracle Base Blog has a series of postings about multitenant and does a really good job of showing diagrams and sample code. There is a significant amount of information at this site (24 posts) looking at the subject in depth. I normally provide a variety of links to other bloggers but I think that this work is good enough to deserve top billing by itself.

Internal to Oracle the GSE Demo pages have a few demos relating to multi-tenant.

  • PaaS - Data Management (Solutions Demo) has a hands on tutorial in the cloud
  • DB12c Multi-Tenant Workshop by William Summerhill is on retriever.us.oracle.com
  • Oracle Database 12c multitenant and in-memory workshop using OPC by Ramulu Posham on retriever.us.oracle.com

For the rest of this blog I am going to go through the workshop by Ramulu Posham because it is the most complete and does everything 100% in the cloud. We could do this on the Oracle Public Cloud using DBaaS, or a database installed in IaaS on Oracle, Amazon, or Azure. We cannot do this on Amazon RDS because multi-tenant is disabled there.

The schedule for the workshop is

  • 9:00 - 9:15 intro
  • 9:15 - 10:15 cloud intro
  • 10:30 - 2:00 multi-tenant workshop
The workshop consists of creating two pluggable database instances in the cloud and looking at pluggable creation, cloning, and snap cloning. The assumption is that you have a public cloud account and can create two 12c databases in the cloud with less than 100 GB of disk space. You can do this on two 1 OCPU / 7.5 GB instances, but you need High Performance or Extreme Performance Edition to get multi-tenant to work. The only software that we will need for our Windows 2012 IaaS instance is SwingBench, which helps put a load on the database and allows us to look at resource utilization for a container database and our pluggable instances.

The flow of the workshop is shown in the following slide. We are going to create a database with data in the container, create another database, and end up with both as pluggable databases on one instance.

Some of the more interesting slides from the presentation are shown below. The file location slide helped me understand where resources get allocated. The redo logs, for example, are part of the container database and not each pluggable. You set up Data Guard for all pluggables by configuring the container, and replication happens automatically. The discussion on cloning a database is interesting because you don't need to copy all of the data. You only copy the header information and reference the same data between the original and the clone. Changes are tracked with file links as they are updated on both sides. The managing slide helped me understand that there is still a DBA for each pluggable as well as a master DBA for the container. Seeing that in a picture helped me understand it better. There are also multiple slides on resource management and shares. I pulled a representative slide as well as the summary benefits slide. This is what is covered in the first hour prior to the hands-on labs starting.

To start the lab, we create a 12c High Performance instance called salessvc$GC where $GC is a substitute for each lab participant. We will use 01 as our example so we will create salessvc01.

Note that we call the database instance salessvc01, the ORACLE_SID salesc01, and the pluggable container sales01. We cannot have the ORACLE_SID and the pluggable instance named the same because it would confuse the listener, so those names must be different. The creation takes between 30 minutes and an hour on our demo accounts. While we are waiting we will create a second instance with the name cmrsvc01 with similar parameters, using the SID of crms01 and a pluggable container of crm01.

Once we have the IP addresses of the two instances we can create ssh tunnels for ports 443, 5500, 80, and 1521. This is done in our PuTTY client. The presentation material for the labs goes through all of these steps with very good screenshots. We have covered all of this before and are just summarizing the steps rather than detailing each one. Refer to previous blog entries or the workshop notes on how to do this.
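
If you prefer a plain OpenSSH client to PuTTY, the equivalent tunnels would look roughly like this (the key file and user are assumptions; opc is the usual Oracle Public Cloud OS user, and forwarding local ports below 1024 requires root):

ssh -i ~/.ssh/my_cloud_key opc@<instance-ip> \
    -L 443:localhost:443 -L 5500:localhost:5500 \
    -L 80:localhost:80 -L 1521:localhost:1521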

The sales instance looks like the screen shots below. We configure the ports and look at the directory structure in the /u02/app/oracle/oradata directory to verify that the sales01 pluggable database was created under the container database salesc01.

Once we have the database created and the ports configured, we download and launch SwingBench to load data into the database and drive loads to test response time and SQL performance.

We need to download SwingBench and Java 8 to execute the code properly. Once we download SwingBench we unzip it and install it with a java command.

The only real trick in the install is that we must execute lsnrctl status on the database to figure out what the listener is expecting in the connection string. We do this, then type in localhost:1521 and the connection string to populate the database with SwingBench.
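
The output tells you the exact service name to use; abbreviated, it looks something like this (the service domain shown here is an assumption, yours will differ):

lsnrctl status
...
Service "sales01.mydomain.oraclecloud.internal" has 1 instance(s).
  Instance "salesc01", status READY, has 1 handler(s) for this service...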

We repeat this process for the cmrc01 instance, repeat the SwingBench install, and then unplug the soe database from the crmc01 pluggable database and plug it into the salessvc01 database service as a pluggable. The big issue here is having to unplug and copy the xml file. It requires uploading the private key and allowing ssh between the two instances. Once this is done, SwingBench is run against both instances to see if performance improves or degrades with two pluggables on the same instance. The instructions do a good job of walking you through all of this.

Overall, this is a good workshop to go through. It describes how to create pluggable containers and how to unplug and clone PDBs. It is a good hands-on introduction and even brings in performance testing and a sample program to generate a synthetic load.

Start problems with Nodemanager

Darwin IT - Tue, 2016-06-21 02:05
Since my previous long-running assignment, I have been involved in a few OSB 11g to 12c upgrade projects, where I have been working on automatic installs. Hence my articles about automatic installs, patching and domain configuration.

When I create a new domain, 12cR2 (12.2.1), using my scripts, I'm not able to use the domain's nodemanager to start the servers. Actually, I can't even connect to it. I get it running all right, but connecting to it fails with a message like:
WLSTException: Error occured while performing nmConnect : Cannot connect to Node Manager. : Access to domain 'osb_domain' for user 'weblogic' denied.

When you google this, the standard solution is to change the nodemanager password on the security tab of the domain in the console, like in this article. Actually a very good suggestion, but it did not work for me. It turns out that performing an nmEnroll() did the job in this case. What you need to do is:
  • Start the admin server using the startWeblogic.sh script in the domain root.
  • Verify and correct the nodemanager settings under Environment->Machines-><your machine>, ensuring it is inline with the file nodemanager.properties in {domainHome}/nodemanager.
  • Start wlst.sh
  • Connect to the adminserver using: connect('{adminuser}', '{adminpassword}', '{adminHost}:{adminPort}')
  • Perform: nmEnroll('{domainHome}', '{domainHome}/nodemanager')
Here I assume the use of a per-domain nodemanager, where {domainHome}/nodemanager is the nodemanager home within the domain home.
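
For example, a sketch with made-up values (substitute your own credentials, admin host, and domain home):

# in wlst.sh; illustrative values only
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')
nmEnroll('/u01/app/oracle/user_projects/domains/osb_domain',
         '/u01/app/oracle/user_projects/domains/osb_domain/nodemanager')
disconnect()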

Then I was able to connect to the nodemanager, start the AdminServer and then the OSB Server.


At my customer, they have been struggling to configure a 'standalone' nodemanager, as they did in the 11g situation. The nodemanager can be started and connected to, but doing an nmStart of the admin server got:

wls:/nm/osb_domain> nmStart('Adminserver')
Starting server Adminserver ...
Traceback (innermost last):
  File "", line 1, in ?
  File "", line 188, in nmStart
  File "", line 553, in raiseWLSTException
WLSTException: Error occurred while performing nmStart : Error Starting server Adminserver : Received error message from Node Manager Server: [Server start command for WebLogic server 'Adminserver' failed due to: [No such file or directory]. Please check Node Manager log and/or server 'Adminserver' log for detailed information.]. Please check Node Manager log for details.
Use dumpStack() to view the full stacktrace :
wls:/nm/osb_domain

This is also on 12cR2 (12.2.1), with a domain created with the same script. Sharp eyes may notice that the adminserver name is not the default: it has a lowercase 's' instead of an uppercase one. They had been fiddling around with the naming of the admin server. What we finally did was to keep the non-default naming, but clean the server folder by removing the data, cache and tmp folders. We also removed the logs folder to be able to see whether new logs were created when starting from the nodemanager. We configured the per-domain nodemanager and then did the same as above, performing an nmEnroll() against the domain nodemanager. After that the 'Adminserver' was startable.

Conclusion

I hardly ever had the need to use nmEnroll(): not with a new domain, and in 11g not even when using a separate nodemanager. From colleagues I did not hear of the need to use it in 12c either. So why did I need it to solve the problems sketched here? I haven't sorted that out yet; I hope to get a finger behind it some day. For the moment, take advantage of these experiences.

Links for 2016-06-20 [del.icio.us]

Categories: DBA Blogs

Monitoring AWS Costs

Jeff Kemp - Tue, 2016-06-21 00:38

I’ve been running my Apex sites on Amazon EC2 for many years now, and I’ve gone through a number of infrastructure upgrades and price changes over time. I have some alerts set up, e.g. if a server starts getting very busy or if my estimated charges go over a threshold. Today I got an alert saying my estimated monthly bill will be over $100 which is unusual.
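
In case you want a similar alert, here is a sketch of the billing alarm using the AWS CLI. The alarm name, threshold and SNS topic ARN are placeholders; billing alerts must be enabled on the account first, and the billing metrics only live in us-east-1:

aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-over-100 \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts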

One of the most useful reports in AWS is the Instance Usage Reports (found under Dashboard > Reports > EC2 Instance Usage Report). I tell it to report Cost, grouped by Instance Type, which gives me the following:

aws_instance_usage_report

As you can see, my daily cost was about $1.58 per day, and this shot up on the 16th (note: these rates are for the Sydney region). I was running Oracle on an m1.medium SUSE Linux instance until June 16, when I upgraded it to an m3.medium instance. I have a Reserved Instance (RI) for m1.medium, but not for m3.medium, which is why the cost has shot up. That RI will expire soon; I will purchase an m3.medium RI which will bring the cost of that instance back down to about $1 per day. Until I do that, I will be charged the “On Demand” rate of $4.63 per day.

I’m also running two t2.nano Amazon Linux instances as my frontend Apache servers. Even though they are the smallest available instance type (nano), they barely register over 1% CPU most of the time. I’ve moved all the DNS entries across to one of those nano instances now, so I will soon decommission one which will save me a few extra dollars per month.

As an Apex developer, outsourcing the hardware-related worries to AWS has been the best decision I’ve made. I’ve only suffered a couple of significant outages to date, and in both instances all my servers were still running without issue when connectivity was restored. I can spin up new instances whenever I want, e.g. to test upgrades (you might notice from the graph that I did a test upgrade on an m3.medium instance on June 14).

In case you’re wondering, the total time I needed to take down my Apex instance, take a snapshot, spin up the new instance, and swap the IP address across to it, was about 30 minutes. And that included about 10 minutes lost because I accidentally picked an incorrect option at one point. Not only that, but my upgrade also included changing from magnetic disk to SSD, which seems a bit faster. Overall I’m pretty happy with all that.


Filed under: Oracle

[EM-01901] Internal error in EM Patch Advisor for Patching using EM12cR5

Arun Bavera - Mon, 2016-06-20 23:33

I am trying to patch the DB 12.1.0.2.160419 ( APR2016_PSU ) on OEL 6 with Patch 20807398 from Note:2034610.1 using EM12cR5.

OMS-side patches:

  • OMS PSU APR Patch: 22570344
  • OMS System Side Bundle Patch: 23218275
  • Patch: BUG:21364690

Agent-side patches:

  • Agent-Side 12.1.0.5.160531 (May 31, 2016): Patch 23124338
  • Agent-Side Monitoring 12.1.0.8.160531 (May 31, 2016): Patch 23217639
  • Agent-Side Discovery 12.1.0.8.5 (November 30, 2015): Patch 22135601

I get the error below, which points to the OMS HOME, even though I am analyzing Patch 20807398 on a DB Home!

image

The EM logs show the following exception:

2016-06-21 02:21:29,930 [RJob Step 115760] FATAL patch.InitSADB logp.251 - [EM-01901] Internal error in EM Patch Advisor
java.lang.NumberFormatException: For input string: "6000000000093016041902"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
    at java.lang.Long.parseLong(Long.java:422)
    at java.lang.Long.valueOf(Long.java:525)
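
As an aside, the exception itself is easy to explain: the 22-digit string in the log exceeds Long.MAX_VALUE (9223372036854775807, roughly 9.2 x 10^18), so Long.valueOf has no choice but to throw. A quick stand-alone demonstration:

public class LongOverflow {
    public static void main(String[] args) {
        // Long.MAX_VALUE has 19 digits; the patch metadata value has 22,
        // so parsing it as a Long fails with NumberFormatException.
        System.out.println(Long.MAX_VALUE); // 9223372036854775807
        System.out.println(Long.valueOf("6000000000093016041902")); // throws
    }
}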

I have reviewed these notes:

ERROR: CHK-012: An Unexpected Exception Occurred when Patching Database (Doc ID 1951435.1)

EM 12c: Enterprise Manager 12c Cloud Control Patch Plan Fails when Checking Target Properties with Error: [EM-01901] Internal error in EM Patch A (Doc ID 1946028.1)

Solution:

Download the patch manually to the Software Library; the issue seems to be with downloading the patch automatically from EM.

Categories: Development

Go Mobile with Oracle JET As Easy As 1 2 3

Andrejus Baranovski - Mon, 2016-06-20 18:38
Oracle JET allows you to build and run mobile hybrid applications. It uses Cordova to run on mobile devices and to provide access to device services. This is cool, and what is great about it is that it gets you started with mobile development in minutes. Besides all this, it is free.

I will describe the steps I followed to generate a JET mobile hybrid app and run it in a local browser (in the next posts I will describe how to run it on a simulator and an actual device).

First of all you must install Node.js and npm on your machine. This will give you the shell environment to execute the various commands related to JET app generation, setup, build and deployment. Read this article to learn how to do it - Installing Node.js and updating npm.

Next, follow Step 1 and install Cordova per Step 5 of the JET Get Started list.
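
For reference, the global installs on my machine looked roughly like this (package names as per the JET Get Started guide at the time; prefix with sudo on Mac OS if you hit permission errors):

npm install -g yo grunt-cli
npm install -g generator-oraclejet
npm install -g cordova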

You are ready to generate the JET mobile hybrid app at this point. If you are on Mac OS, don't forget to use sudo, otherwise there will be permission-related errors. Run Yeoman to generate the app:

yo oraclejet:hybrid JETMobileApp --template=navBar --platforms=ios

At this stage you can choose a predefined JET template; here I'm generating it with the navBar template for the iOS platform. It must complete with Done, without error messages:


Make sure to navigate to the app folder with cd AppName:


Build the generated app with grunt. This is the command I used to build it for iOS (you can see from the log that it produces an *.app file, which can be deployed to a mobile device as an application):

grunt build:dev --platform=ios

It must complete with Done, without errors:


Run the application to test it in a local web browser. See more options (deploying to a simulator or device) here - Serve a Hybrid Mobile Application with Grunt. Use this grunt command:

grunt serve --platform=ios --web=true --disableLiveReload=true

I'm running it with disableLiveReload=true for a reason. It seems like live reload initialization takes long time to start. Template based JET mobile hybrid app is started (in web browser, for testing):


The generated project can be opened in NetBeans; simply select the project from the folder:


Under the src folder you will find the JET content. Try changing text in any of the generated pages:


Rebuild and serve the application; you should see the changes deployed, for the customers page in this case:

HOWTO solve any problem recursively, PL/SQL edition…

RDBMS Insight - Mon, 2016-06-20 17:47
PROCEDURE solve (my_problem IN VARCHAR2) IS
  my_idea     VARCHAR2(4000);
  my_code     VARCHAR2(4000);
  new_problem VARCHAR2(4000);
BEGIN
  my_idea := have_great_idea (my_problem) ;
  my_code := start_coding (my_idea) ;
  IF i_hit_complications (my_idea)
  THEN 
    new_problem := the_complications (my_idea);
    solve (new_problem);
  ELSE
    NULL; --we will never get here
  END IF;
END solve;

This abuse of recursion was inspired by @ThePracticalDev!

Categories: DBA Blogs

Cloud Integration made easy (in Cloud)

Peeyush Tugnawat - Mon, 2016-06-20 16:32

Integration has always been a critical (but challenging) requirement for an enterprise given the diverse systems and applications in a typical IT landscape.

Today, Software as a Service (SaaS) adoption is driving the need for a cloud based integration solution. When businesses run their applications in the cloud, connecting information and processes between them and to on-premise systems is critical.

Integration is also expected to match the SaaS criteria – instant provisioning, rapid development, maintenance-free and scalable. Oracle Integration Cloud Service is a pure cloud based platform for highly productive, simplified integration in the cloud. 

Oracle Integration Cloud Service (ICS) is making Cloud integration extremely easy for not only Oracle SaaS applications but also various third party SaaS applications.

See what ICS can do for you at https://cloud.oracle.com/integration

When changing CURSOR_SHARING takes effect?

Yann Neuhaus - Mon, 2016-06-20 13:15

I usually don’t advise setting CURSOR_SHARING=FORCE, but imagine your application requires it, you forgot it (or tried to do without) on migration, and then everything is slow. You want to change it, but when does it take effect? New execution? New parsing? New session?

EXACT

I have the default value, where the parent cursor is shared only when the sql_text is the same:

SQL> show parameter cursor_sharing
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT

And I check with a query that the predicate is not changed:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

FORCE

I change it at the system (=instance) level:

SQL> alter system set cursor_sharing=force;
System altered.
 
SQL> show parameter cursor_sharing
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      FORCE

I tested without session cached cursors:

SQL> alter session set session_cached_cursors=0;
Session altered.

and even from another session

SQL> connect / as sysdba
Connected.

But the statement still has its literal predicate:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

No invalidation, no new cursor. Same old statement.

FLUSH SHARED_POOL

Only when I flush the shared_pool can I execute the statement with literals replaced:

SQL> alter system flush shared_pool;
System altered.
 
SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"=:SYS_B_0)

If you fear a hard parse fest, you can flush specific cursors instead. I’ve documented the procedure in a previous post.
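
For example, with DBMS_SHARED_POOL (a sketch; the address and hash_value below are placeholders, look up the real ones in v$sqlarea first):

SQL> select address, hash_value from v$sqlarea
  2  where sql_text like 'select * from dual where dummy=%';
 
SQL> exec dbms_shared_pool.purge('000000008A1B2C3D,1234567890','C');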

Autotrace

As a side note, do not rely on autotrace for this:

SQL> set autotrace on explain
SQL> select * from dual where dummy='X';
 
D
-
X
 
Execution Plan
----------------------------------------------------------
Plan hash value: 272002086
 
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DUMMY"='X')

Just one more thing that is special with autotrace…

Conclusion

I don’t know exactly how cursor_sharing=force is managed. I thought that the literal replacement occurred before searching for the parent cursor. Don’t hesitate to comment here if you know the ‘why’ behind this behavior. My goal here was just to test what has to be done in order for a cursor_sharing change to take effect immediately.

 

The article When changing CURSOR_SHARING takes effect? appeared first on Blog dbi services.

ASM iostats

Yann Neuhaus - Mon, 2016-06-20 12:15

A few screenshots and a link here. Sysadmins do not like ASM because they don’t have the tools they like to manage the disks. For example, they don’t want to run SQL queries to check performance, and asmcmd iostat is quite limited. Here is a nice way to get I/O statistics easily from the command line.

The perl script is from Bertrand Drouvot (do not miss his twitter profile picture) and is easily downloadable from his blog:
https://bdrouvot.wordpress.com/2013/10/04/asm-metrics-are-a-gold-mine-welcome-to-asm_metrics-pl-a-new-utility-to-extract-and-to-manipulate-them-in-real-time/

It only runs queries against the ASM instance, so there is no risk.

In order to show the relevance, I took screenshots from this script and from the XtremIO console on a system where all ASM disks, and only those, are on the XtremIO brick, so you can compare statistics from the storage array and from the ASM instance.

Bandwidth

ASMIOScreenshot 2016-06-20 15.18.55

IOPS

ASMIOScreenshot 2016-06-20 15.19.08

Latency

ASMIOScreenshot 2016-06-20 15.19.17

 

The article ASM iostats appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: distributed availability groups

Yann Neuhaus - Mon, 2016-06-20 10:49

This time, I will talk about distributed availability groups. What’s that? In short, a group of availability groups. Sounds good, right? But in which cases might we need such an architecture? Firstly, let’s say that distributed availability groups run on top of two distinct availability groups, meaning that they reside on two distinct WSFCs with their own quorum and voting management. Referring to the Microsoft documentation here, we may think that this new feature will mainly be used in DR scenarios, but I’m not sure I understand Microsoft about this sentence:

You want the data to continually replicate to the DR site, but you don’t want a potential network problem or issue at the DR site to bring down your primary site

Indeed, we don’t need this new feature to prevent a DR site or a network failure from bringing down the primary site. We may simply exclude cluster nodes at the DR site from voting. Maybe I missed something here; I will probably come back to this when I get more information.

Moreover, I cannot imagine my customers using an additional cluster just for DR purposes. Even supposing that the DR site is costless from a SQL Server license perspective, we would need to maintain a “complex” architecture (WSFC) only for that.

After discussing with another French MVP, Christophe Laporte (@Conseilit), we began to draw some pictures of potential scenarios where using a DAG may be a good solution. Firstly, consider a customer that has many applications. Some of them may run on the primary site and others on the DR site because there is high latency between the two datacenters. In this specific context, you may implement one availability group in each datacenter for HA purposes and add a distributed availability group for DR recovery.

blog 96 - 7 - DAG scenario

Now let’s say a big company buys another business that includes an existing datacenter infrastructure with availability groups used by the old company. They may want to introduce HA + DR by using a distributed availability group across both sides without performing any big changes. Of course, we have to keep in mind the cost of such an architecture…

Probably another advantage of distributed availability groups is that the primary replica only has to send log blocks to one primary replica on the other availability group. Let’s imagine a traditional implementation with 2 synchronous replicas at the primary site and 4 or maybe 6 asynchronous replicas at the DR site used for scale-out reporting purposes. In this case, even if all the read-only replicas are asynchronous, the failure of one may impact the primary replica, because the transaction log file won’t be truncated by backup log operations until we fix the issue.

blog 96 - 8 - DAG scenario

We may potentially have up to 6 replicas, which can lead to transaction log management issues in this specific context. Let’s now change the game by including all of the read-only replicas in one specific availability group at the DR site, which is itself included in a DAG. The failure of one read-only replica on the DR site then only impacts the primary of the DR availability group.

 

blog 96 - 9 - DAG scenario

I believe that other scenarios are possible and we will discover some of them through experience. Please feel free to comment or add your thoughts :-)

Ok, it’s time to implement our first DAG. In my lab environment I set up two additional virtual machines and then configured another WSFC that includes these two virtual machines. So I finally have an environment that includes two WSFCs with two nodes each. The first one is already installed and used for direct seeding (see the direct seeding post). We will also leverage direct seeding when implementing the DAG, in order to replicate the WideWorldImporters database between the two availability groups.

But before installing the DAG itself, let’s install the second availability group.

:CONNECT WIN20123SQL16\SQL16

Use master;

-- primary replica
CREATE AVAILABILITY GROUP [testGrp2]   
FOR    
REPLICA ON 
N'WIN20123SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20123SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = MANUAL,  
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20124SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE =MANUAL,   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC);   
GO  

ALTER AVAILABILITY GROUP [testGrp2]    
ADD LISTENER 'lst-testgrp2' 
( 
	WITH IP ( ('192.168.5.121', '255.255.255.0') ) , 
	PORT = 1433);    
GO 

:CONNECT WIN20124SQL16\SQL16

USE master;

-- secondary replica
ALTER AVAILABILITY GROUP [testGrp2] JOIN;
ALTER AVAILABILITY GROUP [testGrp2] GRANT CREATE ANY DATABASE;
GO

And finally the distributed availability group

:CONNECT WIN20121SQL16\SQL16

USE master;

-- Primary cluster 
--DROP AVAILABILITY GROUP [distributedtestGrp]  
CREATE AVAILABILITY GROUP [distributedtestGrp]  
WITH (DISTRIBUTED)   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
	AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
	FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO   

:CONNECT WIN20123SQL16\SQL16

USE master;

-- secondary cluster
ALTER AVAILABILITY GROUP [distributedtestGrp]   
JOIN   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO

 You may notice the special syntax DISTRIBUTED. In addition, the replicas have been replaced by the listeners of each availability group.

blog 96 - 13 - DAG SSMS

One interesting thing here is that we can’t manage the DAG from SSMS; all options are greyed out. So let’s have a look at some DMVs.

select 
	ag.name as group_name,
	ag.is_distributed,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

 

blog 96 - 12 - DMV config

Note that for a DAG, the replica names are the availability group names.
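
To check the role and health of the DAG from DMVs, a query along these lines can help (a sketch; it simply filters on the new is_distributed flag):

select
	ag.name as group_name,
	ar.replica_server_name,
	ars.role_desc,
	ars.connected_state_desc,
	ars.synchronization_health_desc
from sys.availability_groups as ag
join sys.availability_replicas as ar
	on ar.group_id = ag.group_id
join sys.dm_hadr_availability_replica_states as ars
	on ars.replica_id = ar.replica_id
where ag.is_distributed = 1;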

We can get information about direct seeding between the two availability groups in the same manner as previously:

 

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

blog 96 - 11 - WideWorldImporters

 

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

blog 96 - 10 - WideWorldImporters

Ok, let’s perform a basic test that consists in creating a table and inserting data into the WideWorldImporters database from each listener.

:CONNECT lst-testgrp

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

:CONNECT lst-testgrp2

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

 

Connecting to lst-testgrp…
(1 row(s) affected)
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
Msg 3906, Level 16, State 2, Line 14
Failed to update database “WideWorldImporters” because the database is read-only.
Msg 3906, Level 16, State 2, Line 18
Failed to update database “WideWorldImporters” because the database is read-only.
Disconnecting connection from lst-testgrp2…

Ok, as expected, the availability group at the DR site is in READ ONLY mode: I’m not able to create or update anything from there.

Let’s perform a last test after switching over the DAG from testGrp to testGrp2. From the primary availability group:

ALTER AVAILABILITY GROUP [distributedtestGrp] FORCE_FAILOVER_ALLOW_DATA_LOSS

 

blog 96 - 13 - DAG SSMS after failover

 

This time I cannot insert data from the first availability group, testGrp:

Connecting to lst-testgrp…
Msg 976, Level 14, State 1, Line 2
The target database, ‘WideWorldImporters’, is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group.  For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
(1 row(s) affected)
Disconnecting connection from lst-testgrp2…

In this blog we have introduced two new features shipped with SQL Server 2016 availability groups. At first glance, they seem to be pretty cool features and will extend the scope of availability group capabilities. I think we will see the pros and cons over time during our other tests and implementations at customer shops. If so, other blog posts will be coming soon.

Stay tuned!

The article SQL Server 2016 AlwaysOn: distributed availability groups appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding

Yann Neuhaus - Mon, 2016-06-20 10:26

In this blog I would like to talk about two new features shipped with SQL Server 2016 for AlwaysOn availability groups. The first one concerns the new direct seeding capability and the second one introduces distributed availability groups (DAG). I will talk about the second one in a separate blog post.

First of all, let’s talk about direct seeding. At first glance, this feature seems very interesting because it greatly simplifies the process of adding a highly available database. Indeed, with previous versions, adding a database to an availability group required some extra steps from the database administrator to make it highly available, like backup/restore or initializing the replication process. Let’s say a software editor wants to install or add a database (SharePoint for instance).

Enabling seeding mode for the concerned replicas reduces the amount of work of adding databases by automating the initialization process a little more. We just have to add the database to the availability group and it’s over: no extra backup / restore tasks, no need to configure a file share for backup / restore, and no manual initialization. The CREATE and ALTER AVAILABILITY GROUP syntax includes a new per-replica parameter named SEEDING_MODE that has two settings: MANUAL and AUTOMATIC. The former means that we will use the legacy method to initialize databases on secondaries (by using backup / restore), and the latter will use the new automated initialization method that seeds database data across the replication network.
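
Note that for an existing availability group nothing has to be recreated: SEEDING_MODE is a per-replica setting that can be changed online. A minimal sketch, using one of the replica names from the demo below:

alter availability group [testGrp]
modify replica on N'WIN20122SQL16\SQL16' with (seeding_mode = automatic);
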
Let’s have a quick demo of this new feature. I will configure a basic availability group with two replicas (WIN20121SQL16\SQL16 and WIN20122SQL16\SQL16). You may notice the new parameter SEEDING_MODE = AUTOMATIC, meaning that we will use the new automated method for initializing databases for this availability group. You may also notice that we no longer need to create a “dummy” database before creating the availability group.

On the primary:

-- primary replica
CREATE AVAILABILITY GROUP [testGrp]   
FOR   
REPLICA ON 
N'WIN20121SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20122SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20122SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC);   
GO

System views and DMVs have been updated accordingly. For example, the sys.availability_replicas view provides a new seeding_mode column.

select 
	ag.name as group_name,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

blog 96 - 1 - availability group and replicas state

Let’s complete the configuration of the availability group by granting the availability group itself the extra permission to create databases on the secondaries.

ALTER AVAILABILITY GROUP [testGrp] JOIN;
ALTER AVAILABILITY GROUP [testGrp] GRANT CREATE ANY DATABASE;
GO

At this point we just have to add the new WideWorldImporters database to the testGrp availability group and our job is over!

-- primary replica add database WideWorldImporters to testGrp
ALTER AVAILABILITY GROUP [testGrp] ADD DATABASE [WideWorldImporters];
GO

The system will then seed the database data to the secondary. We can get information on seeding progress from two new DMVs: sys.dm_hadr_automatic_seeding and sys.dm_hadr_physical_seeding_stats.

The sys.dm_hadr_automatic_seeding DMV gives information about successful or failed database seedings and the corresponding error messages. The sys.dm_hadr_physical_seeding_stats DMV provides information about currently running seeding operations, like the estimated completion time and related statistics about I/O and network usage.

Here is what I found after adding the database to the availability group:

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

blog 96 - 2 - seeding sessions info

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

 

blog 96 - 3 - seeding progress info

The WideWorldImporters database is not so big by default (roughly 455MB). The estimated duration to seed the database is approximately 1’14’’.

Let’s check that estimate with a little math, using the transfer rate of ~42MB/s (the transfer_rate_bytes_per_second column value) and the total size of data to transfer: 455 / 42 ~= 10s to transfer all the data.

Let’s now have a look at the SQL Server error log on the secondary. We can see information related to the seeding operation.

blog 96 - 6 - import error log secondary 2

The seeding operation occurred between 19:15:53 and 19:16:08, so approximately 15 seconds, which is not so far from our earlier math. The estimated duration based on start time and estimated completion time seems to be less accurate than the transfer throughput, but I need to perform further tests and investigations before coming to any conclusion.

In short, a pretty cool feature, but my guess is that using direct seeding may not be suitable for all scenarios. One customer example that is off the top of my head is big databases that run inside availability groups. I had the opportunity to migrate SAP databases (~1TB) to SQL Server 2014 and availability groups: using direct seeding would not be an option in this case. Indeed, direct seeding does not use compression by default (you can turn on compression for seeding with trace flag 9567), and we may easily imagine that seeding all that data across the network may take a long time. We may change this default behaviour with the trace flag, but at the cost of high CPU consumption. It is probably not a good idea if we plan to use the application immediately after adding the SAP database to the concerned availability group. In addition, using log stream compression may hurt performance with a heavy OLTP workload, because it introduces latency by design. There is a tradeoff to find here… Therefore, using the backup / restore process seems to be the best option, playing with optimization techniques like compression, increasing the number of backup media files, or changing the MAXTRANSFERSIZE and BUFFERCOUNT parameter values.

Another scenario that comes to mind concerns database migrations with thousands of databases at the same time. Which is the fastest way to migrate all these databases: adding them to the availability group and letting the system seed the database data across the network to each secondary, or copying the database files to each secondary and choosing to skip initial data synchronization? At this point I need to investigate further to answer.

Another important thing is that direct seeding is not part of the availability group wizard in the GUI. You have probably noticed that I don’t use the wizard in this blog post, and that is intentional. A Connect item has already been created by Brent Ozar here. Please feel free to vote!

See you!

The article SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding appeared first on Blog dbi services.

Services -- 3 : Monitoring Usage of Custom Services

Hemant K Chitale - Mon, 2016-06-20 10:04
In my previous blog post, I had demonstrated a few custom services created and started with DBMS_SERVICE.
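
As a quick recap, a custom service is created and started along these lines (a sketch; the network_name here is an assumption, see the previous post for the exact calls):

SQL> exec dbms_service.create_service(service_name => 'NEW_APP1', network_name => 'NEW_APP1');
SQL> exec dbms_service.start_service('NEW_APP1');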

Let's look at a couple of examples of monitoring usage of these services.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:51:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:23:50 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> execute dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant select_Catalog_role to hemant;

Grant succeeded.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:52:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:28:01 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t1 tablespace hemant as select * from dba_objects;

Table created.

SQL> insert into obj_t1 select * from obj_t1;

90935 rows created.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:53:54 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Jun 20 2016 22:52:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t2_small tablespace hemant as select * from obj_T1 where rownum < 11;

Table created.

SQL>

SQL> show user
USER is "SYSTEM"
SQL> select sid,serial#, to_char(logon_time,'DD-MON HH24:MI:SS'), service_name
2 from v$session
3 where username = 'HEMANT'
4 order by logon_time
5 /

SID SERIAL# TO_CHAR(LOGON_TIME,'DD-M
---------- ---------- ------------------------
SERVICE_NAME
----------------------------------------------------------------
61 50587 20-JUN 22:52:27
NEW_APP1

76 43919 20-JUN 22:53:54
FINANCE


SQL>


Thus, we can see that V$SESSION tracks the SERVICE_NAME in use: even though the USERNAME is the same in both sessions, the SERVICE_NAME is different.

SQL> col svc_name format a10
SQL> col stat_name format a25 trunc
SQL> select
2 con_id, service_name SVC_NAME, stat_name, value
3 from v$service_stats
4 where service_name in ('NEW_APP1','FINANCE')
5 and
6 (stat_name like 'DB%' or stat_name like '%block%' or stat_name like 'redo%')
7 order by 1,2,3
8 /

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 FINANCE DB CPU 168973
3 FINANCE DB time 771742
3 FINANCE db block changes 653
3 FINANCE gc cr block receive time 0
3 FINANCE gc cr blocks received 0
3 FINANCE gc current block receive 0
3 FINANCE gc current blocks receive 0
3 FINANCE redo size 100484

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 NEW_APP1 DB CPU 869867
3 NEW_APP1 DB time 17415363
3 NEW_APP1 db block changes 11101
3 NEW_APP1 gc cr block receive time 0
3 NEW_APP1 gc cr blocks received 0
3 NEW_APP1 gc current block receive 0
3 NEW_APP1 gc current blocks receive 0
3 NEW_APP1 redo size 25057520

16 rows selected.

SQL>


So even some statistics (unfortunately, not all -- the last time I checked, in 11.2) are reported at the Service level. Thus, I can see that the users of NEW_APP1 consumed more CPU and DB Time and generated more changes and redo than users of FINANCE! (Obviously, V$SERVICE_STATS reports statistics from the beginning of the instance, so you should use either StatsPack (I haven't verified StatsPack reporting of statistics by individual service) or AWR (if you have the Diagnostic Pack licence) or your own collection scripts to report statistics for a specific window of time.)
.
.
.


Categories: DBA Blogs

What’s in a name? – “Brittany” edition

RDBMS Insight - Mon, 2016-06-20 07:46

In my last post, I loaded US SSA names data into my dev instance to play with. In this post, I’ll play around with it a bit and take a look at the name “Brittany” and all its variant spellings.

I found nearly 100 different spellings of “Brittany” in the US SSA data thanks to a handy regexp:

SELECT name nm, SUM(freq) FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'
GROUP BY name
ORDER BY SUM(freq) DESC;
NM				SUM(FREQ)
------------------------------ ----------
Brittany			   357159
Brittney			    81648
Britney 			    34182
Brittani			    11703
Britany 			     6291
Brittni 			     5985
Brittanie			     4725
Britni				     4315
Brittny 			     3584
Brittaney			     3280
...
Bryttnee			       10
Britttany				7
Brytanie				7
Brittanae				6
Bryttnii				6
...
Brittanii				5
Brittiana				5
 
91 rows selected.

The regexp isn’t perfect. It returns a few uncommon names which aren’t pronounced “Brittany”: “Brittiana”, “Brittiani”, “Britane”, “Brittina”, “Britanya”, “Brittine” – and one I’m not sure about, “Brittnae”. But on the other hand, it did let me discover that 7 “Britttany”s applied for SSNs in 1990. Yes, that’s “Britttany” with 3 “T”s.

Fortunately, all the “non-Brittanys” the regexp returns are quite uncommon and not even in the top 20. So the regexp will do for a graph of the top spellings. Let’s get the data by year and look at the percentage of girls in each year named Brittany/Brittney/Britney/Brittani:

WITH n AS (SELECT name nm, YEAR yr, sex, freq FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'),
y AS (SELECT  YEAR yr, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT y.yr, 
decode(n.nm,'Brittany','Brittany', -- like Brittany Furlan
'Brittney','Brittney', -- like Brittney Griner
'Britney','Britney', -- like Britney Spears
'Brittani','Brittani', -- like Brittani Fulfer
'Other Brits') AS thename,
nvl(100*freq/tot,0) pct  FROM n, y 
WHERE  n.sex(+)=y.sex AND n.yr(+)=y.yr AND y.yr >= 1968
ORDER BY y.yr, nvl(n.nm,' ');

I graphed this in SQL Developer:
britts
From the graph it’s clear that “Brittany” is by far the most popular spelling, followed by “Brittney”. The sum of all Brittany-spellings peaked in 1989, but “Britney” has a sharp peak in 2000 – the year that singer Britney Spears released Oops I Did It Again, “one of the best-selling albums of all time” per Wikipedia.

This makes Brittany, however you spell it, a very early-90s-baby kind of name. “Brittany” was the #3 girls’ name in 1989, behind Jessica and Ashley, and was not nearly as popular in decades before or since. In subsequent posts I’ll look some more at names we can identify with specific decades.

Categories: DBA Blogs

APEX UT - Display Image in a List

Denes Kubicek - Mon, 2016-06-20 07:34
If using the universal theme, we can switch from a classic navigation bar to a list and display it where a navigation bar is usually displayed: at the top right of the page. The problem with the list is that it will escape any HTML code in the list name. If we want to display an image, we will get the HTML code instead. To get around that problem, we can use the "user defined attributes" for the list and put the image into the "Badge Value". The list will then display the image almost as expected. We will need to apply additional CSS to make the background of the "Badge Value" transparent and to adjust the font to the rest of the navigation bar list:

.t-Button--navBar .t-Button-badge {background-color: transparent !important; font-weight: 400 !important; font-size: 1.2rem !important}

This is then how it appears:



Here, we can add the "Badge Value":



Finally, here we can change our navigation bar from classic to list or the opposite:

Categories: Development

Oracle Announces Pfizer’s Selection of Oracle Cloud for Clinical Data Management and Trial Management across its Clinical Trial Portfolio

Oracle Press Releases - Mon, 2016-06-20 07:00
Press Release
Oracle Announces Pfizer’s Selection of Oracle Cloud for Clinical Data Management and Trial Management across its Clinical Trial Portfolio

Redwood Shores, Calif.—Jun 20, 2016

Oracle today announced that after a detailed review and selection process, Pfizer has selected Oracle Health Sciences InForm Cloud Service and the Oracle Siebel Clinical Trial Management and Monitoring Cloud Service to help manage and monitor its more than 300 clinical trials a year and continue to provide best-in-class solutions.
 
“Oracle Health Sciences InForm Cloud Service and Oracle Siebel Clinical Trial Management and Monitoring Cloud Service can simplify how we manage both operational and study data for in-house as well as outsourced studies,” said Rob Goodwin, vice president of Pfizer Global Product Development, Center of Excellence. “Clinical teams will be able to access study data through Oracle’s single platform cloud service, eliminating the need to send data back and forth to CROs, saving us time and reducing the cost of our clinical studies.”
 
By using Oracle Health Sciences InForm Cloud Service, Pfizer will be able to take advantage of over 100,000 investigator sites already trained in InForm, while increasing site data satisfaction.
 
The Oracle Health Sciences InForm Cloud Service will enable Pfizer to have greater control over its data and provide measurable efficiency and productivity gains in data management and remote monitoring. In addition, study templates and library management capabilities within Oracle Health Sciences InForm will enable Pfizer to accelerate the study-build process and result in faster trial implementation.
 
“Today, pharma companies are challenged by increasingly complex global clinical trials,” said Steve Rosenberg, senior vice president and general manager, Oracle Health Sciences. “With more than 15 years of electronic data capture experience in the clinical industry, Oracle Health Sciences helps leading pharmas — such as Pfizer — not only to standardize and optimize their clinical study processes, but also to be prepared with value-based clinical innovation solutions that enable them for the future of data collection.”
 
Oracle Health Sciences InForm Cloud Service includes both Oracle Health Sciences InForm and Oracle Health Sciences Data Management Workbench (DMW) to advance standardized end-to-end clinical data collection and management processes that help drive greater efficiencies for both internal and outsourced trials. Utilizing the platform, efficiencies can be gained through the seamless integration of Oracle Health Sciences InForm, Oracle Health Sciences Central Designer, and Oracle Health Sciences Data Management Workbench, enabling data standardization, streamlining collection, consolidation, cleaning, and transformation.
 
Pfizer will also use Oracle Siebel Clinical Trial Management and Monitoring Cloud Service, including Oracle Siebel Clinical Trial Management System, Oracle Health Sciences Clinical Development Analytics, and risk-based monitoring (RBM) functionality to provide new insights into trial management and help increase efficiency of the monitoring workforce. A single set of tools for internal and outsourced trials can simplify data exchange and the interface between investigator sites, CRO partners, and Pfizer. With the selection of Oracle’s platform and RBM solution, Pfizer will be able to execute risk-based monitoring through an integrated approach.
 
The Oracle Siebel Clinical Trial Management and Monitoring Cloud Service, which utilizes Oracle Siebel Clinical Trial Management System and Oracle Health Sciences Clinical Development Analytics, is a solution that combines standardized comprehensive clinical trial capabilities, integrated analytic tools, and data security with a cloud service that enables organizations to manage clinical trials effectively and economically, as well as improve CRO and sponsor relationships from early- to late-stage clinical trials.
Contact Info
Samantha Thai
Oracle
+1.510.917.5564
samantha.thai@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Samantha Thai

  • +1.510.917.5564

Oracle Gives Partners a Fast Path to Cloud

Oracle Press Releases - Mon, 2016-06-20 07:00
Press Release
Oracle Gives Partners a Fast Path to Cloud
Independent Software Vendors Gain New Revenue Opportunities through Oracle Cloud Marketplace

Redwood Shores, Calif.—Jun 20, 2016

The conversion to the cloud is one of the most significant IT transformations in history. IDC predicts that by 2018, at least half of IT spending will be cloud based. To help customers transition to the cloud with proven enterprise applications, while accelerating time-to-market for partners, Oracle PartnerNetwork (OPN) today unveiled Oracle Cloud Platform Ready for Independent Software Vendors (ISVs).

The new offering enables ISV partners to quickly register and publish their Oracle Cloud-compatible applications on the Oracle Cloud Marketplace, allowing them to fast-track new business opportunities in the cloud.  Partners can register here.

“Any new product we develop will go to the Oracle Cloud Marketplace first. We envision in the future that most of our customers will research, validate and request our solutions directly from the Oracle Cloud Marketplace,” said Charles Farnell, CEO, Ventureforth.

Oracle Cloud Platform Ready provides OPN members at Silver level or higher a simple way to create an initial application listing on Oracle Cloud Marketplace in the Platform and Infrastructure sections in mere minutes. Any partner application that supports an Oracle Cloud-compatible operating system or technology product can run on Oracle Cloud without the need for a rewrite, and at no additional cost. To speed time-to-market, partners can quickly request access to Oracle Cloud Specialists and other technical resources directly from the Oracle Cloud Platform Ready registration. Additionally, ISVs with an Oracle Cloud Marketplace listing are eligible for the Oracle Cloud ISV Partner of the Year award and may apply for OPN Cloud Standard program benefits, including easy access to Oracle Cloud development and test environments.

“Oracle's Cloud Specialist team provided in-depth technical support during our proof of concept testing that allowed us to create automated deployment scenarios quickly and efficiently,” said Brent Rhymes, Executive Vice President Enterprise Sales & Marketing, Synacor/Zimbra. “Managed Service Providers can create new business opportunities by becoming a Zimbra Hosting Provider in the Oracle Cloud, while customers can deploy and manage their own collaboration solution within minutes. In each case, the flexibility of Zimbra plus the Oracle Cloud means users can start small and grow, or start big and get bigger.” 

In addition to quick access to Oracle Cloud Marketplace and development resources, partners engaging in the new offering will be able to reach Oracle’s expansive customer base, field sales, 25,000 global partners, and 15 million developers through enhanced marketing and promotion efforts.

“The cloud represents a huge opportunity for our ISV partner community,” said Dan Miller, Senior Vice President of ISV, OEM and Java Sales, Oracle. “Through the Oracle Cloud Marketplace, and joint initiatives like Oracle Cloud Platform Ready, we are enabling ISV partners to focus more on their unique innovation, while leveraging the Oracle Cloud to get their solutions in front of a broad market of potential users quickly and at scale.”

Contact Info
Candice van der Laan
Oracle
+1.650.506.0814
candice.van.der.laan@oracle.com
About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) Specialized is the latest version of Oracle's partner program that provides partners with tools to better develop, sell and implement Oracle solutions. OPN Specialized offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to differentiate through Specializations. Specializations are achieved through competency development, business results, expertise and proven success. To find out more visit: http://www.oracle.com/partners.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Candice van der Laan

  • +1.650.506.0814

Plan Shapes

Jonathan Lewis - Mon, 2016-06-20 06:58

There are a number of articles, webinars, and blogs online about how to read execution plans, but many of them seem to stop after the minimum description of the simplest type of plan, so I thought I’d throw out a brief comment on a couple of the slightly more complicated things that are likely to appear fairly commonly, because you sometimes find plans with very similar shapes but extremely different interpretations.

First: select with scalar subqueries in the select list (there’s no need to worry about what the table definitions look like):


rem
rem     Script:         plan_shapes.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016
rem

select
        id, n1,
        (select /*+  no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+  no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;


--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |     20 |00:00:00.01 |       8 |
|   1 |  SORT AGGREGATE                      |       |     20 |      1 |     20 |00:00:00.01 |      63 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     20 |      2 |     40 |00:00:00.01 |      63 |
|*  3 |    INDEX RANGE SCAN                  | T2_I1 |     20 |      2 |     40 |00:00:00.01 |      23 |
|   4 |  SORT AGGREGATE                      |       |     20 |      1 |     20 |00:00:00.01 |      83 |
|   5 |   TABLE ACCESS BY INDEX ROWID BATCHED| T3    |     20 |      3 |     60 |00:00:00.01 |      83 |
|*  6 |    INDEX RANGE SCAN                  | T3_I1 |     20 |      3 |     60 |00:00:00.01 |      23 |
|   7 |  TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |     21 |     20 |00:00:00.01 |       8 |
|*  8 |   INDEX RANGE SCAN                   | T1_I1 |      1 |     21 |     20 |00:00:00.01 |       4 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."ID"=:B1)
   6 - access("T3"."ID"=:B1)
   8 - access("ID">=100 AND "ID"<=119)

We have a select statement, with two columns in the select list generated by (correlated) scalar subqueries.

The overall shape of the plan shows the driving query as the last child plan for the SELECT (operations 7-8). The first and second child plans are the plans for the two scalar subqueries in turn (and the order the sub-plans appear is the order of the scalar subqueries in the select list). In this case the main query returned 20 rows (A-Rows=20), and the scalar subqueries executed 20 times each. There are a few other details we could mention, but the key feature of the plan is that the driver is the last sub-plan.
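
A side note on reproducing these plans: the Starts, A-Rows and Buffers columns come from rowsource execution statistics, so – as a minimal sketch, assuming you have the privileges to query the dynamic performance views that dbms_xplan reads – you could add the gather_plan_statistics hint to the query and then report the plan from memory with the ‘allstats last’ format option:

select  /*+ gather_plan_statistics */
        id, n1,
        (select /*+  no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+  no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));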

Second: update with scalar subqueries:


update t1
set
        n1 = (select max(n1) from t2 where t2.id = t1.n1),
        v1 = (select max(v1) from t3 where t3.id = t1.n1)
where
        id between 1000 and 1999
;

---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                      |       |      1 |        |      0 |00:00:00.13 |   10361 |
|   1 |  UPDATE                               | T1    |      1 |        |      0 |00:00:00.13 |   10361 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |   1001 |   1000 |00:00:00.01 |      22 |
|*  3 |    INDEX RANGE SCAN                   | T1_I1 |      1 |   1001 |   1000 |00:00:00.01 |       4 |
|   4 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.04 |    3672 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    916 |      2 |   1832 |00:00:00.03 |    3672 |
|*  6 |     INDEX RANGE SCAN                  | T2_I1 |    916 |      2 |   1832 |00:00:00.01 |    1840 |
|   7 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.05 |    4588 |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED| T3    |    916 |      3 |   2748 |00:00:00.04 |    4588 |
|*  9 |     INDEX RANGE SCAN                  | T3_I1 |    916 |      3 |   2748 |00:00:00.01 |    1840 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1000 AND "ID"<=1999)
   6 - access("T2"."ID"=:B1)
   9 - access("T3"."ID"=:B1)


In this statement we update two columns by executing (correlated) scalar subqueries. The most important feature of interpreting this plan is that it is the exact opposite of the earlier select statement. In this plan the first subplan is the driving part of the statement – it’s the part of the plan that tells us how to find rows to be updated (and we find 1,000 of them); the 2nd and 3rd sub-plans correspond to the subqueries in the two columns whose value we set. In this case we happen to get some benefit from scalar subquery caching so the two subqueries each run 916 times. (The fact that both subqueries run the same number of times is not a coincidence – the caching benefit is dependent on the value(s) used for driving the subqueries, and that’s the t1.n1 column in both cases.)
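
As a rough sanity check on the caching effect – a hypothetical query against the same data, not part of the original test – you could count the distinct driving values in the updated range. With a perfectly effective cache the Starts figure for each subquery would match this count; in practice collisions in the cache’s hash table tend to push Starts a little higher:

select  count(distinct n1)      distinct_driving_values
from    t1
where   id between 1000 and 1999
;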

Finally: because people do all sorts of complicated things, and it’s easy to get deceived if you read the plan without seeing the SQL, one more example (starting with the plan):


---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                      |       |      1 |        |      0 |00:00:00.14 |   11257 |
|   1 |  UPDATE                               | T1    |      1 |        |      0 |00:00:00.14 |   11257 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |   1001 |   1000 |00:00:00.01 |      22 |
|*  3 |    INDEX RANGE SCAN                   | T1_I1 |      1 |   1001 |   1000 |00:00:00.01 |       4 |
|   4 |   SORT AGGREGATE                      |       |   1103 |      1 |   1103 |00:00:00.06 |    5519 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T3    |   1103 |      3 |   3306 |00:00:00.04 |    5519 |
|*  6 |     INDEX RANGE SCAN                  | T3_I1 |   1103 |      3 |   3306 |00:00:00.01 |    2213 |
|   7 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.11 |    9191 |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    916 |      2 |   1832 |00:00:00.03 |    3672 |
|*  9 |     INDEX RANGE SCAN                  | T2_I1 |    916 |      2 |   1832 |00:00:00.01 |    1840 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1000 AND "ID"<=1999)
   6 - access("T3"."ID"=:B1)
   9 - access("T2"."ID"=:B1)


This plan looks very like the previous plan – it’s an update so the first sub-plan will be identifying the rows to be updated, but then what ?

The easiest assumption – always one worth considering, providing you remember that there are other possibilities – is that we have two subqueries executing for each row we update – but in this example there’s a little clue in the numbers that this isn’t what the statement does. The driving query (operations 2 – 3) identifies 1,000 rows, so how can operations 4 – 6 start 1,103 times each ? Something more complicated is going on.

I’ve combined the features of the first two examples. I have a single subquery in the set clause, but it contains a scalar subquery in its select list – so operations 4 – 9 are the sub-plan for a single select statement – and a select statement with a scalar subquery in the select list puts the main plan last (operations 7 – 9) and its scalar subquery sub-plan first (operations 4 – 6). Here’s the SQL:


update t1
set
        (n1,v1) = (
                select
                        max(n1),
                        max((select /*+  no_unnest */ max(v1) from t3 where t3.id = t2.n1))
                from    t2
                where   t2.id = t1.n1
        )
where
        id between 1000 and 1999
;

What happens here is that the driving body of the update statement identifies 1,000 rows so the scalar subquery against t2 should execute 1,000 times; thanks to scalar subquery caching, though, it only executes 916 times. Each time it executes it finds 2 rows, and for each of those rows it executes the scalar subquery against t3 – potentially 1,832 (916 × 2) executions – which, fortunately, also benefits from its own scalar subquery caching and so runs only 1,103 times in total.

I could go on, of course, with increasingly complex examples – for example scalar subqueries that contain decode() calls with scalar subqueries as their inputs; where clauses which compare the results of scalar subqueries, and so on. It can get quite difficult to see, even with the SQL in front of you, what the plan is trying to tell you so I’ll leave you with one thought: always use the qb_name() hint to name every single subquery so that, if interpretation of the plan gets a bit difficult, you can use the ‘alias’ formatting option in the call to dbms_xplan to connect each table reference in the plan with the query block it came from in the query.
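
As a minimal sketch of that approach (the query block names are arbitrary labels I’ve made up for this illustration):

select  /*+ qb_name(main) */
        id, n1,
        (select /*+ qb_name(get_t2) no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+ qb_name(get_t3) no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last alias'));

With the ‘alias’ keyword in the format string the report includes a “Query Block Name / Object Alias” section, so each operation in the plan can be traced back to main, get_t2, or get_t3 even when the overall shape of the plan is ambiguous.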

 

 


Oracle and Mellanox to Collaborate on High Speed Cloud Networking Standards

Oracle Press Releases - Mon, 2016-06-20 06:00
Press Release
Oracle and Mellanox to Collaborate on High Speed Cloud Networking Standards
Companies to Work on Open Standards, Interoperability, and Backward Compatibility for InfiniBand Products

Frankfurt – ISC—Jun 20, 2016

Oracle and Mellanox today announced a partnership to drive interoperability between their products and to jointly develop further industry standards for InfiniBand—a high-speed cloud networking technology.

Products supporting the InfiniBand standard not only provide extreme bandwidth and ultra-low latency for fast, agile, and secure cloud infrastructure, but they are also based upon industry standards and available from multiple vendors. Oracle and Mellanox Enhanced Data Rate (EDR) 100G InfiniBand products are engineered to the InfiniBand™ Architecture Specification Release 1.3 and allow customers to deploy a 100Gb/s fabric that is backwards compatible with previous generations of InfiniBand-based systems (FDR, QDR, DDR, and SDR), enabling customers to protect their investments in InfiniBand-enabled applications.

Oracle’s core strategy is to enable its cloud solutions and Engineered Systems with open technologies that deliver the highest performance, efficiency, and security. Due to its superior performance, efficiency, and scalability, InfiniBand is a dominant fabric for building supercomputers, which rapidly process massive amounts of data. Oracle has chosen InfiniBand as the foundational technology for Engineered Systems to provide unparalleled performance and scale to Oracle’s suite of business-critical applications.

“Customers using InfiniBand as the interconnect for clouds and high performance computing can be certain that as new speeds and features are incorporated into the InfiniBand architecture that they will be able to continue to work with the supplier of their choice and that new products will seamlessly integrate with their existing InfiniBand infrastructures,” said Raju Penumatcha, senior vice president, Netra and Networking, Oracle. “This continued partnership represents an enduring commitment to provide customers with the technologies and standards needed for true enterprise-class cloud infrastructures that enable them to accelerate application performance and quickly respond to changing business needs.”

“With a standards-based architecture, and the most advanced roadmap that is guided by the InfiniBand Trade Association, and a mature, open source software stack, InfiniBand offers customers the best of performance and interoperability with the support of a large community of suppliers, developers, and users,” said Gilad Shainer, vice president, marketing at Mellanox Technologies. “This partnership enables us to continue to optimize and expand InfiniBand’s role as a key technology for any cloud environment.”

Oracle EDR InfiniBand Fabric is on display this week at: ISC High Performance (Booth 910). For more information, please visit: www.oracle.com/goto/InfiniBand.

Mellanox EDR InfiniBand solutions are on display this week at the International Supercomputing Conference (booth 905). For more information, please visit: www.mellanox.com.

Contact Info
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage, telecom and financial services. More information is available at: http://www.mellanox.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Sarah Fraser

  • +1.650.743.0660
