
Feed aggregator

How to Add Two Ethernet Interfaces with Azure VM

Pythian Group - Thu, 2015-12-31 09:55

 

Recently, while working with my test VM on Azure, I needed a second network interface on the VM, but found no way to add one. I was using a standard A2-size Oracle Linux VM. I searched the GUI and its settings but could find no obvious way to add an interface. This surprised me, so I kept searching and found some blog posts describing how to do it using Azure PowerShell. It turns out there is no way to add an interface once the VM has already been created, and you have to use Azure PowerShell to build a VM with more than one. So, if you are a Mac user, as I am, and have only the Azure CLI for Mac, you need to find a Windows box. I hope this will be fixed in future releases so that networks can be managed from the GUI or from a command line tool provided for Mac, Linux or any other platform. In this post I will explain how you can create a new VM with two or more NICs.

 
First, we need a Windows machine to run Azure PowerShell commands. I created a small Windows box on Azure itself, explicitly to run PowerShell when I need it. I chose a basic A1-size Windows 2012 instance and installed PowerShell there. It worked fine, except that you need to be careful if you use more than one monitor with the RDP client for Mac. By default it tried to use all monitors, and in my case I got one and a half screens (one was cut in half because it could not fit on my monitor). I unchecked "Use all monitors" in the connection settings of the RDP client. With the first obstacle resolved, I continued with the next steps.

 
Next, we will need an "ImageName" to create the new machine. It can be looked up using the "Get-AzureVMImage" command. For Oracle Linux it looks like this:

PS C:\> Get-AzureVMImage | Where-Object { $_.Label -like "*Oracle Linux*" }
VERBOSE: 5:50:58 PM - Begin Operation: Get-AzureVMImage

ImageName : c290a6b031d841e09f2da759bbabe71f__Oracle-Linux-6-12-2014
OS : Linux
MediaLink :
…………………………

Using the given ImageName we can now proceed. Keep in mind that you cannot create a VM with two or more NICs on an A2-size box: you need at least a Large (A3) instance for two NICs, or an ExtraLarge (A4) if you need four.
Let’s set up the image name:

PS C:\> $image = Get-AzureVMImage -ImageName "c290a6b031d841e09f2da759bbabe71f__Oracle-Linux-6-12-2014"

You need to set your subscription ID for the session in PowerShell and specify a storage account:

PS C:\> Set-AzureSubscription -SubscriptionId "321265e2-ffb5-66f9-9e07-96079bd7e0a6" -CurrentStorageAccount "oradb5"

Create a custom config for our VM:

PS C:\> $vm = New-AzureVMConfig -name "oradb5" -InstanceSize "Large" -Image $image.ImageName
PS C:\> Add-AzureProvisioningConfig -VM $vm -Linux -LinuxUser "otochkin" -Password "welcome1"

I’ve created a virtual network “Multi-VNet” with two subnets for my VM, named “Public” and “Private”. You can create the virtual network and subnets in the GUI portal or from the command line. I am going to use those subnets for my NICs.
Adding the first subnet to our VM configuration:

PS C:\> Set-AzureSubnet -SubnetName "Public" -VM $vm

Setting a static IP (10.0.1.11, within the "Public" subnet) for this interface:

PS C:\> Set-AzureStaticVNetIP -IPAddress 10.0.1.11 -VM $vm

Adding the second interface to the configuration:

PS C:\> Add-AzureNetworkInterfaceConfig -name "eth1" -SubnetName "Private" -StaticVNetIPAddress 10.0.2.11 -VM $vm

And we can deploy our custom VM now:

PS C:\> New-AzureVM -ServiceName "test-1" -VNetName "Multi-VNet" -VM $vm
WARNING: No deployment found in service: 'test-1'.
VERBOSE: 6:59:03 PM - Begin Operation: New-AzureVM - Create Deployment with VM oradb5

OperationDescription OperationId OperationStatus
-------------------- ----------- ---------------
New-AzureVM b7fcb2de-eac7-3684-aa8b-d1e9addc4587 Succeeded
VERBOSE: 7:01:08 PM - Completed Operation: New-AzureVM - Create Deployment with VM oradb5
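For reference, the same configuration can also be scripted as a single PowerShell pipeline, since each of these cmdlets accepts the VM configuration object from the pipeline and passes it on. The sketch below is untested and simply chains the commands used above; it assumes the same image, credentials, subnet and virtual network names, and that Set-AzureSubscription has already been run as shown earlier:

# Build and deploy the two-NIC VM in one pipeline (sketch, same names as above)
$image = Get-AzureVMImage -ImageName "c290a6b031d841e09f2da759bbabe71f__Oracle-Linux-6-12-2014"

New-AzureVMConfig -Name "oradb5" -InstanceSize "Large" -ImageName $image.ImageName |
    Add-AzureProvisioningConfig -Linux -LinuxUser "otochkin" -Password "welcome1" |
    Set-AzureSubnet -SubnetName "Public" |
    Set-AzureStaticVNetIP -IPAddress 10.0.1.11 |
    Add-AzureNetworkInterfaceConfig -Name "eth1" -SubnetName "Private" -StaticVNetIPAddress 10.0.2.11 |
    New-AzureVM -ServiceName "test-1" -VNetName "Multi-VNet"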

The VM is created and you can check it and connect to it. You no longer need your Windows box and can shut it down to save money (a deallocating stop is sketched after the listing below):

MacBook:~ otochkin$ azure vm list -v
info: Executing command vm list
verbose: Getting virtual machines
data: Name Status Location DNS Name IP Address
data: -------- ------------------ -------- --------------------- ----------
data: winman1 ReadyRole East US winman1.cloudapp.net 10.2.0.16
data: oradb5 ReadyRole East US test-1.cloudapp.net 10.0.1.11
info: vm list command OK
MacBook:~ otochkin$
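Since the Windows helper box (winman1 in the listing above) is only needed for occasional PowerShell sessions, you can stop it in a deallocated state so that it no longer accrues compute charges. A minimal PowerShell sketch, assuming the cloud service for winman1 is also named "winman1" (check with Get-AzureVM if yours differs); Stop-AzureVM deallocates the VM by default unless -StayProvisioned is specified:

PS C:\> Stop-AzureVM -ServiceName "winman1" -Name "winman1"

You can start it again with Start-AzureVM whenever you need to run Azure PowerShell commands.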

You can connect to your freshly created VM and check the network:

[root@oradb5 ~]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:0D:3A:11:A3:71
inet addr:10.0.1.11 Bcast:10.0.1.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:337 errors:0 dropped:0 overruns:0 frame:0
TX packets:353 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:54281 (53.0 KiB) TX bytes:49802 (48.6 KiB)

eth1 Link encap:Ethernet HWaddr 00:0D:3A:11:AC:92
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

[root@oradb5 ~]#
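Note that in the output above eth1 is present but has no IP address and is not up yet. One common way to bring it up on Oracle Linux 6 - a sketch assuming the image uses the standard Red Hat-style network scripts - is to create an ifcfg-eth1 file with DHCP enabled and bring the interface up; Azure's DHCP service then hands the interface the reserved 10.0.2.11 address:

[root@oradb5 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Azure assigns the reserved static VNet IP (10.0.2.11) via DHCP
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=dhcp
[root@oradb5 ~]# ifup eth1

After that, ifconfig eth1 should report the 10.0.2.11 address.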

And that’s it. In summary, creating more than one interface on an Azure VM is not difficult, but I think the ability to manage network interfaces would be a good addition to the GUI (Azure portal). In my next blog post I will look at other aspects of using Oracle and Linux on Microsoft Azure. Stay tuned.

 

Discover more about our expertise in the Cloud.

Categories: DBA Blogs

Good Habits & Gratitude

Pythian Group - Thu, 2015-12-31 09:40

 

One of my favourite books is The Power of Habit: Why We Do What We Do in Life and Business. With the New Year ahead and many of us focused on how we can continue to “up our game” by creating or changing our habits, it serves as a great read!

When most of us reflect on our habits, we tend to focus on our “bad” habits as opposed to the habits that are “good” or positive. I try to take a different approach by considering all the things that I do well, and incorporating those habits into my daily routine, including being grateful.

Two years ago I received the gift of a leather bound book. There were many use cases for the book, including note taking for work, collecting my favourite quotes, random sketches or jotting down ideas. I chose to use my book as a gratitude journal. At that time, a number of books about gratitude and the art of keeping a gratitude journal hit several bestseller lists. While I didn’t begin one then, I was keen on the new daily habit of documenting my gratitude.

As far as new habits go, this one was fairly easy to adopt:

  1. Find journal.
  2. Have journal in a convenient place.
  3. Pick a time of day to write in journal.
  4. Be grateful.
  5. Write it down.

My entries have covered everything from lessons I’ve learned, celebrating wins at work, special moments I’ve experienced, feelings I’ve felt, acknowledging good fortunes like health and wellness, etc. On days when I’m really pressed for time, I mindfully think about what I’m grateful for and log it in the next day. Sometimes the entries are short like a note about health, happiness, family, friends, a chocolate brownie, a great book, warm boots, etc.

This habit continues to help me remember and recognize the things that really matter to me. In times of stress or challenge my journal entries serve as a reminder that it’s important to take a few moments to find something to be grateful for. And while you don’t need a journal to be grateful, it’s wonderful to flip back and read what you were grateful for eight months ago, six weeks ago or even four days ago.

Today I’m grateful for the free speech that allows me to write this blog post, the winter tires on my car, the collective talents of the Pythian HR team, the amazing 400+ colleagues in 36 countries whom we get to work with every day, and the opportunities that lie ahead for all of us in 2016.

What new habit(s) will you form in 2016?

Categories: DBA Blogs

Oracle as the new IBM — has a long decline started?

DBMS2 - Thu, 2015-12-31 03:15

When I find myself making the same observation fairly frequently, that’s a good impetus to write a post based on it. And so this post is based on the thought that there are many analogies between:

  • Oracle and the Oracle DBMS.
  • IBM and the IBM mainframe.

And when you look at things that way, Oracle seems to be swimming against the tide.

Drilling down, there are basically three things that can seriously threaten Oracle’s market position:

  • Growth in apps of the sort for which Oracle’s RDBMS is not well-suited. Much of “Big Data” fits that description.
  • Outright, widespread replacement of Oracle’s application suites. This is the least of Oracle’s concerns at the moment, but could of course be a disaster in the long term.
  • Transition to “the cloud”. This trend amplifies the other two.

Oracle’s decline, if any, will be slow — but I think it has begun.

 

Oracle/IBM analogies

There’s a clear market lead in the core product category. IBM was dominant in mainframe computing. While not as dominant, Oracle is definitely a strong leader in high-end OLTP/mixed-use (OnLine Transaction Processing) RDBMS.

That market lead is even greater than it looks, because some of the strongest competitors deserve asterisks. Many of IBM’s mainframe competitors were “national champions” — Fujitsu and Hitachi in Japan, Bull in France and so on. Those were probably stronger competitors to IBM than the classic BUNCH companies (Burroughs, Univac, NCR, Control Data, Honeywell).

Similarly, Oracle’s strongest direct competitors are IBM DB2 and Microsoft SQL Server, each of which is sold primarily to customers loyal to the respective vendors’ full stacks. SAP is now trying to play a similar game.

The core product is stable, secure, richly featured, and generally very mature. Duh.

The core product is complicated to administer — which provides great job security for administrators. IBM had JCL (Job Control Language). Oracle has a whole lot of manual work overseeing indexes. In each case, there are many further examples of the point. Edit: A Twitter discussion suggests the specific issue with indexes has been long fixed.

Niche products can actually be more reliable than the big, super-complicated leader. Tandem Nonstop computers were super-reliable. Simple, “embeddable” RDBMS — e.g. Progress or SQL Anywhere — in many cases just work. Still, if you want one system to run most of your workload 24×7, it’s natural to choose the category leader.

The category leader has a great “whole product” story. Here I’m using “whole product” in the sense popularized by Geoffrey Moore, to encompass ancillary products, professional services, training, and so on, from the vendor and third parties alike. There was a time when most serious packaged apps ran exclusively on IBM mainframes. Oracle doesn’t have quite the same dominance, but there are plenty of packaged apps for which it is the natural choice of engine.

Notwithstanding all the foregoing, there’s strong vulnerability to alternative product categories. IBM mainframes eventually were surpassed by UNIX boxes, which had grown up from the minicomputer and even workstation categories. Similarly, the Oracle DBMS has trouble against analytic RDBMS specialists, NoSQL, text search engines and more.

 

IBM’s fate, and Oracle’s

Given that background, what does it teach us about possible futures for Oracle? The golden age of the IBM mainframe lasted 25 or 30 years — 1965-1990 is a good way to think about it, although there’s a little wiggle room at both ends of the interval. Since then it’s been a fairly stagnant cash-cow business, in which a large minority or perhaps even small majority of IBM’s customers have remained intensely loyal, while others have aligned with other vendors.

Oracle’s DBMS business seems pretty stagnant now too. There’s no new on-premises challenger to Oracle now as strong as UNIX boxes were to IBM mainframes 20-25 years ago, but as noted above, traditional competitors are stronger in Oracle’s case than they were in IBM’s. Further, the transition to the cloud is a huge deal, currently in its early stages, and there’s no particular reason to think Oracle will hold any more share there than IBM did in the transition to UNIX.

Within its loyal customer base, IBM has been successful at selling a broad variety of new products (typically software) and services, often via acquired firms. Oracle, of course, has also extended its product lines immensely from RDBMS, to encompass “engineered systems” hardware, app server, apps, business intelligence and more. On the whole, this aspect of Oracle’s strategy is working well.

That said, in most respects Oracle is weaker at account control than peak IBM.

  • Oracle’s core competitors, IBM and Microsoft, are stronger than IBM’s were.
  • DB2 and SQL Server are much closer to Oracle compatibility than most mainframes were to IBM. (Amdahl is an obvious exception.) This is especially true as of the past 10-15 years, when it has become increasingly clear that reliance on stored procedures is a questionable programming practice. Edit: But please see the discussion below challenging this claim.
  • Oracle (the company) is widely hated, in a way that IBM generally wasn’t.
  • Oracle doesn’t dominate a data center the way hardware monopolist IBM did in a hardware-first era.

Above all, Oracle doesn’t have the “Trust us; we’ll make sure your IT works” story that IBM did. Appliances, aka “engineered systems”, are a step in that direction, but those are only — or at least mainly — to run Oracle software, which generally isn’t everything a customer has.

 

But think of the apps!

Oracle does have one area in which it has more account control power than IBM ever did — applications. If you run Oracle apps, you probably should be running the Oracle RDBMS and perhaps an Exadata rack as well. And perhaps you’ll use Oracle BI too, at least in use cases where you don’t prefer something that emphasizes a more modern UI.

As a practical matter, most enterprise app rip-and-replace happens in a few scenarios:

  • Merger/acquisition. An enterprise that winds up with different apps for the same functions may consolidate and throw the loser out. I’m sure Oracle loses a few customers this way to SAP every year, and vice-versa.
  • Drastic obsolescence. This can take a few forms, mainly:
    • Been there, done that.
    • Enterprise outgrows the capabilities of the current app suite. Oracle’s not going to lose much business that way.
    • Major platform shift. Going forward, that means SaaS/”cloud” (Software as a Service).

And so the main “opportunity” for Oracle to lose application market share is in the transition to the cloud.

 

Putting this all together …

A typical large-enterprise Oracle customer has 1000s of apps running on Oracle. The majority would be easy to port to some other system, but the exceptions to that rule are numerous enough to matter — a lot. Thus, Oracle has a secure place at that customer until such time as its applications are mainly swept away and replaced with something new.

But what about new apps? In many cases, they’ll arise in areas where Oracle’s position isn’t strong.

  • New third-party apps are likely to come from SaaS vendors. Oracle can reasonably claim to be a major SaaS vendor itself, and salesforce.com has a complex relationship with the Oracle RDBMS. But on the whole, SaaS vendors aren’t enthusiastic Oracle adopters.
  • New internet-oriented apps are likely to focus on customer/prospect interactions (here I’m drawing the (trans)action/interaction distinction) or even more purely machine-generated data (“Internet of Things”). The Oracle RDBMS has few advantages in those realms.
  • Further, new apps — especially those that focus on data external to the company — will in many cases be designed for the cloud. This is not a realm of traditional Oracle strength.

And that is why I think the answer to this post’s title question is probably “Yes”.

 

Related links

A significant fraction of my posts, in this blog and Software Memories alike, are probably at least somewhat relevant to this sweeping discussion. Particularly germane is my 2012 overview of Oracle’s evolution. Other posts to call out are my recent piece on transitioning to the cloud, and my series on enterprise application history.

Categories: Other

Death and taxes - and Oracle 11gR2?

Andrew Clarke - Thu, 2015-12-31 02:48
Oracle Premier Support for the 11gR2 Database expired this time last year. However, Oracle announced they would waive the fees for Extended Support for 2015. This was supposed to give 11gR2 customers an additional twelve months to migrate to 12c. So, twelve months on, how many of those laggards are still on 11gR2? My entirely unscientific guess is: most of them. Why else would Oracle announce the extension of the Extended Support fee waiver until May 2017?

But 11gR2's continued longevity should not be a surprise.

For a start, it is a really good product. It is fully-featured and extremely robust. It offers pretty much everything an organization might want from a database. Basically it's the Windows XP of RDBMS.

The marketing of 12c has compounded this. It has focused on the "big ticket" features of 12c: Cloud, Multi-tenancy and In-Memory Database. Which is fair enough, except that these are all chargeable extras. So to get any actual benefits from upgrading to 12c requires laying out additional license fees, which is not a popular message these days.

And then there's Big Data. The hype has swept up lots of organizations who are now convinced they should be replacing their databases with Hadoop. They have heard the siren song of free software and vendor independence. In reality, most enterprises' core business rests on structured data for which they need an RDBMS, and their use cases for Big Data are marginal. But right now, it seems easier to make a business case for the shiny new toys than for spending more on the existing estate.

So how can Oracle shift organizations onto 12c? They need to offer compelling positive reasons, not just the fear of loss of Support. My suggestion would be to make a couple of the Options part of the core product. For instance, freeing Partitioning and In-Memory Database would make Oracle 12c database a much more interesting proposition for many organizations.

Happy New Year 2016!

The Oracle Instructor - Thu, 2015-12-31 02:08

Another year has passed. I take the opportunity to thank you for visiting and to wish you a Happy New Year 2016!

Happy New Year 2016!

In case you didn’t recognize it: that is supposed to look like fireworks, The Oracle Instructor style ;-)

2015 was a great year for uhesse.com with 345,000+ views and the crossing of the one million hits threshold. Top countries with more than 4,000 views in 2015 were

Visitors 2015

Visitors came from 202 countries; even China is on the list this year with 1,500+ views.

Hope to see all of you again here in 2016 :-)


Categories: DBA Blogs

Kickstart Your 2016 with Rittman Mead’s Data Integration Training

Rittman Mead Consulting - Wed, 2015-12-30 19:38

Happy Holidays and a Happy New Year to all! As you begin your 2016 this January, it’s time to start planning your team’s data integration training. Look no further than Rittman Mead’s Oracle Data Integrator training course! We offer a 4 day Oracle Data Integrator 12c Bootcamp for those looking to take advantage of the latest and greatest features in ODI 12c. We also still teach our 5 day Oracle Data Integrator 11g Bootcamp, as we know sometimes it can be difficult to upgrade to the latest release and new data warehouse team members need to be brought up to speed on the product. ODI 11g is also still very much alive in Oracle Business Intelligence Applications, being the ETL technology for the 11g release of the product suite.

ODI12c training

Customized Data Integration Training

BI Apps 11g training has been a hot topic from the data integration perspective over the last couple of years. Rittman Mead have delivered custom BI Apps training for ODI developers several times just within the last year, prompting us to add a new public training course specific to this topic to our public schedule. This course walks attendees through the unique relationship between OBIEE and ODI 11g as the data integration technology, including configuration, load plan generation, and ETL customization. If you have an Oracle Business Intelligence Applications 11g team looking to enhance their ODI 11g skills, take a look at the new ODI for BI Applications course description.

The customization of training does not just apply to BI Applications, but to all aspects of Oracle Data Integration. Whether adding more details around Oracle GoldenGate installation and maintenance to the ODI 12c course, or learning about Oracle EDQ integration, the Rittman Mead data integration team of experts can work to deliver the course so your team gains the most value from its investment in Oracle Data Integration technologies. Just ask! Reach out and we can work together to create a custom course to fit your needs.

Public or Onsite Training?

Rittman Mead has several dates for each course, scheduled to be delivered out of our offices in either Atlanta, GA or Brighton, UK. Take a look here for our ODI 12c bootcamp, ODI 11g bootcamp, and ODI for BI Apps Developers offerings in the US. Look here for the same in the UK/Europe (Note: as of the writing of this blog post, the 2016 UK/Europe schedule had not been released). We also offer the same courses for delivery onsite at your company’s office, allowing our experts to come to you! Quite often our clients will combine consulting and training, ensuring they get the most out of their investment in our team of experts.

Why Rittman Mead?

Many folks in the Business Intelligence and Data Integration profession who are looking for a consulting company might think Rittman Mead only work on extremely challenging projects based on the depth of knowledge and type of problems (and solutions) we offer via our blog. The fact is, most of our projects are the “standard” data warehouse or business intelligence reporting implementations, with some of these additional challenges coming along the way. Why do I bring that up? Well, if you’re looking for the experts in Oracle Data Integration technology, with experience in both project implementation and solving challenging technical problems, then you’ve come to the right place to learn about ODI.

Unlike many other companies offering training, we don’t have a staff of educators on hand. Our trainers are the same folks that deliver projects, using the technology you’re interested in learning about, on a day-to-day basis. We offer you real world examples as we walk through our training slide deck and labs. Need to know why Oracle GoldenGate is an integral part of real-time data integration? Let me tell you about my latest client where I implemented GoldenGate and ODI. Want to know what to look out for when installing the JEE Agent in ODI 12c? We’ve done that many times – and know the tricks necessary to get it all working.

Our experts, such as Jérôme Françoisse, Becky Wagner, Mark Rittman, myself, and many others, all have multiple years of experience with Oracle Data Integration implementations. Not only that, but we here at Rittman Mead truly enjoy sharing our knowledge! Whether posting to this blog, speaking at Oracle conferences, or on the OTN forums, Rittman Mead experts are always looking to teach others in order to better the Oracle Data Integration community.

If you or your company are in need of Oracle Data Integration training, please drop us a line at training@rittmanmead.com. As always, feel free to reach out to me directly on Twitter (@mRainey), LinkedIn, or via email (michael.rainey@rittmanmead.com) if you have any direct questions. See you all next year!

The post Kickstart Your 2016 with Rittman Mead’s Data Integration Training appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Unizin RFP For LMS: An offering to appease the procurement gods?

Michael Feldstein - Wed, 2015-12-30 18:06

By Phil HillMore Posts (381)

Well this was interesting:

Unizin issues an RFP for "Enterprise and Multitentant LMS" https://t.co/kRVSyzQgYI & I owe my wife an engagement ring soon

— Phil Hill (@PhilOnEdTech) December 30, 2015

In a blog post from Monday, Unizin announced a public Request For Proposals (RFP) to solicit bids for an enterprise and multitenant LMS. The RFP states its purpose:

We seek to identify a qualified vendor (“vendor”) to make available an enterprise and multitenant learning management system and related services (collectively, the “LMS”) to Member Institutions.

Say what?

From the beginning, Unizin has chosen and contracted with Instructure to have Canvas as the central LMS platform for its member institutions. In our e-Literate post breaking the news of Unizin (from May 2014), we shared this slide that was used to help get the founding members to formally sign up to create Unizin:

Unizin Status

Every member institution of Unizin has already chosen to adopt Canvas or is actively piloting Canvas with the goal of making it the campus LMS. Carl Straumsheim at Inside Higher Ed wrote about the Unizin / Canvas decision right after the public announcement in mid 2014.

Unizin is “the Internet2 for digital education,” its founders say. It’s about “creating common gauge rails.” It will be a “goldmine for researchers.” And it begins with Canvas, Instructure’s learning management system.

Procurement Rules

So why would Unizin issue a public RFP for an LMS? Neither Unizin nor Instructure would agree to provide commentary on the subject due to the RFP rules, but I am quite certain that this move is all about the procurement processes at public universities and not about Unizin looking to change their common usage of Canvas as the LMS.

The first reason is that if Unizin changed away from Canvas at this stage, they might as well admit they are starting over. If a member institution thought that they would have to reverse course and not go to their active Canvas system (e.g. at Indiana University) or their planned system pending pilot results (e.g. Colorado State University), this would create chaos and could jeopardize the founding members’ funding (each pays $350k annually for the first three years). I cannot see this happening unless Unizin had discovered a show-stopper flaw in Canvas and could not resolve the issue even with custom development. There have been no indications of such problems. The biggest challenge I have heard of is access to data, but even there Unizin has been actively working with Instructure to make improvements.

The second reason is that there have been strong indications that some schools are having trouble justifying their Unizin membership and implied selection of Canvas based on procurement rules. Most states require public universities (and every member of Unizin is a public university) to have a formal procurement process, typically through an RFP, to use a vendor for enterprise services. Unizin is an organization within Internet2, and the latter has a Net+ program for cloud services that member institutions can use without doing their own RFPs. In other words, part of the value of Internet2 and Unizin is for schools to bypass their expensive and cumbersome RFP processes and rely on these non-profit organizations – based on the assumption that Internet2 did the procurement work of evaluating multiple proposals.

The problem is that Unizin was announced having already selected Canvas with no official procurement process of their own. I know of at least two campuses that have delayed their decision to join Unizin as their state or campus procurement rules will not allow them to contract for Canvas without some documentation of a public procurement process.

That is what I believe is happening. Somewhat after the fact (hence the engagement ring reference), Unizin is going through a formal and public RFP process to help current member institutions and to make it easier for future institutions to join.

Holding Instructure’s Feet To The Fire

There is an additional benefit to consider. When Unizin selected Canvas, there were no detailed requirements other than referencing those from Indiana University’s process. Go back to the data access challenges referenced in November, where I quoted Unizin CEO Amin Qazi:

Yes, Unizin had an agreement which allowed access to the daily Canvas Data files without our members paying any additional fees. My understanding of the new pricing model is all Instructure Canvas customers now have a similar arrangement.

Unizin is only beginning to explore the benefits of Live Events from Canvas. We are transporting the data from Instructure to our members via cloud-based infrastructure Unizin is building and maintaining, at no cost to our members. We have started developing some prototypes to take advantage of this data to meet our objective of increasing learner success.

Unizin has had, and plans to have, discussions with Instructure regarding the breadth of the data available (current:https://canvas.beta.instructure.com/doc/api/file.live_events.html), the continued conformity of that data to the IMS Global standards, and certain aspects of privacy and security. Unizin believes these topics are of interest to all Instructure Canvas customers.

We understand this is a beta product from Instructure and we appreciate their willingness to engage in these discussions, and potentially dedicate time and resources. We look forward to working with Instructure to mature Live Events.

Now look at the Data Warehouse section of the Functional requirements in the current RFP:

Unizin LMS RFP - Minimum Functional Requirements (Google Drive spreadsheet)

That reads to me like Unizin is putting the free access to data and daily feeds from Canvas Data and the upcoming API access and live feeds from Live Events into formal requirements. I suspect that Unizin is trying to make vendor management lemonade out of RFP lemons.

The post Unizin RFP For LMS: An offering to appease the procurement gods? appeared first on e-Literate.

Log Buffer #455: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2015-12-30 13:45

What better to do during the holiday season than to read the Log Buffer? This log buffer edition is here to add some sparkle to Oracle, MySQL and SQL Server on your days off.

Oracle:

  • Ops Center version 12.3.1 has just been released. There are a number of enhancements here.
  • Oracle R Enterprise (ORE) 1.5 is now available for download on all supported platforms with Oracle R Distribution 3.2.0 / R-3.2.0. ORE 1.5 introduces parallel distributed implementations of Random Forest, Singular Value Decomposition (SVD), and Principal Component Analysis (PCA) that operate on ore.frame objects.
  • Create a SOA Application in JDeveloper 12c Using Maven SOA Plug-In by Daniel Rodriguez.
  • How reliable are the memory advisors?
  • Oracle Enterprise Manager offers a complete cloud solution including self-service provisioning balanced against centralized, policy-based resource management, integrated chargeback and capacity planning and complete visibility of the physical and virtual environments from applications to disk.

SQL Server:

  • SQL Server Data Tools (SSDT) and Database References.
  • Stairway to SQL Server Extended Events Level 1: From SQL Trace to Extended Events.
  • Advanced Mathematical Formulas using the M Language.
  • Liberating the DBA from SQL Authentication with AD Groups.
  • Enterprise Edition customers enjoy the manageability and performance benefits offered by table partitioning, but this feature is not available in Standard Edition.

MySQL:

  • Is MySQL X faster than MySQL Y? – Ask query profiler.
  • Usually when one says “SSL” or “TLS” it means not a specific protocol but a family of protocols.
  • The MariaDB project is pleased to announce the immediate availability of MariaDB 10.1.10, MariaDB Galera Cluster 5.5.47, and MariaDB Galera Cluster 10.0.23.
  • EXPLAIN FORMAT=JSON: everything about attached_subqueries, optimized_away_subqueries, materialized_from_subquery.
  • Use MySQL to store data from Amazon’s API via Perl scripts.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

12c Parallel Execution New Features: Parallel FILTER Subquery Evaluation - Part 3: The Optimizer And Distribution Methods

Randolf Geist - Wed, 2015-12-30 13:45
As mentioned in the first and second part of this instalment the different available distribution methods of the new parallel FILTER are selected automatically by the optimizer - in this last post of this series I want to focus on that optimizer behaviour.

It looks like there are two new optimizer related parameters that control the behaviour of the new feature: "_px_filter_parallelized" is the overall switch to enable/disable the new parallel filter capability - and defaults to "true" in 12c, and "_px_filter_skew_handling" influences how the optimizer determines the distribution methods - the parameter naming suggests that it somehow has to do with some kind of "skew" - note that the internal parameter that handles the new automatic join skew handling is called "_px_join_skew_handling" - rather similar in name.

But even after playing around with the feature for quite a while I couldn't come up with a good test case where the optimizer chose a different distribution method based on the typical data distribution skew patterns - i.e. where the expression used for the FILTER lookup had some values that were much more popular than others. So I got in touch with Yasin Baskan - product manager for Parallel Execution at Oracle - asking what kind of skew needs to be present to see a difference in behaviour.

As it turns out "skew" means something different in this context here. When the mentioned parameter "_px_filter_skew_handling" is set to "true" (default value in 12c) the optimizer will choose a different distribution method based on the size of object driving the filter. According to my tests this effectively means: If the object is such small that only one granule (usually 13 blocks) per PX slave can be assigned the optimizer will use automatically a HASH distribution, otherwise - if the object is larger than this threshold - no re-distribution will be selected. I wasn't able to come up with an example where the optimizer automatically comes up with the other available distribution method, which is RANDOM / ROUND-ROBIN (see previous post). To demonstrate the point, here is a small example:

create table t2 as select * from dba_objects where rownum <= 90000;

exec dbms_stats.gather_table_stats(null, 't2')

create table t3 as select * from dba_objects where rownum <= 90000;

exec dbms_stats.gather_table_stats(null, 't3')

explain plan for
select /*+ monitor
parallel(4)
--opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
t3 t
--(select /*+ no_merge */ a.* from t3 a) t
--(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- Default plan, no redistribution before parallel FILTER
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 5 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 5 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 5 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 5 | | | Q1,00 | PCWP | |
|* 5 | FILTER | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------

exec dbms_stats.set_table_stats(null, 't3', numblks => 52)

-- Setting stats of T3 to 52 (13 * DOP) blocks or smaller - HASH distribution will be used, 53 blocks or greater => no redistribution
-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 5 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 5 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 5 | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 5 | | | Q1,01 | PCWP | |
|* 5 | FILTER | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 90000 | 439K| 5 (20)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 90000 | 439K| 5 (20)| 00:00:01 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 90000 | 439K| 5 (20)| 00:00:01 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL| T3 | 90000 | 439K| 5 (20)| 00:00:01 | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-------------------------------------------------------------------------------------------------------------------
So this example shows that the HASH distribution will be used by the optimizer if the object T3 driving the FILTER operation is 52 blocks or smaller, which corresponds to 13 blocks per PX slave at a degree of 4.

Now I find this behaviour pretty odd to explain - since usually you wouldn't want to use Parallel Execution on such a small object anyway. But things become even worse: not only is the "skew" handling based on the object size questionable to me, but the behaviour can become a potential threat if the row source driving the FILTER operator is no longer a plain table but the result of a more complex operation, which can be simply a join or a non-mergeable view:

-- Resetting stats to true size of table - this would mean no redistribution at a DOP of 4, see above
exec dbms_stats.gather_table_stats(null, 't3')

explain plan for
select /*+ monitor
parallel(4)
--opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
--t3 t
(select /*+ no_merge */ a.* from t3 a) t
--(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- But simply using a NO_MERGE hint on the select from the simple T3 row source results in an unnecessary HASH re-distribution
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 9755K (1)| 00:06:22 | | | |
| 1 | SORT AGGREGATE | | 1 | 13 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 13 | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 13 | | | Q1,01 | PCWP | |
|* 5 | FILTER | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 90000 | 1142K| 114 (0)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 90000 | 1142K| 114 (0)| 00:00:01 | Q1,00 | P->P | HASH |
| 8 | VIEW | | 90000 | 1142K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
| 9 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 10 | TABLE ACCESS FULL| T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
|* 11 | TABLE ACCESS FULL | T2 | 1 | 5 | 114 (0)| 00:00:01 | | | |
--------------------------------------------------------------------------------------------------------------------

explain plan for
select /*+ monitor
parallel(4)
--opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
--t3 t
--(select /*+ no_merge */ a.* from t3 a) t
(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- If we use a simple join as driving row source again a HASH re-distribution before the FILTER gets added
-- As a result the dreaded HASH JOIN BUFFERED will be used instead of the plain HASH JOIN
-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 10 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 10 | | | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 10 | | | Q1,03 | PCWP | |
|* 5 | FILTER | | | | | | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | 90000 | 878K| 229 (1)| 00:00:01 | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10002 | 90000 | 878K| 229 (1)| 00:00:01 | Q1,02 | P->P | HASH |
|* 8 | HASH JOIN BUFFERED | | 90000 | 878K| 229 (1)| 00:00:01 | Q1,02 | PCWP | |
| 9 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 10 | PX SEND HYBRID HASH | :TQ10000 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 11 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 12 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 15 | PX SEND HYBRID HASH | :TQ10001 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 16 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWC | |
| 17 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 18 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-------------------------------------------------------------------------------------------------------------------------

explain plan for
select /*+ monitor
parallel(4)
opt_param('_px_filter_skew_handling' 'false')
*/ count(*) from
--t3 t
--(select /*+ no_merge */ a.* from t3 a) t
(select a.* from t3 a, t3 b where a.object_id = b.object_id) t
where exists (select /*+ no_unnest */ 1 from t2 where t.object_id=t2.object_id);

-- Disabling the FILTER skew handling behaviour means no re-distribution before the FILTER, and hence no HASH JOIN BUFFERED
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 32M (1)| 00:21:13 | | | |
| 1 | SORT AGGREGATE | | 1 | 10 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | 1 | 10 | | | Q1,02 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 10 | | | Q1,02 | PCWP | |
|* 5 | FILTER | | | | | | Q1,02 | PCWC | |
|* 6 | HASH JOIN | | 90000 | 878K| 229 (1)| 00:00:01 | Q1,02 | PCWP | |
| 7 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 8 | PX SEND HYBRID HASH | :TQ10000 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 9 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 10 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWC | |
| 11 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,00 | PCWP | |
| 12 | PX RECEIVE | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,02 | PCWP | |
| 13 | PX SEND HYBRID HASH | :TQ10001 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 14 | PX BLOCK ITERATOR | | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWC | |
| 15 | TABLE ACCESS FULL | T3 | 90000 | 439K| 114 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 16 | TABLE ACCESS FULL | T2 | 1 | 5 | 412 (1)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------------
So it looks like if the row source driving the parallel FILTER operator is complex (by complex I mean anything that is not a simple table) the optimizer will always add a HASH distribution unconditionally before the FILTER. It is obvious that such a re-distribution adds overhead - it requires resources to perform. What is even worse is that in general the rule is: the more redistributions, the more likely the dreaded buffering will be added to the execution plans, as can be seen from the example above, where the HASH JOIN turns into a HASH JOIN BUFFERED due to the HASH distribution added by default by the optimizer after the join and before the FILTER. By disabling the filter "skew" handling this - in my opinion unnecessary - redistribution doesn't show up, and hence the HASH JOIN without buffering can be used in this example.

Summary
The new parallel FILTER operator comes with different distribution methods available to the optimizer. However, at present the way the optimizer determines automatically if and how to re-distribute the data seems to be questionable to me.

The skew handling is based on the size of the driving object - for very small objects a re-distribution gets added before the FILTER. For row sources driving the filter that are not simple tables the skew handling seems to add a re-distribution unconditionally.

For the reasons outlined, at present I would recommend considering disabling the filter skew handling by setting the parameter "_px_filter_skew_handling" to "false" - of course not without getting the blessing from Oracle Support before doing so. This should allow minimising the number of re-distributions added to an execution plan. Losing the capability of handling the "skew" caused by very small objects is in my opinion negligible in most cases.
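For completeness, here is what that could look like at the session level while testing - a minimal sketch only, and as said above any change to an underscore parameter should first get the blessing from Oracle Support:

-- disable the parallel FILTER "skew" handling for the current session (test systems only)
alter session set "_px_filter_skew_handling" = false;

-- revert to the default behaviour
alter session set "_px_filter_skew_handling" = true;

The same effect can be achieved per statement via the OPT_PARAM hint used in the examples above.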

Three Views of Top 10 e-Literate Posts in 2015

Michael Feldstein - Wed, 2015-12-30 00:04

By Phil HillMore Posts (380)

It’s the year end, and I have writer’s block. Like many people, I would much prefer to play with numbers than get work done. Instead of just sharing the Top 10 or Top 20 blog posts in terms of 2015 page views, however, I thought it would be interesting to take three different views this time.

Top 10 most-viewed blog posts on e-Literate for 2015

This one is straightforward and ranked by page views, but note that several posts from prior years continue to get a lot of views.

  1. How Much Do College Students Actually Pay For Textbooks? (2015)
  2. What is a Learning Platform? (2012)
  3. A response to USA Today article on Flipped Classroom research (2013)
  4. Reuters: Blackboard up for sale, seeking up to $3 billion in auction (2015)
  5. First View of Bridge: The new corporate LMS from Instructure (2015)
  6. State of the US Higher Education LMS Market: 2014 Edition (2014)
  7. Why Google Classroom won’t affect institutional LMS market … yet (2014)
  8. Blackboard Ultra and Other Product and Company Updates (2015)
  9. No Discernible Growth in US Higher Ed Online Learning (2015)
  10. State of the US Higher Education LMS Market: 2015 Edition (2015)

Top 10 most-mentioned blog posts in social media


This one took the most effort. It turns out that Twitter changed how they provide sharing data in May 2015 – essentially they no longer provide it unless you go through their channels and pay them. Our social media sharing plugin for WordPress, wpSocialStats, has not been updated in two years, and it is hard to find a good substitute. What it provides is Facebook, LinkedIn, Pinterest, and StumbleUpon data. Google Analytics has new capabilities to measure mentions for their “Data Hub” – primarily Diigo, Google+, and Google Groups. What I did was to combine wpSocialStats with Google’s Data Hub minus Twitter results. For what it’s worth, LinkedIn sharing was on the same order as Twitter, but it now dominates this group for e-Literate blog posts.

These are the Top 10 posts in terms of social media mentions created in 2015.

  1. McGraw Hill’s New Personalized Learning Authoring Product (2015)
  2. Instructure Is Truly Anomalous (2015)
  3. How Much Do College Students Actually Pay For Textbooks? (2015)
  4. The Starling: Pre-K Ed Tech (2015)
  5. Exclusive: University of Phoenix moving from homegrown platform to Blackboard Learn Ultra (2015)
  6. No Discernible Growth in US Higher Ed Online Learning (2015)
  7. Instructure: Accelerating growth in 3 parallel markets (2015)
  8. Reuters: Blackboard up for sale, seeking up to $3 billion in auction (2015)
  9. Why LinkedIn Matters (2015)
  10. Bad Data Can Lead To Bad Policy: College students don’t spend $1,200+ on textbooks (2015)
Top 10 page views originating from social media mentions

If we combine these two concepts, the next view finds the posts with the greatest number of page views in 2015 originating from social media mentions. We only get ~13% of our traffic from social media sources, but this is the first time we’ve analyzed the data from this source.

These are the top 10 most-viewed blog posts in 2015 that originated from social media mentions.

  1. State of the US Higher Education LMS Market: 2015 Edition (2015)
  2. Reuters: Blackboard up for sale, seeking up to $3 billion in auction (2015)
  3. Harmonizing Learning and Education (2015)
  4. Cracks In The Foundation Of Disruptive Innovation (2015)
  5. Back To The Future: Looking at LMS forecasts from 2011 – 2014 (2015)
  6. Why LinkedIn Matters (2015)
  7. How Much Do College Students Actually Pay For Textbooks? (2015)
  8. Blueprint for a Post-LMS, Part 1 (2015)
  9. U of Phoenix: Losing hundreds of millions of dollars on adaptive-learning LMS bet (2015)
  10. Bad Data Can Lead To Bad Policy: College students don’t spend $1,200+ on textbooks (2015)
Notes

I’m not sure why these views are so different. I would note, however, that all of the posts in the second and third lists are from 2015, indicating that social media mentions are shorter-lasting than search rankings and direct views.

Another issue to consider is time lag. The post on McGraw Hill’s new product has more than double the social media mentions of any other post, yet these mentions have not (yet) driven high page views (at least not to the level of reaching the top 10).

What do you notice?

Whatever the reasons, there they are – three quite different top 10 lists. Here’s to the new year and a lot more blogging to come.

The post Three Views of Top 10 e-Literate Posts in 2015 appeared first on e-Literate.

Mystats utility

Adrian Billington - Tue, 2015-12-29 14:45
A variation on Jonathan Lewis's SNAP_MY_STATS package to report the resource consumption of a unit of work between two snapshots. Designed to work under constrained developer environments, this version has enhancements such as time model statistics and the option to report on specific statistics. ***Update*** Now available in two formats: 1) as a PL/SQL package and 2) as a free-standing SQL*Plus script (i.e. no installation/database objects needed). June 2007 (updated November 2015)
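
As a rough idea of how the free-standing SQL*Plus variant is typically wrapped around a unit of work - a sketch only: the script name and the option syntax shown here are assumptions, so check the usage notes in the header of the downloaded script:

-- Sketch, assuming the free-standing SQL*Plus script variant (mystats.sql)
@mystats start                 -- take the "before" snapshot for the current session

-- ... run the unit of work to be measured, for example:
insert into my_table select * from all_objects;   -- my_table is a hypothetical target
commit;

@mystats stop t=1              -- take the "after" snapshot and report the deltas
                               -- (t=1 is assumed to be a reporting threshold option)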

Very Practical CRUD with JET and ADF BC - POST and DELETE Methods

Andrejus Baranovski - Tue, 2015-12-29 12:13
In my previous post I described how to implement the PATCH method with Oracle JET and call the ADF BC REST service to update row attributes - Very Practical CRUD with JET and ADF BC - PATCH Method. Today I'm going to complete the CRUD implementation with the POST and DELETE methods, to create and delete rows through the ADF BC REST service and the JET UI.

Here you can watch a video where I demonstrate the PATCH, POST and DELETE methods invoked from the JET UI. You will also see how pagination works with the JET paging control and ADF BC range paging mode (Oracle JET Collection Paging Control and ADF BC REST Pagination Perfect Combination):


Download the sample application - JETCRUDApp_v3.zip. This sample app contains the JET UI and the ADF BC REST implementation. I'm going to explain in more detail the steps recorded in the video above.

The Salary column is now right-aligned in the new sample; in JET we can assign a class name for a table cell:


JET supports date formatters and converters. I used the yyyy-MM-dd pattern to format the Hire Date attribute value:


POST method - row creation

A new row is created with the JET collection API method create. I have implemented a success callback: if the POST method of the REST service was successful and the new row was inserted, the collection is refreshed to display the new row. There is no need to refresh if the collection is not virtualized (fetchSize and pagination are not used):


DELETE method - row removal

A row is removed from the JET model with the API method destroy. This invokes the DELETE method of the REST service and removes the row from the JET collection by refreshing the current page (invoking the GET method for the collection page):


POST method test. Example of new row creation:


The POST method is invoked by JET, which in turn calls the ADF BC REST service and inserts the new row through ADF BC into the DB:


PATCH method test. Example of attribute value update. I changed the Salary attribute value:


Row with ID is updated by PATCH method through REST service call:


DELETE method test. Example of row removal:


Row with ID is removed by DELETE method through REST service call:


In my next posts I'm going to describe how to implement validation and handle REST call errors.

Star Wars: The Force Awakens

Tim Hall - Tue, 2015-12-29 12:02

I just got back from watching Star Wars: The Force Awakens.

I won’t give any spoilers, so don’t worry if you’ve not seen it yet!

Overall I thought it was a really good film. I went to see it with some friends and their kids, so ages in our group ranged from 6 to 60+. Everyone came out saying it was good, and the kids wanted all the toys and were arguing over which one of the characters they would be… So they pretty much nailed it as far as setting up this trilogy! :)

A move back to physical sets was really welcome. Everything felt so much more real in this film compared to Episodes 1-3, which felt like 100% green screen.

I watched the film in 3D IMAX. There was one scene where a spaceship was totally sticking out at me and it took all my willpower not to reach out and try to touch it! It was pretty amazing. I still don’t like 3D, but this was a pretty good experience.

Having said all that, this was basically a remake of Episode 4. There is pretty much a 1:1 mapping between most of the characters in this film and those of Episode 4. I don’t think that is a bad thing and it probably needed to happen so that all generations could come away happy. I just hope the next films take a different route. It would be very easy to recycle the past again and they would be enjoyable, but I think it’s important the next two films have their own identity and secure the legacy.

Cheers

Tim…

Star Wars: The Force Awakens was first posted on December 29, 2015 at 7:02 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Upgrade a Pluggable Database in #Oracle 12c

The Oracle Instructor - Tue, 2015-12-29 09:27

This is how an upgrade with pluggable databases looks conceptually:
You have two multitenant databases from different versions in place. Preferably they share the same storage, which allows you to do the upgrade without having to move any datafiles
Initial state

You unplug the pluggable database from the first multitenant database, then you drop it. That is a fast logical operation that does not delete any files

unplug drop

Next step is to plug in the pluggable database into the multitenant database from the higher version

plug in

So far the operations were very fast (seconds). Next step takes longer, when you upgrade the pluggable database in its new destination

catupgrade.sql

Now let’s see that with details:

 

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
PL/SQL Release 12.1.0.1.0 - Production
CORE    12.1.0.1.0      Production
TNS for Linux: Version 12.1.0.1.0 - Production
NLSRTL Version 12.1.0.1.0 - Production

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/CDB1/system01.dbf
/oradata/CDB1/pdbseed/system01.dbf
/oradata/CDB1/sysaux01.dbf
/oradata/CDB1/pdbseed/sysaux01.dbf
/oradata/CDB1/undotbs01.dbf
/oradata/CDB1/users01.dbf

6 rows selected.

SQL> host mkdir /oradata/PDB1

SQL> create pluggable database PDB1 admin user adm identified by oracle
  2  file_name_convert=('/oradata/CDB1/pdbseed/','/oradata/PDB1/');

Pluggable database created.

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> alter session set container=PDB1;

Session altered.

SQL> create tablespace users datafile '/oradata/PDB1/users01.dbf' size 100m;

Tablespace created.

SQL> alter pluggable database default tablespace users;

Pluggable database altered.

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> create table adam.t as select * from dual;

Table created.

The PDB should have its own subfolder underneath /oradata (or in the DATA diskgroup, respectively) IMHO. It makes little sense to have the PDB subfolder underneath the CDB's subfolder, because the PDB may get plugged into other CDBs. Your PDB names should be unique across the enterprise anyway, not least because of the PDB service that is named after the PDB.

I’m about to upgrade PDB1, so I run the pre-upgrade script that comes with the new version

SQL> connect / as sysdba
Connected.

SQL> @/u01/app/oracle/product/12.1.0.2/rdbms/admin/preupgrd.sql

Loading Pre-Upgrade Package...


***************************************************************************
Executing Pre-Upgrade Checks in CDB$ROOT...
***************************************************************************


      ************************************************************

                 ====>> ERRORS FOUND for CDB$ROOT <<====

     The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.

           You MUST resolve the above errors prior to upgrade

      ************************************************************

      ************************************************************
               ====>> PRE-UPGRADE RESULTS for CDB$ROOT <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
 /u01/app/oracle/cfgtoollogs/CDB1/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
 /u01/app/oracle/cfgtoollogs/CDB1/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
 /u01/app/oracle/cfgtoollogs/CDB1/preupgrade/postupgrade_fixups.sql

      ************************************************************

***************************************************************************
Pre-Upgrade Checks in CDB$ROOT Completed.
***************************************************************************

***************************************************************************
***************************************************************************
SQL> @/u01/app/oracle/cfgtoollogs/CDB1/preupgrade/preupgrade_fixups
Pre-Upgrade Fixup Script Generated on 2015-12-29 07:02:21  Version: 12.1.0.2 Build: 010
Beginning Pre-Upgrade Fixups...
Executing in container CDB$ROOT

**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^


           **************************************************
                ************* Fixup Summary ************

No fixup routines were executed.

           **************************************************
**************** Pre-Upgrade Fixup Script Complete *********************
SQL> EXECUTE dbms_stats.gather_dictionary_stats

Not much to fix in this case. I’m now ready to unplug and drop the PDB

SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 unplug into '/home/oracle/PDB1.xml';
SQL> drop pluggable database PDB1;

PDB1.xml contains a brief description of the PDB and needs to be available for the destination CDB. Keep in mind that no files have been deleted

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@localhost:~$ . oraenv
ORACLE_SID = [CDB1] ? CDB2
The Oracle base remains unchanged with value /u01/app/oracle
oracle@localhost:~$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 29 07:11:16 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/CDB2/system01.dbf
/oradata/CDB2/pdbseed/system01.dbf
/oradata/CDB2/sysaux01.dbf
/oradata/CDB2/pdbseed/sysaux01.dbf
/oradata/CDB2/undotbs01.dbf
/oradata/CDB2/users01.dbf

6 rows selected.

The destination CDB is on 12.1.0.2 and shares the storage with the source CDB running on 12.1.0.1. Actually, they are both running on the same server. Now I will check if there are any potential problems with the plug in

SQL> SET SERVEROUTPUT ON
DECLARE
compatible CONSTANT VARCHAR2(3) := CASE
DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
pdb_descr_file => '/home/oracle/PDB1.xml',
pdb_name => 'PDB1')
WHEN TRUE THEN 'YES' ELSE 'NO'
END;
BEGIN
DBMS_OUTPUT.PUT_LINE(compatible);
END;
/SQL>   2    3    4    5    6    7    8    9   10   11
NO

PL/SQL procedure successfully completed.

SQL> select message, status from pdb_plug_in_violations where type like '%ERR%';

MESSAGE
--------------------------------------------------------------------------------
STATUS
---------
PDB's version does not match CDB's version: PDB's version 12.1.0.0.0. CDB's vers
ion 12.1.0.2.0.
PENDING

Now that was to be expected: The PDB is coming from a lower version. Will fix that after the plug in

SQL> create pluggable database PDB1 using '/home/oracle/PDB1.xml' nocopy;

Pluggable database created.

SQL> alter pluggable database PDB1 open upgrade;

Warning: PDB altered with errors.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

We saw the first three phases so far and everything was quite fast. Not so with the next step

oracle@localhost:~$ cd $ORACLE_HOME/rdbms/admin
oracle@localhost:/u01/app/oracle/product/12.1.0.2/rdbms/admin$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'PDB1' catupgrd.sql

Argument list for [catctl.pl]
SQL Process Count     n = 0
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = 0
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = 0
Run in                c = PDB1
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 0

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /u01/app/oracle/product/12.1.0.2/rdbms/admin
catcon: ALL catcon-related output will be written to catupgrd_catcon_17942.lst
catcon: See catupgrd*.log files for output generated by scripts
catcon: See catupgrd_*.lst files for spool files, if any
Number of Cpus        = 2
Parallel PDB Upgrades = 2
SQL PDB Process Count = 2
SQL Process Count     = 0
New SQL Process Count = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]

Start processing of PDB1
[/u01/app/oracle/product/12.1.0.2/perl/bin/perl catctl.pl -c 'PDB1' -I -i pdb1 -n 2 catupgrd.sql]

Argument list for [catctl.pl]
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = 0
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = pdb1
Run in                c = PDB1
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 1

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /u01/app/oracle/product/12.1.0.2/rdbms/admin
catcon: ALL catcon-related output will be written to catupgrdpdb1_catcon_18184.lst
catcon: See catupgrdpdb1*.log files for output generated by scripts
catcon: See catupgrdpdb1_*.lst files for spool files, if any
Number of Cpus        = 2
SQL PDB Process Count = 2
SQL Process Count     = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1
PDB Inclusion:[PDB1] Exclusion:[]

------------------------------------------------------
Phases [0-73]         Start Time:[2015_12_29 07:19:01]
Container Lists Inclusion:[PDB1] Exclusion:[NONE]
------------------------------------------------------
Serial   Phase #: 0    PDB1 Files: 1     Time: 14s
Serial   Phase #: 1    PDB1 Files: 5     Time: 46s
Restart  Phase #: 2    PDB1 Files: 1     Time: 0s
Parallel Phase #: 3    PDB1 Files: 18    Time: 17s
Restart  Phase #: 4    PDB1 Files: 1     Time: 0s
Serial   Phase #: 5    PDB1 Files: 5     Time: 17s
Serial   Phase #: 6    PDB1 Files: 1     Time: 10s
Serial   Phase #: 7    PDB1 Files: 4     Time: 6s
Restart  Phase #: 8    PDB1 Files: 1     Time: 0s
Parallel Phase #: 9    PDB1 Files: 62    Time: 68s
Restart  Phase #:10    PDB1 Files: 1     Time: 0s
Serial   Phase #:11    PDB1 Files: 1     Time: 13s
Restart  Phase #:12    PDB1 Files: 1     Time: 0s
Parallel Phase #:13    PDB1 Files: 91    Time: 6s
Restart  Phase #:14    PDB1 Files: 1     Time: 0s
Parallel Phase #:15    PDB1 Files: 111   Time: 13s
Restart  Phase #:16    PDB1 Files: 1     Time: 0s
Serial   Phase #:17    PDB1 Files: 3     Time: 1s
Restart  Phase #:18    PDB1 Files: 1     Time: 0s
Parallel Phase #:19    PDB1 Files: 32    Time: 26s
Restart  Phase #:20    PDB1 Files: 1     Time: 0s
Serial   Phase #:21    PDB1 Files: 3     Time: 7s
Restart  Phase #:22    PDB1 Files: 1     Time: 0s
Parallel Phase #:23    PDB1 Files: 23    Time: 104s
Restart  Phase #:24    PDB1 Files: 1     Time: 0s
Parallel Phase #:25    PDB1 Files: 11    Time: 40s
Restart  Phase #:26    PDB1 Files: 1     Time: 0s
Serial   Phase #:27    PDB1 Files: 1     Time: 1s
Restart  Phase #:28    PDB1 Files: 1     Time: 0s
Serial   Phase #:30    PDB1 Files: 1     Time: 0s
Serial   Phase #:31    PDB1 Files: 257   Time: 23s
Serial   Phase #:32    PDB1 Files: 1     Time: 0s
Restart  Phase #:33    PDB1 Files: 1     Time: 1s
Serial   Phase #:34    PDB1 Files: 1     Time: 2s
Restart  Phase #:35    PDB1 Files: 1     Time: 0s
Restart  Phase #:36    PDB1 Files: 1     Time: 1s
Serial   Phase #:37    PDB1 Files: 4     Time: 44s
Restart  Phase #:38    PDB1 Files: 1     Time: 0s
Parallel Phase #:39    PDB1 Files: 13    Time: 67s
Restart  Phase #:40    PDB1 Files: 1     Time: 0s
Parallel Phase #:41    PDB1 Files: 10    Time: 6s
Restart  Phase #:42    PDB1 Files: 1     Time: 0s
Serial   Phase #:43    PDB1 Files: 1     Time: 6s
Restart  Phase #:44    PDB1 Files: 1     Time: 0s
Serial   Phase #:45    PDB1 Files: 1     Time: 1s
Serial   Phase #:46    PDB1 Files: 1     Time: 0s
Restart  Phase #:47    PDB1 Files: 1     Time: 0s
Serial   Phase #:48    PDB1 Files: 1     Time: 140s
Restart  Phase #:49    PDB1 Files: 1     Time: 0s
Serial   Phase #:50    PDB1 Files: 1     Time: 33s
Restart  Phase #:51    PDB1 Files: 1     Time: 0s
Serial   Phase #:52    PDB1 Files: 1     Time: 0s
Restart  Phase #:53    PDB1 Files: 1     Time: 0s
Serial   Phase #:54    PDB1 Files: 1     Time: 38s
Restart  Phase #:55    PDB1 Files: 1     Time: 0s
Serial   Phase #:56    PDB1 Files: 1     Time: 12s
Restart  Phase #:57    PDB1 Files: 1     Time: 0s
Serial   Phase #:58    PDB1 Files: 1     Time: 0s
Restart  Phase #:59    PDB1 Files: 1     Time: 0s
Serial   Phase #:60    PDB1 Files: 1     Time: 0s
Restart  Phase #:61    PDB1 Files: 1     Time: 0s
Serial   Phase #:62    PDB1 Files: 1     Time: 1s
Restart  Phase #:63    PDB1 Files: 1     Time: 0s
Serial   Phase #:64    PDB1 Files: 1     Time: 1s
Serial   Phase #:65    PDB1 Files: 1 Calling sqlpatch [...] Time: 42s
Serial   Phase #:66    PDB1 Files: 1     Time: 1s
Serial   Phase #:68    PDB1 Files: 1     Time: 8s
Serial   Phase #:69    PDB1 Files: 1 Calling sqlpatch [...] Time: 53s
Serial   Phase #:70    PDB1 Files: 1     Time: 91s
Serial   Phase #:71    PDB1 Files: 1     Time: 0s
Serial   Phase #:72    PDB1 Files: 1     Time: 5s
Serial   Phase #:73    PDB1 Files: 1     Time: 0s

------------------------------------------------------
Phases [0-73]         End Time:[2015_12_29 07:35:06]
Container Lists Inclusion:[PDB1] Exclusion:[NONE]
------------------------------------------------------

Grand Total Time: 966s PDB1

LOG FILES: (catupgrdpdb1*.log)

Upgrade Summary Report Located in:
/u01/app/oracle/product/12.1.0.2/cfgtoollogs/CDB2/upgrade/upg_summary.log

Total Upgrade Time:          [0d:0h:16m:6s]

     Time: 969s For PDB(s)

Grand Total Time: 969s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:16m:9s]

Even this tiny PDB with very few objects in it took 16 minutes. I have seen this step taking more than 45 minutes on other occasions.

oracle@localhost:/u01/app/oracle/product/12.1.0.2/rdbms/admin$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 29 12:45:36 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED

SQL> alter pluggable database PDB1 open;

Pluggable database altered.

SQL> @/u01/app/oracle/cfgtoollogs/CDB1/preupgrade/postupgrade_fixups
Post Upgrade Fixup Script Generated on 2015-12-29 07:02:21  Version: 12.1.0.2 Build: 010
Beginning Post-Upgrade Fixups...

**********************************************************************
                     [Post-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ******** Fixed Object Statistics ********
                        *****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
   EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^


           **************************************************
                ************* Fixup Summary ************

No fixup routines were executed.

           **************************************************
*************** Post Upgrade Fixup Script Complete ********************

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS

PL/SQL procedure successfully completed.

Done! I was using the excellent Pre-Built Virtualbox VM prepared by Roy Swonger, Mike Dietrich and The Database Upgrade Team for this demonstration. Great job guys, thank you for that!
In other words: You can easily test it yourself without having to believe it :-)


Tagged: 12c New Features, Multitenant, upgrade
Categories: DBA Blogs

Column Groups

Jonathan Lewis - Tue, 2015-12-29 07:13

I think the “column group” variant of extended stats is a wonderful addition to the Oracle code base, but there’s a very important detail about using the feature that I hadn’t really noticed until a question came up on the OTN database forum recently about a very bad join cardinality estimate.

The point is this: if you have a multi-column equality join and the optimizer needs some help to get a better estimate of join cardinality then column group statistics may help if you create matching stats at both ends of the join. There is a variation on this directive that helps to explain why I hadn’t noticed it before – multi-column indexes (with exactly the correct columns) have the same effect and, most significantly, the combination of  one column group and a matching multi-column index will do the trick.

Here’s some code to demonstrate the effect:

create table t8
as
select
        trunc((rownum-1)/125)   n1,
        trunc((rownum-1)/125)   n2,
        rpad(rownum,180)        v1
from
        all_objects
where
        rownum <= 1000
;

create table t10
as
select
        trunc((rownum-1)/100)   n1,
        trunc((rownum-1)/100)   n2,
        rpad(rownum,180)        v1
from
        all_objects
where
        rownum <= 1000
;
begin
        dbms_stats.gather_table_stats(
                user,
                't8',
                method_opt => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                user,
                't10',
                method_opt => 'for all columns size 1'
        );
end;
/

set autotrace traceonly

select
        t8.v1, t10.v1
from
        t8,t10
where
        t10.n1 = t8.n1
and     t10.n2 = t8.n2
/

set autotrace off

Table t8 has eight distinct values for n1 and n2, and 8 combinations (though the optimizer will assume there are 64 combinations); table t10 has ten distinct values for n1 and n2, and ten combinations (though the optimizer will assume there are 100 combinations). In the absence of any column group stats (or histograms, or indexes) and with no filter predicates on either table, the join cardinality will be “{Cartesian Join cardinality} * {join selectivity}”, and in the absence of any nulls the join selectivity – thanks to the “multi-column sanity check” – will be 1/(greater number of distinct combinations). So we get 1,000,000 / 100 = 10,000.
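
In formula form, this is simply a restatement of the numbers above, using the optimizer's assumed distinct-combination counts of 64 and 100:

\text{join cardinality} = \underbrace{1000 \times 1000}_{\text{Cartesian join cardinality}} \times \underbrace{\frac{1}{\max(64,\ 100)}}_{\text{join selectivity}} = 10{,}000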

Here’s the output from autotrace in 11.2.0.4 to prove the point:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  3652K|    11  (10)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  3652K|    11  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T8   |  1000 |   182K|     5   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T10  |  1000 |   182K|     5   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T10"."N1"="T8"."N1" AND "T10"."N2"="T8"."N2")


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        835  consistent gets
          0  physical reads
          0  redo size
   19965481  bytes sent via SQL*Net to client
      73849  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

As you can see, the query actually returns 100,000 rows. The estimate of 10,000 is badly wrong thanks to the correlation between the n1 and n2 columns. So let’s check the effect of creating a column group on t10:


begin
        dbms_stats.gather_table_stats(
                user,
                't10',
                method_opt => 'for all columns size 1 for columns (n1,n2) size 1'
        );
end;
/

At this point you might think that the optimizer’s sanity check might say something like: t8 table: 64 combinations, t10 table column group 10 combinations so use the 64 which is now the greater num_distinct. It doesn’t – maybe it will in some future version, but at present the optimizer code doesn’t seem to recognize this as a possibility. (I won’t bother to reprint the unchanged execution plan.)

But, at this point, I could create an index on t8(n1,n2) and run the query again:


create index t8_i1 on t8(n1, n2);

select
        t8.v1, t10.v1
from
        t8,t10
where
        t10.n1 = t8.n1
and     t10.n2 = t8.n2
/

Index created.


100000 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 216880280

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|    35M|    12  (17)| 00:00:01 |
|*  1 |  HASH JOIN         |      |   100K|    35M|    12  (17)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T8   |  1000 |   182K|     5   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T10  |  1000 |   182K|     5   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T10"."N1"="T8"."N1" AND "T10"."N2"="T8"."N2")

Alternatively I could create a column group at the t8 table:



drop index t8_i1;

begin
        dbms_stats.gather_table_stats(
                user,
                't8',
                method_opt => 'for all columns size 1 for columns (n1,n2) size 1'
        );
end;
/

select  
        t8.v1, t10.v1 
from
        t8,t10
where
        t10.n1 = t8.n1
and     t10.n2 = t8.n2
/

Index dropped.


PL/SQL procedure successfully completed.


100000 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 216880280

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|    35M|    12  (17)| 00:00:01 |
|*  1 |  HASH JOIN         |      |   100K|    35M|    12  (17)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T8   |  1000 |   182K|     5   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T10  |  1000 |   182K|     5   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T10"."N1"="T8"."N1" AND "T10"."N2"="T8"."N2")


If you’re wondering why I’ve not picked up this “both ends” detail in the past – it’s because I’ve usually been talking about replacing indexes with column groups and my examples have probably started with indexes at both end of the join before I replaced one index with a column group. (The other examples I’ve given of column groups are typically about single-table access rather than joins.)

 


ADF BC Groovy Improvements in ADF 12.2.1

Andrejus Baranovski - Mon, 2015-12-28 14:28
Groovy scripting is improved in ADF 12.2.1. There are no inline Groovy expressions anymore; expressions are saved in a separate Groovy language file, an external codesource. Each EO/VO is assigned a separate file to keep its Groovy expressions. This improves Groovy script maintenance (it is easier to check all Groovy expressions of an EO/VO located in one file), and it also improves runtime performance (as the JDeveloper ADF code audit rule suggests).

Inline Groovy expressions created with previous JDeveloper versions remain compatible in 12.2.1. All new expressions are created in a separate *.bcs file. The sample application - ADFGroovyApp.zip - comes with a validation rule implemented as a Groovy script expression:


Let's check how the validation rule definition looks in the EO source. There is one inline Groovy expression I created with a previous JDeveloper version. It is marked as deprecated, with the suggestion to move it to the external codesource:


A validation rule with a Groovy expression created in JDeveloper 12.2.1 looks different. There is no inline Groovy code anymore; instead it points to the external codesource name:


The codesource name is registered inside the operations definition:


I have defined two validation rules with Groovy for the Jobs EO attributes MinSalary/MaxSalary. The code for both of them is included in a single Jobs.bcs file:


At runtime it works correctly; the entered values are validated against the logic coded in Groovy:


The separate Groovy file approach can be turned off in the ADF BC project settings, though this is not recommended:

New Version Of XPLAN_ASH Utility

Randolf Geist - Mon, 2015-12-28 13:35
A new version 4.22 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

This version primarily addresses an issue with 12c: if the HIST mode is used to pull ASH information from AWR in 12c, it turns out that Oracle forgot to add the new "DELTA_READ_MEM_BYTES" column to DBA_HIST_ACTIVE_SESS_HISTORY - although it was officially added to V$ACTIVE_SESSION_HISTORY in 12c. So I had to implement several additional if/then/else constructs in the script to handle this inconsistency. It's the first time that the HIST view doesn't seem to reflect all columns of the V$ view - very likely an oversight rather than by design, I assume.
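
A quick way to see this inconsistency on a given 12c system is to compare the column lists of the two views - just an illustrative dictionary query (run as a suitably privileged user), not something XPLAN_ASH itself does:

-- Illustrative only: which of the two views expose DELTA_READ_MEM_BYTES?
-- Per the description above, in 12.1 you would expect only the V$ view
-- (V_$ACTIVE_SESSION_HISTORY) to show up, not the DBA_HIST view.
select table_name, column_name
from   dba_tab_columns
where  column_name = 'DELTA_READ_MEM_BYTES'
and    table_name in ('DBA_HIST_ACTIVE_SESS_HISTORY', 'V_$ACTIVE_SESSION_HISTORY');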

Apart from that the I/O figures (Read Bytes / Write Bytes etc.) in the "Activity Timeline" make more sense for those cases where a process hasn't been sampled for several sample points (see below for more details).

Also, in case an execution plan could not be found, a corresponding message now makes it more obvious that you might be able to pull the execution plan from AWR by using different ASH modes (MIXED / HIST).
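
As a rough illustration of how that is driven - a sketch only: the first parameter is assumed to be the SQL_ID, and the exact position of the ASH source option differs between versions, so check the usage notes in the header of xplan_ash.sql:

-- Sketch only: analyze a statement by SQL_ID (replace the placeholder with a real SQL_ID).
-- The ASH source (CURR / HIST / MIXED) is one of the later positional parameters -
-- see the script header for the exact parameter order and defaults in your version.
@xplan_ash.sql <sql_id>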

Here are the notes from the change log:

- Fixed a funny bug: in 12c they have forgotten to add the DELTA_READ_MEM_BYTES column to DBA_HIST_ACTIVE_SESS_HISTORY, so in HIST mode with 12c prior XPLAN_ASH versions could error out with an invalid column name
- Changed the way the I/O figures are treated in the "Activity Timeline based on ASH". Now the I/O per second is spread over the (previous) samples covered by DELTA_TIME. This should give a smoother representation of the I/O performed, much closer to what you see in Real-Time SQL Monitoring reports. The difference to prior versions is only visible in cases where a session wasn't sampled for quite a while and hence has a DELTA_TIME spanning multiple previous sample points. This also means that the I/O related columns in the "Activity Timeline based on ASH" now show only the PER SECOND values, no longer the totals like prior versions
- Added a SET NULL "" in the configuration and initialization section for SQL*Plus environments that use a non-default SET NULL setting. This screwed up some internal switches so that XPLAN_ASH for example thought it was running in an S-ASH repository
- Added a note to the end of the output if no execution plan could be found and the script falls back to retrieving plan operation details from ASH. Also added the note to use the MIXED or HIST ASH source option if no execution plan could be found in CURR mode, in case the execution plan has been purged from the Shared Pool in the meantime
- Cloned the "cleanup" section from the end to the beginning of the script to ensure no current SQL*Plus environment settings influence the script execution. This is particularly relevant if the script execution gets cancelled before the final cleanup section is reached or some other, previous scripts left a mess behind

Selfies. Social. And Style: Smartwatch UX Trends

Usable Apps - Sun, 2015-12-27 16:58

From Antiques to Apple

“I don’t own a watch myself”, a great parting shot by Kevin of Timepiece Antique Clocks in the Liberties, Dublin.

I had popped in one rainy day in November to discover more about clock making and to get an old school perspective on smartwatches. Kevin’s comment made sense. “Why would he need to own a watch?” I asked myself, surrounded by so many wonderful clocks from across the ages, all keeping perfect time.

This made me consider what might influence people to use smartwatches. Such devices offer more than just telling the time.

Timepiece Antiques in Dublin

From antiques to Apple: UX research in the Liberties, Dublin

2015 was very much the year of the smartwatch. The arrival of the Apple Watch earlier in 2015 sparked much press excitement and Usable Apps covered the enterprise user experience (UX) angle with two much-read blog pieces featuring our Group Vice President, Jeremy Ashley (@jrwashley).

Although the Apple Watch retains that initial consumer excitement (at the last count about 7 million units have shipped), we need to bear in mind that the Oracle Applications User Experience cloud strategy is not about one device. The Glance UX framework runs just as well on Pebble and Android Wear devices, for example.

 Exciting Offerings in 2015

It's not all about the face. Two exciting devices came my way in 2015 for evaluation against the cloud user experience: The Basis (left) and Vector Watch. 

Overall, the interest in wearable tech and what it can do for the enterprise is stronger than ever. Here's my (non-Oracle endorsed) take on what's going to be hot and why in 2016 for smartwatch UX.

Trending Beyond Trendy 

There were two devices that came my way in 2015 for evaluation that for me captured happening trends in smartwatch user experience. 

First there was the Basis Peak (now just Basis). I covered elsewhere my travails in setting up the Basis and how my perseverance eventually paid off.

 The Ultimate Fitness and Sleep Tracker

Basis: The ultimate fitness and sleep tracker. Quantified self heaven for those non-fans of Microsoft Excel and notebooks. Looks great too! 

Not only does the Basis look good, but its fitness functionality, range of activity and sleep monitoring "habits," data gathering, and visualizations matched and thrilled my busy work/life balance. Over the year, the Basis added new features that reflected a more personal quantified self angle (urging users to take a "selfie") and then acknowledged that fitness fans might be social creatures (or at least in need of friends) by prompting them to share their achievements, or "bragging rights," to put it the modern way.

Bragging Rights notification on Basis

Your bragging rights are about to peak: Notifications on Basis (middle) 

Second there was the Vector Watch, which came to me by way of a visit to Oracle EPC in Bucharest. I was given a device to evaluate.

A British design, with development and product operations in Bucharest and Palo Alto too, the Vector looks awesome. The sophisticated, stylish appearance of the watch screams class and quality. It is easily worn by the most fashionable people around and yet packs a mighty user experience.  

 Style and function together

Vector Watch: Fit executive meets fashion 

I simply love the sleek, subtle, How To Spend It positioning, the range of customized watch faces, notifications integration, activity monitoring capability, and the analytics of the mobile app that it connects with via Bluetooth. Having to charge the watch battery only 12 times (or fewer) each year means one less strand to deal with in my traveling Kabelsalat

The Vector Watch affordance for notifications is a little quirky, and sure it’s not the Garmin or Suunto that official race pacers or the hardcore fitness types will rely on, and maybe the watch itself could be a little slimmer. But it’s an emerging story, and overall this is the kind of device for me, attracting positive comments from admirers (of the watch, not me) worldwide, from San Francisco to Florence, mostly on its classy looks alone.

I'm so there with the whole #fitexecutive thing.

Perhaps the Vector Watch exposes that qualitative self to match the quantified self needs of our well-being that the Basis delivers on. Regardless, the Vector Watch tells us that wearable tech is coming of age in the fashion sense. Wearable tech has to. These are deeply personal devices, and as such, continue the evolution of wristwatches looking good and functioning well while matching the user's world and responding to what's hot in fashion.

Heck, we are now even seeing the re-emergence of pocket watches as tailoring adapts and facilitates their use. Tech innovation keeps time and keeps up, too, and so we have Kickstarter wearable tech solutions for pocket watches appearing, designed for the Apple Watch.

The Three "Fs"

Form and function is a mix that doesn't always quite gel. Sometimes compromises must be made trying to make great-looking, yet useful, personal technology. Such decisions can shape product adoption. The history of watch making tells us that.

Whereas the “F” of the smartwatch era of 2014–2015 was “Fitness,” it’s now apparent that the “F” that UX pros need to empathize with in 2016 will be "Fashion." Fashionable technology (#fashtech) in the cloud, the device's overall style and emotional pull, will be as powerful a driver of adoption as the mere outer form and the inner functionality of the watch.

The Beauty of Our UX Strategy 

The Oracle Applications Cloud UX strategy—device neutral that it is—is aware of such trends, ahead of them even.

The design and delivery of beautiful things has always been at the heart of Jeremy Ashley’s group. Watching people use those beautiful things in a satisfied way and hearing them talk passionately about them is a story that every enterprise UX designer and developer wants the bragging rights to.

So, what will we see on the runway from Usable Apps in 2016 in this regard?

Stay tuned, fashtechistas!