Feed aggregator

CASE statement

Tom Kyte - Tue, 2016-07-12 17:26
Hello Gurus, Please correct me if am framing wrong CASE statement using multiple columns (CASE WHEN (ENA_PRE1 IS NULL AND ENA_RT1 IS NULL) THEN RT_UNE IS NULL ELSE RT_UNE END) Thank you in Advance.
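For what it's worth, the THEN branch of a CASE expression must return a value, not a condition such as RT_UNE IS NULL. One plausible reading of the intent (return NULL when both columns are null, otherwise RT_UNE) would be the following sketch (the table name is a placeholder, not from the question):

```sql
-- Hypothetical correction; column names taken from the question.
SELECT CASE
         WHEN ena_pre1 IS NULL AND ena_rt1 IS NULL THEN NULL
         ELSE rt_une
       END AS rt_une_result
FROM your_table;
```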
Categories: DBA Blogs

How to allow 100% CPU?

Tom Kyte - Tue, 2016-07-12 17:26
Hi I am running Oracle Database 11g Release (Standard Edition) - 64bit Production on Windows 2008 R2, on a physical machine with a 4 core CPU. Whatever I do, the maximum CPU utilization of oracle.exe process is 25%. This is annoying b...
Categories: DBA Blogs

Column view definition in data dictionary

Tom Kyte - Tue, 2016-07-12 17:26
Hello there, Is it possible to get view column definition from data dictionary? I mean something similar as for tables - user_tab_columns. Thanks, Dusan
Categories: DBA Blogs

SELECT column from TABLE3, UPDATE column in TABLE2, INSERT all column in TABLE1

Tom Kyte - Tue, 2016-07-12 17:26
Hi I need to solve some difficult logic process. create table aa ( id int, name_child varchar2(25) ); create table bb ( id int, name_master varchar2(25) ); insert into bb values('-1', 'DUMMY'); bb is the master table, aa is the child...
Categories: DBA Blogs

difference b/w row database and column database, how the data will compressed in database

Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I have two questions: 1. What is the difference between a row database and a column database, and how can we create indexes on a column database? 2. How is the data compressed in the database (on what basis is data compressed)?
Categories: DBA Blogs


Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I am running into an issue while using BULK COLLECT INTO in a stored procedure. I use a cursor to get data (42 million rows, 70 columns) from a remote database (a table in Teradata), then insert those records into a table in Oracle. After ...
Categories: DBA Blogs

SQL Server 2016 AlwaysOn: Direct seeding and backup considerations

Yann Neuhaus - Tue, 2016-07-12 15:27

As promised in my prior blog post, we’ll now talk about the direct seeding feature and backups. We found an interesting LIMIT_CONCURRENT_BACKUPS value for the current state of the hadr_automatic_seeding_state_transition event. As noted last time, this value is meaningful by itself.

So let’s try to perform a backup while the database is concerned by an active seeding session. In fact, my backup was blocked (as I supposed), and here is what I found from the sys.dm_tran_locks and sys.dm_exec_requests DMVs.

SELECT
	DB_NAME(tl.resource_database_id) AS [database_name],
	r.session_id AS blocked_session_id,
	r.blocking_session_id,
	r.wait_type
FROM sys.dm_tran_locks AS tl
JOIN sys.dm_exec_requests AS r
	ON tl.request_session_id = r.session_id
WHERE tl.request_session_id = <concerned session>


  • The blocked session (my backup)

[Image: blog 100 - AG direct seeding - 1 - blocked session]

  • Blocking session (my direct seeding session)

[Image: blog 100 - AG direct seeding - 2 - blocking session]

In short, my backup session is waiting on the database object with an LCK_M_U lock type that is already held by my direct seeding session. But the interesting part is that adding a database with direct seeding mode seems to be similar to performing a backup operation, from a locking perspective. We also recognize characteristic wait types that occur during a backup operation (ASYNC_IO_COMPLETION, BACKUPBUFFER). So, direct seeding as a streaming backup? Well, that sounds good!

Well, let’s continue with this other query

SELECT
	r.session_id,
	r.command,
	s.scheduler_id,
	s.status
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_schedulers AS s
	ON s.scheduler_id = r.scheduler_id
WHERE r.session_id = 82
	AND s.status = 'VISIBLE ONLINE'


[Image: blog 100 - AG direct seeding - 3 - direct seeding internal]

Do you notice the command text related to the direct seeding session? VDI_CLIENT_WORKER seems to indicate that SQL Server is using the Virtual Device Interface (VDI) internally during the seeding operation. If SQL Server uses a backup stream and the VDI interface internally during the seeding operation, it may explain why it has to limit concurrent backups in our case.

How can we check whether SQL Server uses VDI during a direct seeding operation? Well, we know that using VDI implies loading sqlvdi.dll. So my first idea was to check whether the corresponding DLL is loaded as a module by the sqlservr.exe process, and the tasklist tool (e.g. `tasklist /M sqlvdi.dll`) is a good candidate for the job.

[Image: blog 100 - AG direct seeding - 4 - direct seeding sqlvdi dll]

The above picture seems to confirm my assumption, but this test is not perfect. Indeed, we may also argue that something else could have triggered the load of sqlvdi.dll. So my second idea was to capture a stack dump with the windbg tool while running the seeding session, and here is what I was able to see:

[Image: blog 100 - AG direct seeding - 5 - direct seeding and windbg]

We may recognize CHadrBuildSourceStateLimitConcurrentBackups (matching the LIMIT_CONCURRENT_BACKUPS value from the hadr_automatic_seeding_state_transition event), which is passed as a parameter to the CallFunctAndResumeBuildTask() function. Let’s also note the use of the sqlmin!CHadrBuildSourceStateSeeding::CreateVdiSession function, which seems to confirm that SQL Server uses a VDI session during the seeding process.

Ok, after this fun test, let’s get back to something more important to keep in mind: we know that direct seeding will limit concurrent backups. This implies that our existing RPO may be impacted, especially if the seeding operation takes a very long time.

Happy database seeding!!




The post SQL Server 2016 AlwaysOn: Direct seeding and backup considerations appeared first on Blog dbi services.

Enrich Your Fluid Applications with Third-Party Integrations

PeopleSoft Technology Blog - Tue, 2016-07-12 14:04

Content provided by Balaji Pattabhiraman

PeopleSoft applications teams are delivering lots of great content in their images for the Fluid user experience. In addition, though, there are many things customers can do to enhance their applications to meet unique requirements. This post is the next in a series covering capabilities that PeopleTools provides.

In this post we'll cover how to invoke JavaScript APIs to integrate third-party elements within PeopleSoft Fluid pages. In this example, we're calling the Google Maps API for resource location. PeopleTools 8.54 (and higher) provides the ability to create responsive Fluid components. In addition to making the layout responsive, PeopleTools provides developers with mechanisms to run any JavaScript API (your own or third party) within your PeopleSoft application.

We'll cover the following in this example:

  • Build a sample fluid application with two panel layout. The left panel loads a set of projects along with details such as the number of resources. The right panel loads the map.
  • Set a default so the user sees the map centered to the user's current location using HTML5 Geolocation. Resources nearby their current location are plotted on the map.
  • Selecting a Project on the left panel updates the map and indicates where resources are located.
  • Searching for a location (using the google places lookup API) centers the map to the selected location and shows the resources near the location in the map.

Note that this blog does not get into the details of the Google Maps or Places JavaScript APIs, nor does Oracle PeopleSoft certify usage of Google Maps. This is a technology demonstration of how to integrate PeopleSoft with third-party JavaScript APIs. Please read the following on Custom JavaScript Guidelines.

First let’s look at the completed application and how it behaves for users, then we’ll show how you build a page to work like this.

Below is a PeopleSoft page.  When the page loads, the HTML5 geo location pops up asking for permission to use the user’s current location.  (If this is not granted, the map will center on the resource of the first project.)

Select Allow so that the map is centered based on the user's current location and the resources nearest that location.

Click on the caret tab to expand the left panel.  This shows us a list of projects.

In this scenario, we’ve selected the Implementation Project.  Notice that this project has 21 resources. The map now shows the resources by locations (using markers).

Clicking a marker (flag) shows the resource details.  In this case, we click on the Flag located on San Jose, CA, which shows us the 13 resources located at that location.


You can also search for resources at a different location using the Location box. This will use the Place Finder to suggest locations as you type. Here we’ve entered Tokyo, Japan. This loads the map centered on that location and the resources available nearest to the location.

In this example, the resource isn’t exactly geographically close!

So how did we achieve this?  The following are the high-level steps used to build this.

In App Designer, create a two panel layout using the delivered layout page PSL_TWOPANEL.  (Note that we provide many Fluid page templates in App Designer.)

This creates a default two panel layout page, where you can place your page fields.  Remember, in Fluid, designing a page no longer uses a WYSIWYG page design.  Instead, you have a series of group boxes into which you place your page elements.  The style sheets will take care of placement on the displayed screen.

Next we place the required fields on the page.  There are three main sections to consider:   1) A group box that will be the map field (named Map Canvas in the screen below)  2) A list grid that will hold the Project list, which is inside the subpage PC_MAP_PRJ_SBF, and 3) The section on top of the map (named PC_MAP_PNLHDR_SBF) that contains fields like the location.  We include them on the page illustrated below.  This example is available in a delivered Financials application (although it uses Oracle maps instead of Google maps).

Next we add the PeopleCode to load the list of projects/resources and update the left panel with those details (list grid rows). The APIs to populate the grid rowset are the same as in classic PeopleSoft, so we will skip those details in this blog. In addition, we can assign style classes to the page fields using the Fluid tab (Default Style Name) on the form fields. These style classes can be delivered out of the box, or you can create your own style sheet and include it for your page using the AddStyleSheet API in component post-build.

Now we get to the interesting part: populating the map. First we define the JavaScript, in an HTML file, that will load the map initially or upon user action. You can write any JavaScript functions you need; see the examples in the Google Maps JavaScript API documentation here. These methods are defined in the HTML object PC_MAP_JS. Here the method pc_loadMap is responsible for geolocation, populating the map, enabling the Places API, and calling the routine that sets the markers.

The following lines enable the page's Location edit box (with page field name PACINPUT) as control for google places API input:

var input = document.getElementById('PACINPUT');
var searchBox = new google.maps.places.SearchBox(input);

The following lines associate the page's map canvas group box with the map. Initially, the map is centered on the user's current location:

navigator.geolocation.getCurrentPosition(function(position) {
    var posGeo = { lat: position.coords.latitude, lng: position.coords.longitude };
    var map = new google.maps.Map(document.getElementById('%formnamedivPC_MAP_NUI_WRK2_MAP_CANVAS'), {
        center: posGeo,
        zoom: 10,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    });
});

The following lines of code add the markers by looping through the resource details for a given project. The details are read from the database and stored in a page field in the component PeopleCode. It is read here in JavaScript to populate the markers:

for (var i = 0; i < gdata_PC_MAP_DATA_DVW$0.length; i++) {
    var data = window.gdata_PC_MAP_DATA_DVW$0[i];
    var marker = new google.maps.Marker({
        position: {lat: parseFloat(data[1]), lng: parseFloat(data[2])},
        map: map,
        icon: image,
        shape: shape,
        title: data[5]
    });

The lines below wire up the click event on a map marker to open a PeopleSoft modal page that displays the data. The openResourceInfoWindow function does a form submit (submitAction_%formname) to invoke PeopleCode back from JavaScript and open the modal:

marker.addListener('click', function() {
     openResourceInfoWindow(marker, data[3], data[4], "M");
});
//Update the map and set the zoom level after adding the markers

Now include the style sheet (if you have custom style classes applied to the page fields) and the JavaScript we just defined, so they are used by the component.

Now call the JavaScript function using AddOnLoadScript when the user selects a project on the left panel, as part of the FieldChange PeopleCode, for example AddOnLoadScript("pc_loadMap();");. (Yes, you can call JavaScript from PeopleCode events in Fluid!) This will reload the map.

You're off and running!

TNS-00583: Valid node checking: unable to parse configuration parameters

Learn DB Concepts with me... - Tue, 2016-07-12 11:19
TNS-12560: TNS:protocol adapter error
TNS-00583: Valid node checking: unable to parse configuration parameters

I received the following errors when trying to start my listener. I verified the listener.ora and sqlnet.ora files and everything seemed to look normal.
[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version - Production on 12-JUL-2016 10:05:32

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/ please wait...

TNSLSNR for Linux: Version - Production
System parameter file is /u01/app/oracle/product/
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-12560: TNS:protocol adapter error
 TNS-00583: Valid node checking: unable to parse configuration parameters

Listener failed to start. See the error message(s) above...


I had the line below in my sqlnet.ora file.

[oracle@Linux03 admin]$ cat sqlnet.ora_bak
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/
# Generated by Oracle configuration tools.


    (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)  

Solution (In my case):

1. Tried removing the ENCRYPTION_WALLET_LOCATION info, and the listener started. I knew there was something wrong with the syntax.
2. Copied the content from ENCRYPTION_WALLET* into Notepad++. It helped me realize that I was missing two closing parentheses )) at the end.
3. Added them at the end, started the listener, and it worked.
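For reference, a complete ENCRYPTION_WALLET_LOCATION entry with balanced parentheses looks something like the following. This is the standard syntax sketch; the directory is the one from the snippet above, so adjust it to your environment:

```
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)
    )
  )
```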

[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version - Production on 12-JUL-2016 10:31:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/ please wait...

TNSLSNR for Linux: Version - Production
System parameter file is /u01/app/oracle/product/
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))

Alias                     LISTENER
Version                   TNSLSNR for Linux: Version - Production
Start Date                12-JUL-2016 10:31:41
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/
Listener Log File         /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening Endpoints Summary...
The listener supports no services
The command completed successfully

Also check the following if the above fix doesn't help:

1.  The listener.ora or sqlnet.ora file contains special characters.
2.  The listener.ora or sqlnet.ora file is in the wrong format or has a syntax error.
3.  The listener.ora or sqlnet.ora file has left-justified parentheses, which are not accepted by the Oracle parser.
Categories: DBA Blogs

Eight Ways To Ensure Your Applications Are Enterprise-Ready

Pythian Group - Tue, 2016-07-12 10:00

When it comes to building database applications and solutions, developers, DBAs, engineers and architects have a lot of new and exciting tools and technologies to play with, especially with the Hadoop and NoSQL environments growing so rapidly.

While it’s easy to geek out about these cool and revolutionary new technologies, at some point in the development cycle you’ll need to stop to consider the real-world business implications of the application you’re proposing. After all, you’re bound to face some tough questions, like:

Why did you choose that particular database for our mission-critical application? Can your team provide 24/7 support for the app? Do you have a plan to train people on this new technology? Do we have the right hardware infrastructure to support the app’s deployment? How are you going to ensure there won’t be any bugs or security vulnerabilities?

If you don’t have a plan for navigating and anticipating these kinds of questions in advance, you’re likely to face difficulty getting approval for and implementing your application.

Any database applications or solutions you build or adopt for your organization must be “enterprise-ready”: secure, stable and scalable with a proven, tested capacity for deployment. They must be easy to administer and easy to support. But how do you make sure that happens?

Here are eight things to consider before declaring your proposed solution enterprise-ready:

  1. Open communications: A close working relationship between the development and operations teams goes a long way toward seamless integration of your database applications. By working together (from start to finish, as early on as possible), you can better anticipate the issues to be solved so your app or solution gets up and running faster.
  2. Platform reliability: Open source databases are great for obvious reasons: they’re free and easily customizable. But if your app is revenue-generating or mission-critical, it’s better to use a tested and proven distribution platform like Datastax Enterprise for Cassandra, Cloudera or HortonWorks for Hadoop, and Oracle or Percona for MySQL.
  3. Continuous quality: No matter which technology you use to build your app, make sure it passes rigorous quality assurance, scale and performance testing — both initially and with every new release of the software. Your vendor also needs to be proactive when it comes to releasing patches and fixing bugs.
  4. Suitable infrastructure: Consider whether the infrastructure you’re running is appropriate for the app you’re developing. If the database is set to run on multiple nodes of commodity hardware — to cover your bases in case one fails — but your operations team likes to store everything on an expensive SAN device, you might want to look into other alternatives.
  5. Experienced hosting: You’ll want to find a hosting company that is reliable, cost-effective and meets your company’s security policies. It should also have experience hosting the database technology you plan on using; that way, it knows how to respond when issues or challenges arise.
  6. Expert talent: Bring in a team of experts that can support your entire environment. While your operations team may want to take care of some elements themselves (everything up to the OS level, for instance), you’ll still want to show them that you have 24/7 support coverage available if needed. This team should be committed to continuous training and have enough people skilled with your technology to provide uninterrupted coverage.
  7. Comprehensive skills: Your team should be able to check your configurations against best practices for security, performance and availability — but don’t forget to ensure that they’re also set up for the more mundane things like systems monitoring, responding to alerts and fault finding.
  8. Ongoing costs: When tallying the running cost of your application, keep in mind that you need to incorporate the cost of the distributed version, its hosting, and 24/7 support and optimization.

With all the elements that go into getting an application enterprise-ready, it might be easier to work with a reputable partner who has the experience and expertise to help you deploy the right solution for your organization and ensure its long-term success.

Find out how Pythian’s solutions can help you succeed.

Categories: DBA Blogs

ADF BC REST Authentication with JSESSIONID Cookie

Andrejus Baranovski - Tue, 2016-07-12 09:46
I have described how to apply ADF Security for ADF BC REST in my previous post - Oracle JET and ADF BC REST Basic Authentication. Here I will show how you can authenticate on the first request and, for subsequent requests, rely on the JSESSIONID cookie from the first request. This is useful for mobile clients and JET: there is no need to keep user credentials between requests (keeping the cookie is enough), since credentials are sensitive data.

Let's see how it works. In the first request, we must authenticate. We are going to use basic authentication with a GET operation and provide user credentials:

Request is authenticated and data is returned:

Along with the data, extra information is returned with the response: a cookie and a header. The JSESSIONID cookie value identifies the authenticated session context. We can use this value for subsequent requests; this way the server treats us as trusted and does not ask us to authenticate again (the same principle as in ADF web). Copy the cookie value:

Now you can close and reopen Postman, to guarantee nothing is shared from the previous request. Remove the authentication header and add a new header called Cookie. The value must be set as JSESSIONID=<cookie value>. Execute the GET operation:

If the session associated with this cookie is still active on the server, you will get the response data as expected (no need to provide user credentials again):
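Conceptually, the two requests look like this (the resource path and values below are illustrative placeholders, not taken from the sample application):

```
# First request: authenticate with basic credentials
GET /rest/1/Employees HTTP/1.1
Authorization: Basic <base64-encoded user:password>

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=<cookie value>; Path=/; HttpOnly

# Subsequent requests: send the cookie instead of credentials
GET /rest/1/Employees HTTP/1.1
Cookie: JSESSIONID=<cookie value>
```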

ADF BC REST sample application - ADFBCRESTApp_v7.zip is protected by ADF Security with authentication and authorization:

REST servlet is mapped with appropriate security constraint in web.xml:
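A mapping of this kind looks something like the following sketch (the resource name, URL pattern, and role name are illustrative; use the ones generated for your application):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>ADF BC REST resources</web-resource-name>
    <url-pattern>/rest/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>valid-users</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
</login-config>
```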

Remote DBA Benefits Comparison Series- Part 4- Can I Still Get Personalized Service Remotely?

Chris Foot - Tue, 2016-07-12 09:25

This is Part 4 of a series that compares in-house personnel, consultants and remote DBA service providers.  In Part 1, we talked about the cost reduction benefits that remote DBA service providers offer customers.  In Part 2, we discussed service continuity and quality as well as the flexibility benefits customers receive when utilizing a remote DBA service provider.  In Part 3, we reviewed the benefits of collective knowledge and deep dive subject matter expertise.

In part 4 of the Remote DBA Benefits Comparison Series, we’ll discuss how remote DBA providers deliver personalized services to customers.  I use RDX’s service delivery approach as an example of how remote DBA providers can exceed the quality of work performed by in-house personnel or consultants.

Oracle Cloud Solves Undiscovered Stadium Revenue Growth as Sports Fans Desire Mobile Food & Beverage Ordering

Oracle Press Releases - Tue, 2016-07-12 09:02
Press Release
Oracle Cloud Solves Undiscovered Stadium Revenue Growth as Sports Fans Desire Mobile Food & Beverage Ordering

Redwood Shores, Calif.—Jul 12, 2016

More than 40% of sports fans worldwide abandoned concession lines in the past 12 months—without making a purchase—because of excessive waiting, according to a just-released Oracle Hospitality Sports and Entertainment research report The Fan Experience: Changing the Game with Food and Beverage, published in partnership with Turnkey Intelligence, an industry-leading market research provider.

The global study surveyed more than 3,500 sports fans in eight countries—Australia, Brazil, China, France, Germany, Japan, United Kingdom and United States—and focused on uncovering their top priorities. Most importantly, the survey aids venue operators by identifying “pain points” and solutions designed to trigger greater attendance and spark sales.

“Aside from safety and overall venue environment, fans across the world place food and beverage above all else when ranking the elements that are most important to their fan experience,” said Jay Upchurch, vice president, Oracle Hospitality. “Our cloud platforms provide operators with the enterprise agility they need to remain innovative and responsive to change, readily incorporating the most advanced technologies that come along to make the fan experience even more memorable.”

Counter to current trends of spending exorbitantly on video boards and pre-game entertainment to fill seats, the research indicates employing technology to enhance food and beverage may yield a greater return on investment. Indeed, survey findings reveal the underutilization of food and beverage technology and confirm significant demand for it. For example, mobile technology to improve ordering, payment and delivering loyalty programs is a promising path to greater sales. That premise is supported by results, including:

  • 68% of U.S. fans and 60% of international fans said they would “probably” or “definitely” use in-seat ordering if available at their stadium.
  • American fans, on average, said they would spend an additional $20 for food & beverage if wait times were cut in half—representing a 43% increase in typical expenditure (per party, per game).
  • 27% of U.S. fans and 14% of international fans stated they are members of a loyalty rewards program offered by a team or stadium.

“The live event experience continues to be the optimal method of consuming entertainment,” said Haynes Hendrickson, Turnkey Intelligence’s President. “We have found that technology can positively impact the event experience and the bottom line of a venue. By embarking on this research study with Turnkey Intelligence, Oracle has demonstrated its commitment to excellence in our industry.”

Oracle Hospitality pursued the research project to advance the company’s commitment of delivering foundational technology that helps solve venue operators’ biggest challenges.

About Oracle Hospitality Sports & Entertainment

Oracle Hospitality Sports & Entertainment brings 35+ years of experience providing industry-leading technology solutions to sports and entertainment venues worldwide. The company’s point-of-sale platform is the premier food and beverage solution, enabling operators to deliver an efficient and innovative fan experience throughout their venue. Key features include: inventory management, loyalty rewards, third-party integrations for in-seat ordering, and more.

About Turnkey Intelligence

Turnkey Intelligence is home to a custom research department that has conducted and analyzed more than 500 studies for sports and entertainment’s top leagues, properties, brands and events. TI is also the proud creator of the Audience Portal, the sports and entertainment industry’s online data and analytics hub. Over 220 clients utilize the Portal to sell and market smarter. For more information about Turnkey Intelligence, visit Intel.TurnkeySE.com.

Contact Info
Katie Barron
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • 202-904-1138

From BI to AI: Whose Intelligence is Better?

Oracle AppsLab - Tue, 2016-07-12 08:54

Numbers are a property of the universe. Once Earthians figured that out, there was no stopping them. They went as far as the Moon.

We use numbers in business and life. We measure, we look for oddities, we plan. We think of ourselves as rational.

I, for one, like to look at the thermometer before deciding if I shall go out in flip-flops or uggs. But I cannot convince my daughter to do the same. She tosses a coin.


More often than we like to think, business decisions are made the way my daughter decides on what to wear. I need an illustration here, so let me pick on workers’ compensation. If you have workers, you want to reward them for good work, and by doing that, encourage the behaviors you want to see more of – you want them to work harder, better, quicker, and be happy. You can measure productivity by amount of, say, shoes sold. You can measure quality by, say, number of customers who came back to buy more shoes. You can measure happiness, by, say . . .  okay, let’s not measure happiness. How do you calculate what the worker compensation shall be based on these two measures?

50/50? 75/25? 25/75? Why? Why not? This is where most businesses toss a coin.

Here is an inventory of types of questions people tend to answer by tossing a coin:

  • Should you monitor the dollar amount of sales, or the percentage of sale increase?
  • Which of the two measures lets you better predict future performance?
  • Why would it?
  • How accurate are the predictions?
  • How big shall the errors be until you feel the measure doesn’t make accurate predictions? Why?
  • Which measures shall be combined and looked at together?
  • In which way?
  • Where would you set up thresholds between good, bad, and ugly?
  • Why? Why not?
  • If some numbers are way off, how do you know it is an exception and not part of some pattern that you don’t see?

If not tossing a coin, it is common practice to answer these kinds of questions based on a gut feeling. To answer them based on evidence instead, there needs to be a way to evaluate the gut feeling, together with a bunch of other hypotheses, in order to choose a hypothesis that is actually true and works. This is hard for humans, and not only because it requires a lot of modeling and computation.

Conceptually, as humans, we tend to look for reasons and explain things. It is hard for us to see a pattern if we don’t see why it works. “I wouldn’t have seen it if I hadn’t believed it,” as one wise person said. Admit it, we are biased. We won’t even consider evaluating a hypothesis that looks like complete nonsense.

Computers, on the other hand, don’t have such a problem. Machine learning can create and test thousands of crazy hypotheses for you and select the best one. That is, the best one in predicting, not explaining. They can also keep updating the hypotheses as conditions change.

That’s why I believe AI is a new BI. It is more thorough and less biased than we humans are, and therefore often more rational.

I am fascinated to learn about ML algorithms, and what they can do for us. Applying the little I learned about decision trees to the worker’s compensation dilemma above, this is what I get. Let’s pretend the workers get a bonus at the end of the year. The maximum amount of the bonus is based on their salary, but the exact amount is a percent of the maximum based on performance – partially on the amount of sales, partially on the number of returning customers. These are your predictors. Your goal for paying off the bonus is that next year your workers show an increased amount of sales AND an increased number of returning customers at the same time. That’s your outcome.


A decision tree algorithm will look at each possible split of your predictors, and will measure which one best divides your outcomes into categories. (They say it is the split that minimizes entropy and maximizes information gain.)
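To make the entropy idea concrete, here is a small self-contained sketch (in Python, with made-up numbers rather than any real compensation data) of how the information gain of one candidate split can be computed:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((n / total) * log2(n / total)
                for n in Counter(labels).values())

def information_gain(labels, left, right):
    """Entropy reduction achieved by splitting `labels` into `left` and `right`."""
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

# Outcome per worker: did sales AND returning customers both increase next year?
outcomes = ['yes', 'yes', 'no', 'no', 'yes', 'no', 'no', 'no']
# One candidate split: bonus above / below some hypothetical threshold
left  = ['yes', 'yes', 'yes']            # e.g. bonus >= 60% of maximum
right = ['no', 'no', 'no', 'no', 'no']   # e.g. bonus < 60% of maximum

# A perfect split recovers all of the original entropy as gain
print(round(information_gain(outcomes, left, right), 3))  # → 0.954
```

The algorithm simply repeats this computation for every candidate split and keeps the best one.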


Had we tried to do that “by hand,” it would have taken an enormous amount of time. But here we have the most effective bonus recipes figured out for us. Some of the recipes may look counter-intuitive; we may find out that the largest bonus is not the best encouragement, or some such. But, again, figuring out the “whys” is a different problem.

And here is my little classification of business intelligence tasks that I believe AI can take over and improve upon.


As a human and a designer who welcomes our machine learning overlords, I see their biggest challenge in overcoming our biggest bias, the one of our superior rationality.

Set the SQL Native Client Default Port with PowerShell

Yann Neuhaus - Tue, 2016-07-12 08:06

I wrote an article about “SQL Server 2012: Configuring your TCP Port via PowerShell” and I received a question from PaulJ:
“How do you set the port for the SQL Native Client 11.0 Configuration (32bit) – as seen in the SQL Configuration Manager?”

This is a very good question, and I decided to write this blog post as an answer.
The first step is always the same, initialization of my object:

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null

$wmi = New-Object ("Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer")

The second step is to find out which client protocol the setting belongs to.
In the class “Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer”, you will find a property “ClientProtocols”, as described on the MSDN page.
I display the name and the protocol properties with this command:

$wmi.ClientProtocols | Select displayname -ExpandProperty ProtocolProperties

As you can see, I have my client protocols listed with their protocol properties (for TCP/IP: Default Port, KEEPALIVE and KEEPALIVEINTERVAL).
The next step is to select the default port:

$tcp_list = $wmi.ClientProtocols  | Where-Object {$_.displayname -eq "TCP/IP"}
$default_tcp = $tcp_list.ProtocolProperties | Where-Object {$_.Name -eq "Default Port"}

As you can see, the default client port is set to 1433. Now I will set another value for this port:


Note: The port has a System.Int32 type
Validate this change with an Alter:
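The screenshots showing these two steps did not survive the aggregation. Assuming the `$tcp_list` and `$default_tcp` objects created above, the missing code would look something like this sketch (1434 is just an illustrative value, not a recommendation):

```powershell
# Assign the new port value (the property expects a System.Int32)
$default_tcp.Value = 1434
# Persist the change through the WMI provider
$tcp_list.Alter()
```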


To finish, do not forget to restart your services to activate the port change:

$sql_service = ($wmi.Services | Where-Object { $_.Type -eq "SqlServer" })

Et voilà! The default port for the client protocol is changed!


Cet article Set the SQL Native Client Default Port with PowerShell est apparu en premier sur Blog dbi services.

How to view database patching information in Oracle 12.1.0 and higher

Syed Jaffar - Tue, 2016-07-12 06:58
We are currently engaged in an assignment to migrate over 50 production databases to Exadata, of different sizes, operating systems and business priority. Each passing day we encounter technical challenges, and each situation educates us and adds more knowledge.

I would like to share a small post on how to list/view the patching information  in Oracle >= 12.1.0 version:

As of 12.1.0, you can extract the database patch information using the following methods:
1. The traditional 'opatch lsinventory'
2. Reviewing the database alert.log
3. Using the dbms_qopatch package

Review database alert.log

We noticed that an Oracle instance (at least on Exadata X5-2) dumps current patch information into its alert.log. This is pretty good: you no longer need to run lsinventory to know which patches are applied; just review your alert.log.

Below is the excerpt of the alert.log:

Wed May 25 17:59:05 2016
Dumping current patch information
Patch Id: 20243804
Patch Description: DATABASE BUNDLE PATCH: (20243804)
Patch Apply Time: 2016-04-19 17:01:42 GMT+03:00
Bugs Fixed: 13498243,13640676,14165431,14254610,14643995,15894842,15996428,

Use the dbms_qopatch package:

with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
select x.*
from a,
     xmltable('InventoryInstance/patches/patch'
        passing a.patch_output
        columns
           patch_id number path 'patchID',
           patch_uid number path 'uniquePatchID',
           description varchar2(80) path 'patchDescription',
           applied_date varchar2(30) path 'appliedDate',
           sql_patch varchar2(8) path 'sqlPatch',
           rollbackable varchar2(8) path 'rollbackable'
     ) x;

Apparently there is a bug when the query is executed; read "datapatch issue : ORA-27477: "SYS"."LOAD_OPATCH_INVENTORY_1" already exists (Doc ID 1934882.1)".

Nulls in composite keys

Yann Neuhaus - Tue, 2016-07-12 06:36

Comparisons with NULL can be misleading, and it’s even worse for unique constraint validation. Having partial nulls in a composite key can be tricky because the ANSI SQL specification is not very easy to understand, and implementations can differ between RDBMS. Here is an example with a composite unique key and a foreign key on Oracle.

Unique constraint

I create a table with a composite unique constraint:
SQL> create table TABLE1 (a char, b char, unique(a,b));
Table TABLE1 created.

I can insert a row with a=’X’ and b=’X':
SQL> insert into TABLE1 values ('X','X');
1 row inserted.

I cannot insert the same row:
SQL> insert into TABLE1 values ('X','X');
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C0015464) violated

I insert another row with same value for column a but different value for column b:
SQL> insert into TABLE1 values ('X','Y');
1 row inserted.

And another row with same value for column a but a null for column b:
SQL> insert into TABLE1 values ('X',null);
1 row inserted.

However, I cannot insert the same row a second time:
SQL> insert into TABLE1 values ('X',null);
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C0015464) violated

If you look at documentation, this is documented as:
Because of the search mechanism for unique key constraints on multiple columns, you cannot have identical values in the non-null columns of a partially null composite unique key constraint.

It looks like an implementation reason (the search mechanism is the index that enforces the unique constraint). What is documented in SQL-92?
A unique constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns.

How to interpret this? We cannot insert two (‘X’,null) rows because that would be two rows with the same non-null value (a=’X’), and the Oracle implementation is compliant.

Or is it? We can also read the definition as saying that the unique constraint is violated only when we find rows that have all non-null values and those values are the same. This is what MySQL and PostgreSQL do: they accept duplicates when there is at least one null.
This is also what I find more intuitive: I usually consider NULL to be a value that is not known at insert time but that will be assigned later during the lifecycle of the row. Thus, I expect to be able to insert rows where there is a null and check the constraint only when all columns have a value.
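This non-Oracle behaviour is easy to reproduce with SQLite from Python’s standard library, which also treats NULLs as distinct in unique constraints — a quick sketch (again, this shows the MySQL/PostgreSQL-style behaviour, not Oracle’s):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t1 (a text, b text, unique(a, b))")

# Duplicate fully non-null keys are rejected, as in every RDBMS
con.execute("insert into t1 values ('X', 'X')")
try:
    con.execute("insert into t1 values ('X', 'X')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# But a partially null key can be inserted twice -- unlike Oracle
con.execute("insert into t1 values ('X', null)")
con.execute("insert into t1 values ('X', null)")
print(con.execute("select count(*) from t1 where b is null").fetchone()[0])  # → 2
```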

It is probably an implementation choice by Oracle, which stores nulls as zero-length strings and therefore cannot have two identical entries in a unique index.

Now inserting a row where a is null and b is null:
SQL> insert into TABLE1 values (null,null);
1 row inserted.

And because that does not violate the rule whichever way we read it (there are no non-null values at all here, so no two rows have the same non-null values), I can insert a second one:
SQL> insert into TABLE1 values (null,null);
1 row inserted.

This is documented as
Unless a NOT NULL constraint is also defined, a null always satisfies a unique key constraint

About implementation, there is no problem because full null entries are not stored in the index. They are stored in bitmap indexes, but bitmap indexes cannot be used to enforce a unique constraint.

In summary, here is what can be stored on a table where (a,b) is unique but nullable:

SQL> select rownum,TABLE1.* from TABLE1;
    ROWNUM A B
---------- - -
1 X X
2 X Y
3 X

Foreign key

Now that I have a unique key, I can reference it:
SQL> create table TABLE2 (a char, b char, foreign key(a,b) references TABLE1(a,b));
Table TABLE2 created.

Yes. You don’t need to reference the primary key. Any unique key, even with nullable columns, can be referenced.

I can insert a row where parent exists:
SQL> insert into TABLE2 values('X','X');
1 row inserted.

As I’ve no unique key on the child, it’s a many-to-one relationship:
SQL> insert into TABLE2 values('X','X');
1 row inserted.

I also have a parent with a=’X’ and b=’Y':
SQL> insert into TABLE2 values('X','Y');
1 row inserted.

But I’ve no parent with a=’Y':
SQL> insert into TABLE2 values('Y','Y');
ERROR at line 1:
ORA-02291: integrity constraint (SYS.SYS_C0015465) violated - parent key not found

So far so good. I said that I have a many-to-one relationship, but it’s really many to one-or-zero because my columns are nullable:
SQL> insert into TABLE2 values(null,null);
1 row inserted.

So far so good. But I have a composite key with nullable columns here, and I can insert a row where a=’X’ and b is null:
SQL> insert into TABLE2 values('X',null);
1 row inserted.

But do you think that all non-null parent values must exist?
SQL> insert into TABLE2 values('Y',null);
1 row inserted.

Once again, this is documented as:
If any column of a composite foreign key is null, then the non-null portions of the key do not have to match any corresponding portion of a parent key.

And this is what is specified in SQL-92:
If no <match type> was specified then, for each row R1 of the referencing table, either at least one of the values of the referencing columns in R1 shall be a null value, or the value of each referencing column in R1 shall be equal to the value of the corresponding referenced column in some row of the referenced table.
There is more detail about the other match types in the Oracle Development Guide.

That may look strange but, still thinking of NULLs as unknown values, you can consider that the constraint cannot be validated until all values are known.
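SQLite (with its foreign-key pragma enabled) implements the same ANSI “simple match” rule, so the surprising insert can also be reproduced from Python’s standard library; a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("pragma foreign_keys = on")
con.execute("create table parent (a text, b text, unique(a, b))")
con.execute("""create table child (a text, b text,
               foreign key (a, b) references parent (a, b))""")
con.execute("insert into parent values ('X', 'X')")

# A fully non-null key must match a parent row...
try:
    con.execute("insert into child values ('Y', 'Y')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# ...but as soon as one column is null, the whole key is exempt,
# even though there is no parent row with a = 'Y' at all
con.execute("insert into child values ('Y', null)")
print(con.execute("select count(*) from child").fetchone()[0])  # → 1
```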

Here is what I was able to insert into my table even with no a=’Y’ in the parent:

SQL> select rownum,TABLE2.* from TABLE2;
    ROWNUM A B
---------- - -
1 X X
2 X X
3 X Y
4 X
6 Y

So what?

Having nulls in a composite unique key or foreign key can be misleading, so it’s better to ensure that what you define fits what you expect. It’s probably better to prevent partial nulls in the foreign key (a check constraint can ensure that if one column is null then all columns must be null), or to have an additional referential integrity constraint which ensures that you can set only the allowed values for a subset of columns (in our case, a table with column a as primary key that we can reference).


Cet article Nulls in composite keys est apparu en premier sur Blog dbi services.

Union All MV

Jonathan Lewis - Tue, 2016-07-12 04:10

In an article I wrote last week about Bloom filters disappearing as you changed a SELECT to a (conventional) INSERT/SELECT, I suggested using the subquery_pruning() hint to make the optimizer fall back to an older strategy of partition pruning. My example showed this working with a range-partitioned table, but one of the readers reported a problem when trying to apply the strategy to a composite range/hash partitioned table, and followed this up with an execution plan of a select statement with a Bloom filter where the subquery_pruning() hint didn’t introduce subquery pruning when the select was used for an insert.

A couple of standard ways to work around this problem are to embed the select statement in a pipelined function so that we can “insert into table select from table(pipeline_function)”, or to write a PL/SQL block that opens a cursor to do a select with bulk collect and loops through an array insert. The overhead in both cases is likely to be relatively small (especially when compared with the overhead of failing to filter). In this case, however, the reader suggested that maybe the problem appeared because the driving table (i.e. the one that would have been queried to derive the pruning values) was actually an inline view with a union all.

After modifying my working model to try a couple of different tests I was inclined to agree. Since the two tables in the view looked as if they were likely to be relatively tiny and static I suggested that it would be safe to create a materialized view defined to “refresh on commit” and then use the materialized view explicitly in the query. This, finally, brings me to the point of today’s article – how do you create such a materialized view ?

I’m going to start by creating a couple of small base tables from a familiar object:

create table tt as select * from all_objects where object_type = 'TABLE';
create table tv as select * from all_objects where object_type = 'VIEW';

alter table tt add constraint tt_pk primary key (object_id);
alter table tv add constraint tv_pk primary key (object_id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'TT',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'TV',
                method_opt       => 'for all columns size 1'
        );
end;
/

Assume, now, that I need an inline view that is interested in the things you will recognise from the above as the tables owned by OUTLN (which will appear in tt) and the views owned by SYSTEM (which will appear in tv) – on the system I’m playing with at the moment that’s three rows from each of the two tables. Here’s the SQL I’d put into the inline view:

select
        object_id, object_type, object_name
from    tt
where   owner = 'OUTLN'
union all
select
        object_id, object_type, object_name
from    tv
where   owner = 'SYSTEM'

Since this view won’t give me partition pruning I have to replace it with a table and because I want to ensure that the table is always up to date I have to generate it as the container for a materialized view with refresh on commit. First I need some materialized view logs so that I can do a fast refresh:

create materialized view log on tt
        with rowid, primary key
        (object_type, object_name, owner)
including new values
;

create materialized view log on tv
        with rowid, primary key
        (object_type, object_name, owner)
including new values
;

I’ve included the primary key in the definition because I happen to want the object_id column in the log – but I could just have included it as a column in the filter list. I’ve included the rowid in the definition because Oracle needs the rowid if it’s going to be able to do a fast refresh. I can now create a materialized view:

create materialized view mv_t
        build immediate
        refresh fast on commit
as
select
        'T' mv_marker,
        rowid rid,
        object_id, object_type, object_name
from    tt
where   owner = 'OUTLN'
union all
select
        'V' mv_marker,
        rowid rid,
        object_id, object_type, object_name
from    tv
where   owner = 'SYSTEM'
;

I’ve taken the option to “build immediate” and specified – most importantly for my needs – “refresh on commit”. You’ll notice I haven’t chosen to “enable query rewrite”; for the purposes of this demo I don’t need that particular feature.

There are two key features to the materialized view that are a little special – first I’ve included the rowid of each source table as a named column in the materialized view; as I mentioned above Oracle will not allow the view to be fast refreshable without the rowid. The second feature is that I’ve introduced a literal value into the view which I’ve named mv_marker; this makes it easy to see which table a row comes from when you query the materialized view … and Oracle needs to see this.

That’s the job done. Just to demonstrate that my materialized view is working as required, here’s a little more SQL (followed by the output):

select * from mv_t;

delete from tt where object_name = 'OL$';
update tv set object_name = 'PRODUCT_PRIVILEGES' where object_name = 'PRODUCT_PRIVS';


select * from mv_t;


M RID                 OBJECT_ID OBJECT_TYPE         OBJECT_NAME
- ------------------ ---------- ------------------- --------------------------------
T AAA6tXAAFAAAAEBAAI        471 TABLE               OL$
T AAA6tXAAFAAAAEBAAJ        474 TABLE               OL$HINTS
T AAA6tXAAFAAAAEBAAK        478 TABLE               OL$NODES
V AAA6tWAAFAAAACuAA7      14233 VIEW                PRODUCT_PRIVS

6 rows selected.

2 rows deleted.

1 row updated.

Commit complete.

M RID                 OBJECT_ID OBJECT_TYPE         OBJECT_NAME
- ------------------ ---------- ------------------- --------------------------------
T AAA6tXAAFAAAAEBAAJ        474 TABLE               OL$HINTS
T AAA6tXAAFAAAAEBAAK        478 TABLE               OL$NODES

5 rows selected.

If you’re wondering why you see “2 rows deleted” but a reduction by just one row in the final output, remember that we’re deleting from table tt but the materialized view holds information about just the subset of tables owned by OUTLN – I happen to have a row in tt that says SYSTEM also owns a table called OL$.


If you have trouble working out why your attempts to create a particular materialized view aren’t working the dbms_mview package has a procedure called explain_mview that may give you enough ideas to work out what you’re doing wrong. For example, here’s how I could find out that I needed a literal column to tag the two parts of my union all view:


@?/rdbms/admin/utlxmv.sql

begin
        dbms_mview.explain_mview (
                q'{
                create materialized view mv_t
                        build immediate
                        refresh fast
                        enable query rewrite
                as
                select  -- 'T' mv_marker,
                        rowid rid,
                        object_id, object_type, object_name from tt
                union all
                select  -- 'V' mv_marker,
                        rowid rid,
                        object_id, object_type, object_name from tv
                }'
        );
end;
/

column cap_class noprint
column related_text format a7
column short_msg format a72
break on cap_class skip 1

select
        substr(capability_name,1,3) cap_class,
        capability_name, possible, related_text, substr(msgtxt,1,70) short_msg
from
        mv_capabilities_table
where
        mvname = 'MV_T'
order by
        substr(capability_name,1,3), related_num, seq
;

The first line calls a supplied script to create a table called mv_capabilities_table in the current schema. The call to dbms_mview.explain_mview passes the text of a “create materialized view” statement to the procedure (there are a couple of variations possible) then, after a couple of SQL*Plus formatting commands I’ve queried the table to see Oracle’s analysis for the statement. (You can tag each call to this procedure using a second parameter that I haven’t bothered to use.)

Here’s the output for the failed attempt above, in which the literals that tag the two parts of the UNION ALL are commented out:

CAPABILITY_NAME                POS RELATED SHORT_MSG
------------------------------ --- ------- ------------------------------------------------------------------------
PCT_TABLE                      N   TT      relation is not a partitioned table
PCT_TABLE_REWRITE              N   TT      relation is not a partitioned table
PCT_TABLE                      N   TV      relation is not a partitioned table
PCT_TABLE_REWRITE              N   TV      relation is not a partitioned table
PCT                            N

REFRESH_COMPLETE               Y
REFRESH_FAST                   N
REFRESH_FAST_AFTER_INSERT      N           the materialized view does not have a UNION ALL marker column
REFRESH_FAST_AFTER_INSERT      N           set operator in a context not supported for fast refresh
REFRESH_FAST_AFTER_ONETAB_DML  N           see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ANY_DML     N           see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT               N           PCT FAST REFRESH is not possible if query has set operand query blocks

REWRITE                        Y
REWRITE_PARTIAL_TEXT_MATCH     N           set operator encountered in mv
REWRITE_GENERAL                N           set operator encountered in mv
REWRITE_PCT                    N           general rewrite is not possible or PCT is not possible on any of the d

17 rows selected.

The query manages to split the output into three sections (but that depends on a side-effect in a way that I would normally call bad design): elements relating to “Partition Change Tracking”, elements relating to “Materialized View Refresh” and elements relating to “Query Rewrite”. You’ll notice that the rewrite section tells me that (even though I haven’t chosen to enable it) my view could be enabled to do query rewrite.

Critically, though, this version of the materialized view can’t be fast refreshed, and we see the key reason in the first “Refresh fast after insert” line: “the materialized view does not have a UNION ALL marker column”. That’s how I know I have to include a literal column that has a different value in each of the two parts of the UNION ALL.

Links for 2016-07-11 [del.icio.us]

Categories: DBA Blogs

Decide Query performance

Tom Kyte - Mon, 2016-07-11 23:06
How do we determine which query runs slower? That is, how can we say that a query takes more time and requires performance tuning?
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator