Feed aggregator

Oracle Looking to Buy Accenture? Stranger Things Have Happened.

Abhinav Agarwal - Fri, 2017-04-07 07:30
Image credit: pixels.com
The Register reported that Oracle may be exploring the "feasibility of buying multi-billion dollar consultancy Accenture."

To summarize the numbers involved here, Oracle had FY16 revenues of $37 billion, net income of $8.9 billion, and a market cap of $180 billion.

On the other hand, Accenture had FY16 revenues of US$34.8 billion, net income of $4.1 billion, and a market cap of $77 billion.

Some questions that come to mind:
  1. Why? Oracle buying NetSuite in 2016 made sense. Oracle buying Salesforce would make even more sense. Oracle buying a management consulting and professional services company, and that too one with more than a quarter million employees, on the face of it, makes little sense. Would it help Oracle leapfrog Amazon's AWS cloud business? Would it help Oracle go after a new market segment? The answers are not clear, at all.
  2. Who would be in charge of this combined entity? Both have similar revenues, though Accenture has a market cap that is less than half Oracle's and a workforce roughly three times Oracle's. The cultural meshing itself would prove to be a challenge. Mark Hurd, one of Oracle's two CEOs (the other is Safra Catz, a former investment banker), has experience running a large, heterogeneous organization. Prior to his stint at Oracle, he was credited with making the HP and Compaq merger work. At Oracle, however, he has not run software product development, which is run by Thomas Kurian, who reports to Larry Ellison, not Hurd. A merger between Oracle and Accenture would place an even greater emphasis on synergies between Oracle's software division and Accenture's consulting business.
  3. Oracle would need to spend close to $100 billion to buy Accenture, if it does. How would it finance it, even assuming it spends all its $68 billion in cash to do so? Keep in mind that its largest acquisition was in the range of $10 billion. The financial engineering would be staggering. It helps that it has a former investment banker as one of two CEOs.
  4. Will Oracle make Accenture focus on the Oracle red stack of software products and applications - both on-premise and in the cloud? If yes, it would need a much smaller-sized workforce than Accenture has. That in turn would diminish the value of Accenture to Oracle, and make the likely sticker price of $100 billion look even costlier.
  5. Is Oracle looking to become the IBM of the twenty-first century? It's certainly been a public ambition of Larry Ellison. In 2009, he said he wanted to pattern Oracle after Thomas Watson Jr.'s IBM, "combining both hardware and software systems." If Oracle keeps Accenture as a business unit free to pursue non-Oracle deals, does it mean Oracle is keen on morphing into a modern-day avatar of IBM and IBM Global Services, offering hardware, software, and professional services - all under one red roof?
  6. Is Oracle serious about such a merger? An acquisition of this size seems more conjecture than a real possibility, at least as of now. One is reminded of the time in 2003 when Microsoft explored the possibility of buying SAP. Those discussions went nowhere, and the idea was dropped. Combining two behemoths is no easy task, even for a company like Oracle, which has stitched together almost 50 acquisitions in just the last five years.
  7. If such an acquisition did go through, there would likely be few anti-trust concerns. That's a big "if".
  8. Stranger things have happened in the software industry, like HP buying Autonomy.
  9. I hope the Register piece was not an example of an early April Fool's joke.
(HT Sangram Aglave whose LinkedIn post alerted me to this article)

I first published this in LinkedIn Pulse on April 1, 2017.

© 2017, Abhinav Agarwal.

Oracle E-Business Suite 12.2 Web Services Security: Authentication and Authorization

This is the seventh posting in a blog series summarizing the new Oracle E-Business Suite 12.2 Mobile and web services functionality and recommendations for securing them.

Once traffic is accepted and passed by the URL Firewall, WebLogic initiates the standard Oracle E-Business Suite authentication and authorization procedures. Web services are authenticated and authorized no differently than end users are.

Authorization rules for web services are relatively easy to configure in that all web services are defined as functions. The Oracle E-Business Suite's function security scheme and rules engine apply the same to GUI forms as to web services. In other words, the table APPLSYS.FND_FORM_FUNCTIONS defines all the forms that users use as well as all the web services deployed. Menus are then built referencing these functions, and Oracle E-Business Suite user accounts (APPLSYS.FND_USER) are given responsibilities with the menus of functions. These user accounts can be staff members or can be generic accounts (e.g. to support specific web services). Ensuring that only appropriate users and responsibilities can call and use specific web services is as critical a step as ensuring that only appropriate users can use specific forms.
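For example, a review query along the following lines can show which users and responsibilities are able to reach a given web service function. This is only a sketch: it assumes the standard APPS synonyms FND_FORM_FUNCTIONS, FND_MENU_ENTRIES, FND_RESPONSIBILITY, FND_USER_RESP_GROUPS and FND_USER, it ignores nested sub-menus (which would require walking FND_MENU_ENTRIES hierarchically), and the function name is a placeholder.

SELECT u.user_name, r.responsibility_key, f.function_name
FROM   fnd_form_functions f,
       fnd_menu_entries m,
       fnd_responsibility r,
       fnd_user_resp_groups urg,
       fnd_user u
WHERE  f.function_name = '&web_service_function'   -- placeholder for the web service function to review
AND    m.function_id = f.function_id
AND    r.menu_id = m.menu_id
AND    urg.responsibility_id = r.responsibility_id
AND    u.user_id = urg.user_id
AND    (urg.end_date IS NULL OR urg.end_date > SYSDATE);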

There are two authentication options for web services: local FND_USER passwords and tokens (SAML sender-vouches tokens or E-Business Suite session IDs). Whichever is used, ensure that accounts are not inappropriately over-privileged and that passwords and tokens are not widely known and/or shared.
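A quick sanity check in that spirit, again only a sketch that assumes the standard FND_USER columns, is to list active accounts along with the age of their passwords so that generic web service accounts with stale or shared credentials stand out:

SELECT user_name, last_logon_date, password_date
FROM   fnd_user
WHERE  end_date IS NULL OR end_date > SYSDATE
ORDER  BY password_date;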

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Web Services, DMZ/External, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Machine learning: Getting started with random forests in R

Amis Blog - Fri, 2017-04-07 02:08

According to Gartner, machine learning is on top of the hype cycle, at the peak of inflated expectations. There is a lot of misunderstanding about what machine learning actually is and what can be done with it.

Machine learning is not as abstract as one might think. If you want to get value out of known data and do predictions for unknown data, the most important challenge is asking the right questions and of course knowing what you are doing, especially if you want to optimize your prediction accuracy.

In this blog I’m exploring an example of machine learning: the random forest algorithm. I’ll provide an example of how you can use this algorithm to do predictions. In order to implement a random forest, I’m using R with the randomForest library and the iris dataset which is provided by the R installation.

The Random Forest

A popular machine learning method is decision tree learning. Decision tree learning comes closest to serving as an off-the-shelf procedure for data mining (see here). You do not need to know much about your data in order to be able to apply this method. The random forest algorithm is an example of a decision tree learning algorithm.

Random forest in (very) short

How it works exactly takes some time to figure out. If you want to know the details, I recommend watching some YouTube recordings of lectures on the topic. Some of the most important features of this method:

  • A random forest is a method to do classifications based on features. This implies you need to have features and classifications.
  • A random forest generates a set of classification trees (an ensemble) based on splitting a subset of features at locations which maximize information gain. This method is thus very suitable for distributed parallel computation.
  • Information gain can be determined by how accurate the splitting point is in determining the classification. Data is split based on the feature at a specific point and the classification on the left and right of the splitting point are checked. If for example the splitting point splits all data of a first classification from all data of a second classification, the confidence is 100%; maximum information gain.
  • A splitting point is a branching in the decision tree.
  • Splitting points are based on values of features (this is fast)
  • A random forest uses randomness to determine which features to look at and randomness in the data used to construct each tree. Randomness helps reduce compute time.
  • Each tree gets to see a different dataset. This is called bagging.
  • Tree classification confidences are summed and averaged. Products of the confidences can also be taken. Individual trees have a high variance because they have only seen a small subset of the data. Averaging helps create a better result.
  • With correlated features, strong features can end up with low scores and the method can be biased towards variables with many categories.
  • A random forest does not perform well with unbalanced datasets; datasets where some classes occur much more often than others.
Use case for a random forest

Use cases for a random forest include, for example, text classification such as spam detection. Whether certain words are present in a text can be used as a feature, and the classification would be spam/not spam, or something more specific such as news, personal, etc. Another interesting use case lies in genetics: determining if the expression of certain genes is relevant for a specific disease. This way you can take someone's DNA and determine, with a certain confidence, whether that person will contract a disease. Of course you can also take other features into account such as income, education level, smoking, age, etc.

Why R

I decided to start with R. Why? Mainly because it is easy. There are many libraries available and there is a lot of experience present worldwide; a lot of information can be found online. R however also has some drawbacks.

Some benefits

  • It is free and easy to get started. Hard to master though.
  • A lot of libraries are available. R package management works well.
  • R has a lot of users. There is a lot of information available online.
  • R is powerful in that, if you know what you are doing, you need little code to do it.

Some challenges

  • R loads datasets in memory
  • R is not the best at doing distributed computing but can do so. See for example here
  • The R syntax can be a challenge to learn
Getting the environment ready

I decided to install a Linux VM to play with. You can also install R and R studio (the R IDE) on Windows or Mac. I decided to start with Ubuntu Server. I first installed the usual things like a GUI. Next I installed some handy things like a terminal emulator, Firefox and stuff like that. I finished with installing R and R-studio.

So first download and install Ubuntu Server (next, next, finish)

sudo apt-get update
sudo apt-get install aptitude

# Install a GUI
sudo aptitude install --without-recommends ubuntu-desktop

# Install the VirtualBox Guest Additions
sudo apt-get install build-essential linux-headers-$(uname -r)
# Install the guest additions (first mount the ISO image which is part of VirtualBox, then run the installer)

# Install the below to make Dash (Unity search) work
# http://askubuntu.com/questions/125843/dash-search-gives-no-result
sudo apt-get install unity-lens-applications unity-lens-files

# A shutdown button might come in handy
sudo apt-get install indicator-session

# Might come in handy: browser and fancy terminal application
sudo apt-get install firefox terminator

# For the installation of R I used the following as inspiration: https://www.r-bloggers.com/how-to-install-r-on-linux-ubuntu-16-04-xenial-xerus/
echo "deb http://cran.rstudio.com/bin/linux/ubuntu xenial/" | sudo tee -a /etc/apt/sources.list
gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -
sudo apt-get update
sudo apt-get install r-base r-base-dev

# For the installation of R-studio I used: https://mikewilliamson.wordpress.com/2016/11/14/installing-r-studio-on-ubuntu-16-10/

wget http://ftp.ca.debian.org/debian/pool/main/g/gstreamer0.10/libgstreamer0.10-0_0.10.36-1.5_amd64.deb
wget http://ftp.ca.debian.org/debian/pool/main/g/gst-plugins-base0.10/libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo dpkg -i libgstreamer0.10-0_0.10.36-1.5_amd64.deb
sudo dpkg -i libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo apt-mark hold libgstreamer-plugins-base0.10-0
sudo apt-mark hold libgstreamer0.10

wget https://download1.rstudio.org/rstudio-1.0.136-amd64.deb
sudo dpkg -i rstudio-1.0.136-amd64.deb
sudo apt-get -f install

Doing a random forest in R

R needs some libraries to do random forests and create nice plots. First give the following commands:

#to do random forests
install.packages("randomForest")

#to work with R markdown language
install.packages("knitr")

#to create nice plots
install.packages("ggplot2")

To get help on a library, you can use the following command, which gives you more information about the library.

library(help = "randomForest")

 Of course, the randomForest implementation does have some specifics:

  • it uses the reference implementation based on CART trees
  • it is biased in favor of continuous variables and variables with many categories

A simple program to do a random forest looks like this:

#load libraries
library(randomForest)
library(knitr)
library(ggplot2)

#random numbers after the set.seed(10) are reproducible if I do set.seed(10) again
set.seed(10)

#create a training sample of 45 rows from the iris dataset. replace = FALSE means a row can appear only once in the sample (sampling without replacement).
idx_train <- sample(1:nrow(iris), 45, replace = FALSE)

#create a logical vector flagging the rows which are not in the training sample
tf_test <- !1:nrow(iris) %in% idx_train

#the column ncol(iris) is the last column of the iris dataset. this is not a feature column but a classification column
feature_columns <- 1:(ncol(iris)-1)

#generate a randomForest.
#use the feature columns from training set for this
#iris[idx_train, ncol(iris)] indicates the classification column
#importance=TRUE indicates the importance of features in determining the classification should be determined
#y = iris[idx_train, ncol(iris)] gives the classifications for the provided data
#ntree=1000 indicates 1000 random trees will be generated
model <- randomForest(iris[idx_train, feature_columns], y = iris[idx_train, ncol(iris)], importance = TRUE, ntree = 1000)

#print the model
#printing the model shows how the sample dataset is distributed among classes. The sum of the sample classifications is 45, which is the sample size. The OOB rate is the 'out of bag' estimate of the overall classification error.

print(model)

#we use the model to predict the class based on the feature columns of the dataset (minus the sample used to train the model).
response <- predict(model, iris[tf_test, feature_columns])

#determine the number of correct classifications
correct <- response == iris[tf_test, ncol(iris)]

#determine the percentage of correct classifications
sum(correct) / length(correct)

#print a variable importance (varImp) plot of the randomForest
varImpPlot(model)

#in this dataset the petal length and width are more important measures to determine the class than the sepal length and width.

The post Machine learning: Getting started with random forests in R appeared first on AMIS Oracle and Java Blog.

Workaround for ADF BC View Object Attribute Order Problem in JDeveloper 12c

Andrejus Baranovski - Thu, 2017-04-06 21:23
I'm sure almost every ADF developer has faced this issue sooner or later. When you create a VO based on an EO, JDEV gives you an alphabetically ordered list of attributes. As a result, the order of attributes in the EO and VO becomes different. While this doesn't influence runtime functionality, it becomes quite annoying for application maintenance. It is hard to match attributes between the VO and EO; the developer needs to search through the list to locate the attribute he is looking for. But there is a workaround, and I will describe it here.

Let's see the problem first. Assume we have an Employees EO, where attributes are generated in the same order as the DB table columns:


Now if you use the VO creation wizard, the list of attributes is displayed in alphabetic order. This is frustrating:


Without any other choice, the developer would select the EO attributes one by one, in the order in which they are listed:


But wait, there is a workaround. Don't select the attributes one by one; instead, select only the EO item itself. Then use the Add button to add the entire list of EO attributes:


This time the attributes will be added in the original order, as it is set in the EO.


Enjoy this small but useful hint.

OUAF 4.3.0.4.0 Release Summary

Anthony Shorten - Thu, 2017-04-06 20:28

The next release of the Oracle Utilities Application Framework (4.3.0.4.0) is in the final stages of implementation across our product lines over the next few months. This release adds exciting new features and enhances existing ones for our cloud and non-cloud implementations. Here is a summary of the key features of the new Oracle Utilities Application Framework.

Main Features

CMA Improvements

The following highlights some improvements to CMA processing.

Ad-hoc Migration Requests

A new migration request BO has been provided to allow for building ‘ad-hoc’ migration requests using a list of specific objects.  It’s called the “entity list” migration request.

A special zone is included to find records to include in the migration request.  This zone allows you to choose a maintenance object that is configured for CMA and enter search criteria to get a list of objects to choose.  The zone supports linking one or more objects for the same MO en masse.


Once records are linked, a zone allows you to view the existing records and remove any if needed.


Grouping Migration Requests

Migration requests may now be grouped so that you can maintain more granular migration requests that get grouped together to orchestrate a single export of data for a ‘wholesale’ migration.  The framework supplies a new ‘group’ migration request that includes other migration requests that logically group migration plans.  Edge products or implementations may include this migration request into their own migration request.


Mass Actions During Migration Import Approval

When importing data sets, a user may now perform mass actions on migration objects to approve or reject or mark as ‘needs review’.


Groovy Library Support

Implementers may now define a Groovy library script for common functionality that may be included in other Groovy scripts.

There’s a new script type:


Scripts of this type define a Groovy Library Interface step type to list the Groovy methods defined within the script that are available for use by other scripts.


Additional script steps using the Groovy Member step type are used to define the Groovy code that the script implements.

Groovy scripts that choose to reference the Groovy Library Script can use the createLibraryScript method provided by the system to instantiate the library interface.

Search Menu Capability

A new option in the toolbar allows a user to search for a page rather than using the menu to find the desired page.


All menu items whose label matches what the user types are shown (as you type):


Additional Features

The following is a subset of additional features that are included.   Refer to the published release notes for more details.

  • URI validation / substitution. Any place where a URI is configured can now use substitution variables to support transparency across environments. The fully substituted value can also be validated against a whitelist for added security.
  • Minimizing the dashboard suppresses refresh. This allows a user to improve response when navigating throughout the system by delaying the refresh of zones in the dashboard while it is minimized.
  • New support for UI design. Input maps may now support half width sections.  Both display and input maps may support “floating” half width sections that fill in available space on the UI based on what is displayed.
  • Individual batch controls may now be secured independently.
  • Ad-hoc batch parameters are supplied to all batch related plug-in spots. Additionally, plug-in driven batch programs may now support ad-hoc parameters.
  • Elements in a schema that include the private=true attribute will no longer appear in the WSDL of any Inbound Web Service based upon that schema.

E-Business Suite Technology Stack Blog in Migration

Steven Chan - Thu, 2017-04-06 18:05

This blog is being migrated to a new blogging platform (at last!). This is our fifth migration since 2006, so I expect a bit of reorganization of content.  We're going on hiatus for a bit until the dust settles.

Heads up: all comments posted from now to the new blog's appearance will be lost. If you post a comment that's gotten lost in the transition, please re-post when the new blog is up and running.


Categories: APPS Blogs

12cR2 DML monitoring and Statistics Advisor

Yann Neuhaus - Thu, 2017-04-06 15:40

Monitoring DML to get an idea of the activity on our tables is not new. The number of inserts/updates/deletes/truncates since the last statistics gathering is tracked automatically. The statistics gathering job uses it to list and prioritize tables that need fresh statistics. This works for slow changes on tables. In 12.2 we have the Statistics Advisor that goes further, with a rule that detects volatile tables:

SQL> select * from V$STATS_ADVISOR_RULES where rule_id=14;
 
RULE_ID NAME RULE_TYPE DESCRIPTION CON_ID
------- ---- --------- ----------- ------
14 LockVolatileTable OBJECT Statistics for objects with volatile data should be locked 0

But to detect volatile tables, you need to track DML frequency at a finer grain. Let's investigate what is new here in 12.2.

Statistics Advisor tracing

DBMS_STATS has a trace mode that is enabled as a global preference. It is not documented, but it works with powers of two. 12.1.0.2 introduced 262144 to trace system statistics gathering, so let's try the next one: 524288.

SQL> exec dbms_stats.set_global_prefs('TRACE',0+524288)
PL/SQL procedure successfully completed.

After a while, I grepped my trace directory for DBMS_STATS and found the MMON slave trace (ORCLA_m001_30694.trc here):

*** 2017-04-06T14:10:11.979283+02:00
*** SESSION ID:(81.2340) 2017-04-06T14:10:11.979302+02:00
*** CLIENT ID:() 2017-04-06T14:10:11.979306+02:00
*** SERVICE NAME:(SYS$BACKGROUND) 2017-04-06T14:10:11.979309+02:00
*** MODULE NAME:(MMON_SLAVE) 2017-04-06T14:10:11.979313+02:00
*** ACTION NAME:(Flush KSXM hash table action) 2017-04-06T14:10:11.979317+02:00
*** CLIENT DRIVER:() 2017-04-06T14:10:11.979320+02:00
 
...
 
DBMS_STATS: compute_volatile_flag: objn=74843, flag=0, new_flag=0, inserts_new=619, updates_new=0, deletes_new=0, inserts_old=619, updates_old=0, deletes_old=0, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=0
DBMS_STATS: compute_volatile_flag: objn=74862, flag=0, new_flag=0, inserts_new=4393, updates_new=0, deletes_new=0, inserts_old=4393, updates_old=0, deletes_old=0, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=0
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=4861477, updates_new=584000, deletes_new=13475192, inserts_old=3681477, updates_old=466000, deletes_old=12885192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1

Those entries appear every hour. Obviously, they are looking at some tables (by their object_id) and compute a new flag from the existing flag and statistics about new and old DML (inserts, updates, deletes). There's a mention of row count and stale percentage. Obviously, the volatility of tables is computed every hour (mentions gather=NO_GATHER) or when we gather statistics (gather=GATHER). This goes beyond the DML monitoring from previous releases, but is probably based on it.

Testing some DML

SQL> delete from DEMO;
10000 rows deleted.
 
SQL> insert into DEMO select rownum from xmltable('1 to 10000');
10000 rows created.
 
SQL> commit;
Commit complete.
 
SQL> select count(*) numrows from DEMO;
NUMROWS
----------
10000
 
SQL> update demo set n=n+1 where rownum <= 2000;
 
2000 rows updated.
 
SQL> insert into DEMO select rownum from xmltable('1 to 10000');
 
10000 rows created.

I deleted 10000 rows and inserted 10000, with a commit at the end. Then I updated 2000 rows and inserted 10000 again, without committing.

x$ksxmme

DML monitoring is done in memory. In order to see the changes in DBA_TAB_MODIFICATIONS, we need to flush it. But this in-memory information is visible in an X$ fixed view:

SQL> select * from X$KSXMME where objn=&object_id;
old 1: select * from X$KSXMME where objn=&object_id
new 1: select * from X$KSXMME where objn= 74867
 
ADDR INDX INST_ID CON_ID CHUNKN SLOTN OBJN INS UPD DEL DROPSEG CURROWS PAROBJN LASTUSED FLAGS
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
00007F526E0B81F0 0 1 0 64 256 74867 20000 2000 10000 0 2350000 0 1491467123 128

Here are my 10000 deletes + 10000 inserts + 2000 updates + 10000 inserts. Of course the uncommitted ones are there too, because DML tracking does not keep the numbers per transaction, so it cannot later adjust for what was committed or rolled back.

The proof is that when I rollback, the numbers do not change:

SQL> rollback;
Rollback complete.
 
SQL> select * from X$KSXMME where objn=&object_id;
old 1: select * from X$KSXMME where objn=&object_id
new 1: select * from X$KSXMME where objn= 74867
 
ADDR INDX INST_ID CON_ID CHUNKN SLOTN OBJN INS UPD DEL DROPSEG CURROWS PAROBJN LASTUSED FLAGS
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
00007F526DDF47F8 0 1 0 64 256 74867 20000 2000 10000 0 2350000 0 1491467123 128

Yes, there is an estimation of the current number of rows here, in real-time. This is used to compare the changes with the total number, but you can use it to see the progress of a big transaction, giving a view of uncommitted changes.
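For example, a query like the following gives a rough, real-time view of in-flight DML per table. This is only a sketch: it must be run as SYS (X$ views are not exposed otherwise), the column names are those shown in the X$KSXMME output above, and the join to DBA_OBJECTS is added only for readability.

select o.owner, o.object_name,
       m.ins inserts, m.upd updates, m.del deletes, m.currows estimated_rows
from   x$ksxmme m, dba_objects o
where  o.object_id = m.objn
order  by m.ins + m.upd + m.del desc;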

sys.mon_mods_all$

The table sys.mon_mods_all$ is what is behind DBA_TAB_MODIFICATIONS (not exactly, but that will be for another blog post) and you have to flush what’s in memory to see the latest changes there:

SQL> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
 
SQL> select * from sys.mon_mods_all$ where obj#=&object_id;
old 1: select * from sys.mon_mods_all$ where obj#=&object_id
new 1: select * from sys.mon_mods_all$ where obj#= 74867
 
OBJ# INSERTS UPDATES DELETES TIMESTAMP FLAGS DROP_SEGMENTS
---------- ---------- ---------- ---------- ------------------ ---------- -------------
74867 5581477 656000 13835192 06-APR 15:10:53 1 0

The flag 1 means that the table has been truncated since the latest stats gathering.
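For completeness, the supported way to look at the same counters after the flush above is the DBA_TAB_MODIFICATIONS view, which also exposes the truncate information as a TRUNCATED column. A simple sketch, filtering on the demo table name:

select table_owner, table_name, inserts, updates, deletes, truncated, timestamp
from   dba_tab_modifications
where  table_name = 'DEMO';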

This is what we already know from previous release. Nothing to do with the trace we see every hour in MMON slave.

sys.optstat_snapshot$

What happens every hour is that a snapshot of sys.mon_mods_all$ is stored in sys.optstat_snapshot$:

SQL> select * from sys.optstat_snapshot$ where obj#=&object_id order by timestamp;
old 1: select * from sys.optstat_snapshot$ where obj#=&object_id order by timestamp
new 1: select * from sys.optstat_snapshot$ where obj#= 74867 order by timestamp
 
OBJ# INSERTS UPDATES DELETES FLAGS TIMESTAMP
---------- ---------- ---------- ---------- ---------- ------------------
74867 999 0 0 32 05-APR-17 17:27:01
74867 1997 0 0 32 05-APR-17 17:33:25
74867 1997 0 0 32 05-APR-17 17:33:31
74867 1997 0 0 32 05-APR-17 17:33:32
74867 80878 0 160 0 05-APR-17 18:59:37
74867 90863 0 210 0 05-APR-17 20:53:07
74867 10597135 0 410 0 05-APR-17 21:53:13
74867 10598134 0 410 32 05-APR-17 22:02:38
74867 38861 0 10603745 1 06-APR-17 08:17:58
74867 38861 0 10603745 1 06-APR-17 09:18:04
74867 581477 124000 11175192 1 06-APR-17 10:11:27
74867 1321477 230000 11705192 1 06-APR-17 11:09:50
74867 2481477 346000 12285192 1 06-APR-17 12:09:56
74867 3681477 466000 12885192 1 06-APR-17 01:10:04
74867 4861477 584000 13475192 1 06-APR-17 02:10:11
74867 5561477 654000 13825192 1 06-APR-17 03:10:19

You see snapshots every hour, the latest being 03:10, 02:10, 01:10, 12:09, 11:09, …
You see additional snapshots at each statistics gathering. I ran dbms_stats.gather_table_stats several times at 17:27 and 17:33 the day before. Those snapshots are flagged 32.
The statistics were gathered again at 22:02 (the auto job) and I truncated the table after that, which is why the flag is 1.
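Based on what is described above (32 for snapshots taken by a statistics gathering, 1 when the table was truncated since the last gathering), a small query can decode those flags. This is only a sketch built on the values observed here:

select obj#, timestamp, inserts, updates, deletes,
       decode(bitand(flags,32),32,'GATHER','DML') origin,
       decode(bitand(flags,1),1,'YES','NO') truncated
from   sys.optstat_snapshot$
where  obj# = &object_id
order  by timestamp;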

dbms_stats_advisor.compute_volatile_flag

My guess is that there should be a flag for volatile tables here, because I’ve seen a trace for compute_volatile_flag in MMON trace, so I’ve enabled sql_trace for the MMON slave, and here is the query which takes the snapshot:

insert /* KSXM:TAKE_SNPSHOT */ into sys.optstat_snapshot$ (obj#, inserts, updates, deletes, timestamp, flags) (select m.obj#, m.inserts, m.updates, m.deletes, systimestamp, dbms_stats_advisor.compute_volatile_flag( m.obj#, m.flags, :flags, m.inserts, m.updates, m.deletes, s.inserts, s.updates, s.deletes, null, nvl(to_number(p.valchar), :global_stale_pcnt), s.gather) flags from sys.mon_mods_all$ m, (select si.obj#, max(si.inserts) inserts, max(si.updates) updates, max(si.deletes) deletes, decode(bitand(max(si.flags), :gather_flag), 0, 'NO_GATHER', 'GATHER') gather, max(si.timestamp) timestamp from sys.optstat_snapshot$ si, (select obj#, max(timestamp) ts from sys.optstat_snapshot$ group by obj#) sm where si.obj# = sm.obj# and si.timestamp = sm.ts group by si.obj#) s, sys.optstat_user_prefs$ p where m.obj# = s.obj#(+) and m.obj# = p.obj#(+) and pname(+) = 'STALE_PERCENT' and dbms_stats_advisor.check_mmon_policy_violation(rownum, 6, 2) = 0)

It reads the current values (from sys.mon_mods_all$) and the last values (from sys.optstat_snapshot$), reads the stale percentage parameter, and calls the dbms_stats_advisor.compute_volatile_flag function that updates the flag with the one passed as :flags, probably adding the value 64 (see below) when the table is volatile (probably when the sum of DML is over the row count plus the stale percentage). The function is probably different when the snapshot comes from statistics gathering ('GATHER') or from DML monitoring ('NO_GATHER'), because the number of rows is absolute or relative to the previous one.

From the trace of bind variables, or simply from the dbms_stats trace, I can see all values:
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=5701477, updates_new=668000, deletes_new=13895192, inserts_old=5701477, updates_old=668000, deletes_old=13895192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=4861477, updates_new=584000, deletes_new=13475192, inserts_old=3681477, updates_old=466000, deletes_old=12885192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=5561477, updates_new=654000, deletes_new=13825192, inserts_old=4861477, updates_old=584000, deletes_old=13475192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1

The input flag is 1 and the output flag is 1. And I think that, whatever the number of DML we have, this is because new_flag=0.

This explains why I was not able to get snapshots flagged as volatile, even when changing a lot of rows. Then how can the Statistics Advisor detect my volatile table?

Statistics Advisor

I’ve traced the statistics advisor

set long 100000 longc 10000
variable t varchar2(30)
variable e varchar2(30)
variable r clob
exec :t:= DBMS_STATS.CREATE_ADVISOR_TASK('my_task');
exec :e:= DBMS_STATS.EXECUTE_ADVISOR_TASK('my_task');
exec :r:= DBMS_STATS.REPORT_ADVISOR_TASK('my_task');
print r

The ‘LockVolatileTable’ rule has not raised a recommendation, but I’ve seen a call to the DBMS_STATS.CHECK_VOLATILE function with an object_id as parameter.

dbms_stats_internal.check_volatile

In order to understand the criteria, I ran the function (with sql_trace) on my table:

SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
------------------------------------------
F

I suppose ‘F’ is false, which explains why my table was not considered as volatile.

Here is the trace with binds:

PARSING IN CURSOR #140478915921360 len=191 dep=1 uid=0 oct=3 lid=0 tim=99947151021 hv=976524548 ad='739cb468' sqlid='1r3ujfwx39584'
SELECT SUM(CASE WHEN ISVOLATILE > 0 THEN 1 ELSE 0 END) FROM (SELECT OBJ#, BITAND(FLAGS, :B2 ) ISVOLATILE FROM OPTSTAT_SNAPSHOT$ WHERE OBJ# = :B1 ORDER BY TIMESTAMP DESC) O WHERE ROWNUM < :B3
END OF STMT
...
BINDS #140478915921360:
 
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=72 off=0
kxsbbbfp=7fc3cbe1c158 bln=22 avl=02 flg=05
value=64
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=7fc3cbe1c170 bln=22 avl=04 flg=01
value=74867
Bind#2
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=7fc3cbe1c188 bln=22 avl=02 flg=01
value=24

So, here is what the algorithm looks like:

  1. sys.optstat_snapshot$ is read for the latest 24 snapshots (remember that we have snapshots every hour plus one at each statistics gathering).
  2. ‘ISVOLATILE’ is 1 when the snapshot's flags include the value 64. This is how I guessed that snapshots should be flagged with 64 by compute_volatile_flag.
  3. And finally, the number of ‘ISVOLATILE’ ones is summed.

So, it seems that the Statistics Advisor will raise a recommendation when the table has been flagged as volatile multiple times over the last 24 hours. How many? Let's guess:

SQL> insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 12');
old 1: insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 12')
new 1: insert into sys.optstat_snapshot$ select 74867,0,0,0,64,sysdate from xmltable('1 to 12')
 
12 rows created.
 
SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
-----------------------------------------
F
 
SQL> rollback;
 
Rollback complete.

I called the function after inserting various numbers of rows with flag=64 into sys.optstat_snapshot$, and with up to 12 snapshots the table is still not considered volatile.
Please remember that this is a lab, we are not expected to update the internal dictionary tables ourselves.

Now inserting one more:

SQL> insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 13');
old 1: insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 13')
new 1: insert into sys.optstat_snapshot$ select 74867,0,0,0,64,sysdate from xmltable('1 to 13')
 
13 rows created.
 
SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
-----------------------------------------
T
 
SQL> rollback;
 
Rollback complete.

Good, I have a ‘T’ here for ‘true’. I conclude that the Statistics Advisor recommends locking the stats on tables when more than half of the last 24 hourly snapshots have encountered more than STALE_PERCENT modifications.
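Putting the traced query and the guessed threshold together, the check can be reproduced like this. Again only a sketch, with the traced bind values substituted (64 for the volatile flag, 24 for the row limit) and 13 being the first count that returned ‘T’ in the test above:

select case when sum(case when bitand(flags,64) > 0 then 1 else 0 end) >= 13
            then 'T' else 'F' end volatile
from  (select flags from sys.optstat_snapshot$ where obj# = &object_id order by timestamp desc)
where rownum < 24;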

So what?

My table was not considered volatile. None of the snapshots had been flagged as volatile. I'm quite sure that the amount of DML is sufficient, so I suppose that this is disabled by default and I don't know how to enable it. What I want to see is compute_volatile_flag called with new_flag=64, so that snapshots are flagged when a large percentage of rows has been modified, and so that enough snapshots are flagged to be considered by the check_volatile function.
Even if it is enabled, I think that there are more cases where tables should have statistics locked. Even if a table is empty for 5 minutes per day, we must be sure that the statistics are not gathered at that time. And looking at the Statistics Advisor thresholds, this case is far from being detected.
Final thought here: do you realize that you buy expensive software to detect the changes happening on your tables, guess how the tables are updated, and recommend (and even implement) a general best practice? Does it mean that, today, we put applications in production with no idea of what they do? Aren't we supposed to design the application, document which tables are volatile and when they are loaded in bulk, and decide when to gather stats and when to lock them?

 

Cet article 12cR2 DML monitoring and Statistics Advisor est apparu en premier sur Blog dbi services.

Performance issue - Cardinality Feedbacks

Tom Kyte - Thu, 2017-04-06 14:46
Hi, DB version - 11.2.0.4 optimizer compatible - 11.2.0.4 -Query incorporates much subqueries and joins,in some tables data changes daily and in some monthly but stats are recent. 10-11 tables are incorporated where only one table contains aro...
Categories: DBA Blogs

pls-00222: no function with 'to_date' exists in this scope

Tom Kyte - Thu, 2017-04-06 14:46
I have written below query as embedded SQL in Cobol. the procobol compiler giver error - pls-00222: no function with 'to_date' exists in this scope But the query works fine in oracle SQL developer Note : to_date is a column of PAY_SUM table. I...
Categories: DBA Blogs

Consistent Gets

Tom Kyte - Thu, 2017-04-06 14:46
Hi TOM, SQL>create table t_20_rows as select object_name, object_id from all_objects where rownum < 21; SQL> EXEC SHOW_SPACE('T_20_ROWS'); Free Blocks.............................0 Total Blocks............................8 ...
Categories: DBA Blogs

latch: cache buffers chains

Tom Kyte - Thu, 2017-04-06 14:46
Dear Experts, I work in a telecom domain, We have AIA and SOA application which we are tunning for increasing the tps. We have observed below even when same session are hitting the database. I would request you to please provide me suggestion or s...
Categories: DBA Blogs

import of individual sub-partition.

Tom Kyte - Thu, 2017-04-06 14:46
I am trying to import individual sub-partition to existing Partitioned table have same structure as source table. But giving me this error. $impdp livedb/livedb@***** dumpfile=branchoct15_%U.dmp directory=IMP15 tables=APP_BRANCH remap_...
Categories: DBA Blogs

Transaction Rollback logging during recovery after a Physical Standby failover

Tom Kyte - Thu, 2017-04-06 14:46
I am wondering about the behavior and if any details are appended to the redo/archived logs for transactions which were not committed at the time of a DB Failover, and specifically on a Failover to a Data Guard Physical Standby when Maximum Perfor...
Categories: DBA Blogs

Altering sequences in Oracle

Tom Kyte - Thu, 2017-04-06 14:46
We recently ran into an issue where altering a sequence resulted in an invalid number, a negative number. Working with Oracle Support, we came across this, Bug 5881343: ALTERING "INCREMENT BY" VALUE FOR A SEQUENCE CORRUPTS LAST_NUMBER This bug wa...
Categories: DBA Blogs

How to return CLOB datatype as output Parameter in Oracle stored procedure?

Tom Kyte - Thu, 2017-04-06 14:46
I want to return an output param of CLOB data type(XML data) from an oracle stored procedure. My procedure looks like this create or replace PROCEDURE myProcedure ( myParam1 IN NUMBER ,myParam2 IN NUMBER ,myParam3 OUT CLOB ) AS xml...
Categories: DBA Blogs

DBMS_CRYPTO how to obtain a key from a OS file

Tom Kyte - Thu, 2017-04-06 14:46
Tom, Looking to use DBMS_CRYPTO to secure some sensitive data in the database. I'd like to hold my key in an OS file so looking at using UTL_FILE to extract the key. The concern I have with this as a concept is that each time I want to decrypt...
Categories: DBA Blogs

Partner Webcast – Oracle MySQL cloud service : An Enterprise-grade Cloud Platform for your MySQL customers

Many customers are implementing a dual database strategy, running some of their applications on the Oracle Database, and using Oracle MySQL, the leading open source database for others. If some of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Changes Are Coming to the Usable Apps Blog

Usable Apps - Thu, 2017-04-06 08:41

The Usable Apps blog will be transitioning to a new Oracle blogging platform and URL soon. This location will remain. 

Stay tuned.

Ultan (@ultan)

OUD – Oracle Unified Directory 11.1.2.3, Oracle generates more and more LDAP lookups with every release

Yann Neuhaus - Thu, 2017-04-06 04:50

After installing OUD some time ago, I was doing some tests to see how it performs, and as long as I do LDAP searches on the command line it looks very good. I am running Unified Directory 11.1.2.3.170117 (latest PSU), just for the record, and I use the OUD only for TNS resolving and nothing else. However, Oracle clients are not connecting with “ldapsearch”; they are using “sqlplus”, and the TNS name is resolved automatically in the background.

I have the following ldap.ora and sqlnet.ora. Very simple and nothing special.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] cat ldap.ora
DIRECTORY_SERVERS= (dbidg01:1389)
DEFAULT_ADMIN_CONTEXT = "dc=dbi,dc=com"
DIRECTORY_SERVER_TYPE = OID

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] cat sqlnet.ora
NAMES.DIRECTORY_PATH = (TNSNAMES,LDAP,EZCONNECT)

Here is a little quiz: how many LDAP search requests do you expect when you connect to a 12.2 database with the following command?

sqlplus system/manager@dbit122_ldap

Only one, right? Oracle looks up the TNS name dbit122_ldap in the OUD and retrieves the connect string. As soon as Oracle has the connect details, OUD does not play any role anymore. In case you do an ldapsearch from the 12.2 Oracle Home, this is exactly what happens.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] which ldapsearch
/u01/app/oracle/product/12.2.0/dbhome_1/bin/ldapsearch
oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] ldapsearch -v -h dbidg01 -p 1389 -b "cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" -s base "(objectclass=*)" "objectclass,orclNetDescString,orclNetDescName,orclVersion"
ldap_open( dbidg01, 1389 )
filter pattern: (objectclass=*)
returning: objectclass,orclNetDescString,orclNetDescName,orclVersion
filter is: ((objectclass=*))
cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com
1 matches

In the OUD access log, you can see it clearly. One connect, the bind, the search request and finally the disconnect. Exactly how it should be, and the etime is 1 millisecond. That’s the elapsed time to deliver the search request which is very fast.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:10:46:49 +0200] CONNECT conn=877 from=192.168.56.203:21971 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:10:46:49 +0200] BIND REQ conn=877 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:10:46:49 +0200] BIND RES conn=877 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:10:46:49 +0200] SEARCH REQ conn=877 op=1 msgID=2 base="cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:10:46:49 +0200] SEARCH RES conn=877 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:10:46:49 +0200] UNBIND REQ conn=877 op=2 msgID=3
[06/Apr/2017:10:46:49 +0200] DISCONNECT conn=877 reason="Client Disconnect"

Ok. Let’s do the first test with Oracle 10.2.0.5. I know, it is not supported, however, regarding LDAP searches it is a version  where everything is ok. My test is very simple, just a sqlplus connection and then an exit. Nothing else.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT102] sqlplus -V

SQL*Plus: Release 10.2.0.5.0 - Production

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT102] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 10.2.0.5.0 - Production on Thu Apr 6 11:00:02 2017

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

In the OUD access log I see, as expected, only one search request.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:01:18 +0200] CONNECT conn=879 from=192.168.56.203:21974 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:01:18 +0200] BIND REQ conn=879 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:01:18 +0200] BIND RES conn=879 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:01:18 +0200] SEARCH REQ conn=879 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:01:18 +0200] SEARCH RES conn=879 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:01:18 +0200] UNBIND REQ conn=879 op=2 msgID=3
[06/Apr/2017:11:01:18 +0200] DISCONNECT conn=879 reason="Client Disconnect"

Let's do the same now with 11.2.0.4. This time with a fully supported version. Yes. It still is. :-)

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT112] sqlplus -V

SQL*Plus: Release 11.2.0.4.0 Production

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT112] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 11.2.0.4.0 Production on Thu Apr 6 11:03:17 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Wowwwww … now I already see two search requests on the OUD. To be honest, I hadn't expected that. One should be sufficient from my point of view.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:03:43 +0200] CONNECT conn=882 from=192.168.56.203:21979 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:03:43 +0200] BIND REQ conn=882 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:03:43 +0200] BIND RES conn=882 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:03:43 +0200] SEARCH REQ conn=882 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:03:43 +0200] SEARCH RES conn=882 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:03:43 +0200] UNBIND REQ conn=882 op=2 msgID=3
[06/Apr/2017:11:03:43 +0200] DISCONNECT conn=882 reason="Client Disconnect"
[06/Apr/2017:11:03:43 +0200] CONNECT conn=883 from=192.168.56.203:21980 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:03:43 +0200] BIND REQ conn=883 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:03:43 +0200] BIND RES conn=883 op=0 msgID=1 result=0 authDN="" etime=1
[06/Apr/2017:11:03:43 +0200] SEARCH REQ conn=883 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:03:43 +0200] SEARCH RES conn=883 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:03:43 +0200] UNBIND REQ conn=883 op=2 msgID=3
[06/Apr/2017:11:03:43 +0200] DISCONNECT conn=883 reason="Client Disconnect"

But just when you think it can't get worse, do the same simple test with a 12.1.0.2 Oracle client.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT121] sqlplus -V

SQL*Plus: Release 12.1.0.2.0 Production

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT121] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 12.1.0.2.0 Production on Thu Apr 6 11:06:18 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Apr 06 2017 11:03:43 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Incredible, it is issuing three ldap search requests against the OUD for a simple sqlplus connection.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:06:41 +0200] CONNECT conn=887 from=192.168.56.203:21986 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:06:41 +0200] BIND REQ conn=887 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:06:41 +0200] BIND RES conn=887 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:06:41 +0200] SEARCH REQ conn=887 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:06:41 +0200] SEARCH RES conn=887 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:06:41 +0200] UNBIND REQ conn=887 op=2 msgID=3
[06/Apr/2017:11:06:41 +0200] DISCONNECT conn=887 reason="Client Disconnect"
[06/Apr/2017:11:06:41 +0200] CONNECT conn=888 from=192.168.56.203:21987 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:06:41 +0200] BIND REQ conn=888 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:06:41 +0200] BIND RES conn=888 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:06:41 +0200] SEARCH REQ conn=888 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:06:41 +0200] SEARCH RES conn=888 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:06:41 +0200] UNBIND REQ conn=888 op=2 msgID=3
[06/Apr/2017:11:06:41 +0200] DISCONNECT conn=888 reason="Client Disconnect"
[06/Apr/2017:11:06:41 +0200] CONNECT conn=889 from=192.168.56.203:21988 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:06:41 +0200] BIND REQ conn=889 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:06:41 +0200] BIND RES conn=889 op=0 msgID=1 result=0 authDN="" etime=1
[06/Apr/2017:11:06:41 +0200] SEARCH REQ conn=889 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:06:41 +0200] SEARCH RES conn=889 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:06:41 +0200] UNBIND REQ conn=889 op=2 msgID=3
[06/Apr/2017:11:06:41 +0200] DISCONNECT conn=889 reason="Client Disconnect"

The last test is now with a 12cR2 client. Will it increase now to 4?

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] sqlplus system/manager@dbit122_ldap

SQL*Plus: Release 12.2.0.1.0 Production on Thu Apr 6 11:09:08 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Thu Apr 06 2017 11:06:41 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

No, it did not increase to 4. But with 12cR2 you will see, like with 12cR1, 3 search requests against the OUD.

[dbafmw@dbidg01 logs]$ tail -40f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
[06/Apr/2017:11:09:07 +0200] CONNECT conn=890 from=192.168.56.203:21990 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:09:07 +0200] BIND REQ conn=890 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:09:07 +0200] BIND RES conn=890 op=0 msgID=1 result=0 authDN="" etime=1
[06/Apr/2017:11:09:07 +0200] SEARCH REQ conn=890 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:09:07 +0200] SEARCH RES conn=890 op=1 msgID=2 result=0 nentries=1 etime=2
[06/Apr/2017:11:09:07 +0200] UNBIND REQ conn=890 op=2 msgID=3
[06/Apr/2017:11:09:07 +0200] DISCONNECT conn=890 reason="Client Disconnect"
[06/Apr/2017:11:09:07 +0200] CONNECT conn=891 from=192.168.56.203:21991 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:09:07 +0200] BIND REQ conn=891 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:09:07 +0200] BIND RES conn=891 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:09:07 +0200] SEARCH REQ conn=891 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:09:07 +0200] SEARCH RES conn=891 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:09:07 +0200] UNBIND REQ conn=891 op=2 msgID=3
[06/Apr/2017:11:09:07 +0200] DISCONNECT conn=891 reason="Client Disconnect"
[06/Apr/2017:11:09:07 +0200] CONNECT conn=892 from=192.168.56.203:21992 to=192.168.56.201:1389 protocol=LDAP
[06/Apr/2017:11:09:07 +0200] BIND REQ conn=892 op=0 msgID=1 type=SIMPLE dn="" version=3
[06/Apr/2017:11:09:07 +0200] BIND RES conn=892 op=0 msgID=1 result=0 authDN="" etime=0
[06/Apr/2017:11:09:07 +0200] SEARCH REQ conn=892 op=1 msgID=2 base="cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[06/Apr/2017:11:09:07 +0200] SEARCH RES conn=892 op=1 msgID=2 result=0 nentries=1 etime=1
[06/Apr/2017:11:09:07 +0200] UNBIND REQ conn=892 op=2 msgID=3
[06/Apr/2017:11:09:07 +0200] DISCONNECT conn=892 reason="Client Disconnect"

So what is the reason for this high increase in LDAP searches? Instead of 1, it is doing 3 with 12cR1 and 12cR2, and 2 with 11gR2; with a 12c client, two out of the three lookups (66%) are unnecessary compared to an Oracle 10gR2 client. That's enormous from my point of view. Quite a huge extra load on your OUD server when you upgrade your Oracle clients.

To make it short, I have no answer. It might be related to the old Oracle Names code, which still seems to be there. I have found errors in the client trace file regarding an A.SMD query. The A.SMD call comes from the old Oracle Names server, where you could have done things like “NAMESCTL> QUERY DB920.oracle.com A.SMD”. But that is really a long time ago. The last Oracle Names server I saw was in 2002.

oracle@dbidg02:/u01/app/oracle/network/trc/ [DBIT122] cat 12.2_client.trc | grep A.SMD
(4144394624) [04-APR-2017 14:38:18:633] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408
(4144394624) [04-APR-2017 14:38:18:642] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408
(4144394624) [04-APR-2017 14:38:18:646] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408

If I take a look at my 12cR2 adapters I have no Oracle Names compiled in. I don’t know if this is possible at all, with 12c.

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] adapters | egrep -A 5 "Installed Oracle Net naming methods"
Installed Oracle Net naming methods are:

    Local Naming (tnsnames.ora)
    Oracle Directory Naming
    Oracle Host Naming
Conclusion

Ok. What should I say … take care if you upgrade your clients to more recent versions in case you use OUD to resolve your names. It might generate some extra load on your OUD servers, more and more with every release since 10gR2. By the way … I have opened an SR at Oracle, because this looks like a bug to me. I was very surprised that I was the first one facing this issue. I will keep you posted as soon as I have results. ;-)

 

Cet article OUD – Oracle Unified Directory 11.1.2.3, Oracle generates more and more LDAP lookups with every release est apparu en premier sur Blog dbi services.
