Feed aggregator

How to identify null columns vs null rows in left join

Tom Kyte - Fri, 2017-08-04 11:26
Hi Chris/Connor, I have the below query output: select tab1.usr, tab2.salary from tab1 LEFT JOIN tab2 ON (tab1.usr = tab2.usr and tab1.name = tab2.name); USR salary ----- ------- 1111 5001 2222 NULL 3333 NULL Since usr: 22...
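A common technique for telling the two cases apart – shown here only as a sketch of the usual approach, not the AskTOM answer – is to test a join column of the outer-joined table, which is NULL only when no row matched:

select tab1.usr,
       tab2.salary,
       case
          when tab2.usr is null    then 'no matching row in tab2'
          when tab2.salary is null then 'matched row, salary is NULL'
          else                          'matched row'
       end as null_reason
from tab1
left join tab2
  on tab1.usr = tab2.usr
 and tab1.name = tab2.name;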
Categories: DBA Blogs

Exadata Capacity on Demand and Elastic Rack

Yann Neuhaus - Fri, 2017-08-04 11:18

Since the X4, we can use Capacity on Demand on Exadata: disable some CPU cores to lower the cost of Oracle Database licenses. The minimums depend on the model and the configuration, so here is a recap table of them.

Here is the summary of the Capacity on Demand minimum, maximum and increment. Those numbers come from the configuration file of OEDA, the Oracle Exadata Deployment Assistant (es.properties), and you can see that it already has an option for Exadata X7-2.

Exadata model | sockets | cores per socket | cores per server | threads per core | CoD minimum        | CoD maximum | CoD increment
X2-2          | 2       | 6                | 12               | 2                | -                  | -           | -
X3-2          | 2       | 8                | 16               | 2                | -                  | -           | -
X4-2          | 2       | 12               | 24               | 2                | 12 (not for 1/8th) | 24          | 2
X5-2          | 2       | 18               | 36               | 2                | 14                 | 36          | 2
X6-2          | 2       | 22               | 44               | 2                | 14 (8 for 1/8th)   | 44          | 2
X7-2          | 2       | 24               | 48               | 2                | 14 (8 for 1/8th)   | 48          | 2
X2-8          | 8       | 8                | 12               | 2                | -                  | -           | -
X3-8          | 8       | 10               | 12               | 2                | -                  | -           | -
X4-8          | 8       | 15               | 32               | 2                | 48                 | 120         | 8
X5-8          | 8       | 18               | 32               | 2                | 56                 | 144         | 8
X6-8          | 8       | 18               | 32               | 2                | 56                 | -           | -
X7-8          | 8       | 24               | 32               | 2                | -                  | -           | -
SL6           | 2       | 32               | 64               | 8                | 14 (8 for 1/8th)   | 64          | 2
T7-2          | 2       | 32               | 62               | 8                | -                  | -           | -

 

Special minimums for 1/8th of Rack

The smallest configuration (1/8th of Rack) is a bit special. First, because it is physically identical to the 1/4th one, with just some processors and disks disabled. But also, for this entry-level configuration, the required minimum is lower in X6 – 8 cores per node.

Here is the Oracle Exadata Deployment Assistant for X6-2 1/8th of Rack:

[Screenshot: CaptureOEDAx68002]

Having selected 1/8th of Rack, we are allowed to enable a minimum of 8 cores per node, as mentioned in the table above:

[Screenshot: CaptureOEDAx68006]

Elastic Rack

The Elastic Rack configuration allows you to configure any combination of database nodes and storage cells:

[Screenshot: CaptureOEDAx68004]

With an Elastic Rack configuration, the next screen not only displays the configuration but also lets you customize it.
Here I define the same configuration as a 1/8th Rack:

[Screenshot: CaptureOEDAx68005]

However, because it is not a 1/8th Rack configuration, the minimum is 14 cores per node, not 8:

[Screenshot: CaptureOEDAx68001]

So be careful: the Elastic configuration gives more flexibility, but the CoD minimums are different from those of the equivalent standard configuration.

/opt/oracle.SupportTools/resourcecontrol

As I’m talking about elastic configuration, here is how the cores are enabled. The configuration assistant calls /opt/oracle.SupportTools/resourcecontrol, which displays or updates the BIOS configuration. You may wonder why you can do that here and not on your own servers? Because here Oracle can trace what happened. You will find the log in /var/log/oracleexa/systemconfig.log, and here is an example where the Elastic Rack has been deployed with 16 cores per database node as Capacity on Demand:

Fri Aug 04 16:12:18 CEST 2017
Executing command: /opt/oracle.SupportTools/resourcecontrol -show
[INFO] Validated hardware and OS. Proceed.
[SHOW] Number of physical cores active per socket: 22
[SHOW] Total number of cores active: 44
 
Mon Aug 07 11:24:31 CEST 2017
Executing command: /opt/oracle.SupportTools/resourcecontrol -core 16 -force
[INFO] Validated hardware and OS. Proceed.
[INFO] Enabling 8 cores on each socket.
[INFO] Import all bios settings
[INFO] All bios settings have been imported with success
[ACTION] Reboot server for settings to take effect
[SHOW] Number of physical cores active per socket: 8
[SHOW] Total number of cores active: 16
 
Mon Aug 07 11:31:24 CEST 2017
Executing command: /opt/oracle.SupportTools/resourcecontrol -show
[INFO] Validated hardware and OS. Proceed.
[SHOW] Number of physical cores active per socket: 8
[SHOW] Total number of cores active: 16

This trace does not stay only on your server. There is a rule that you can use Capacity on Demand only if you have configured Platinum support, use Oracle Configuration Manager, or use Enterprise Manager. All of those may store a history of the CPU count, which means that it is auditable.

 

The post Exadata Capacity on Demand and Elastic Rack appeared first on Blog dbi services.

Can Nologging Be Enabled for EBS 12.2?

Steven Chan - Fri, 2017-08-04 09:59

The nologging Oracle database feature is used to enhance performance in certain areas of Oracle E-Business Suite. For example, it may be used during patch installation, and when building summary data for Business Intelligence.

What are the tradeoffs of using nologging?

Use of nologging in an operation means that the database redo logs will contain incomplete information about the changes made, with any data blocks that have been updated during the nologging operation being marked as invalid. As a result, a database restoration to a point in time (whether from a hot backup or a cold backup) may require additional steps in order to bring the affected data blocks up to date and make the restored database usable. These additional steps may involve taking new backups of the associated datafiles, or dropping and rebuilding the affected objects. The same applies to activation of a standby database.
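As an illustration of why this matters for recovery, unrecoverable (nologging) changes are tracked per datafile, so you can see which files would need a fresh backup. The following is a minimal sketch (the table names are hypothetical, not from an EBS schema):

alter table summary_data nologging;
insert /*+ append */ into summary_data select * from staging_data;  -- direct-path load, not fully logged
commit;

select file#, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_time is not null;  -- datafiles that now need a new backup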

Can nologging be enabled for EBS 12.2?

Yes.  See the following documentation for additional considerations and operational implications of using nologging:


Categories: APPS Blogs

Oracle FY18 Partners Immersion Training

This Oracle fiscal year, the FY18 Partners Immersion Training for our OPN partners promises to be a modern learning engagement designed to provide the building blocks to: Articulate the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

A wonderful PostgreSQL feature: default privileges

Yann Neuhaus - Fri, 2017-08-04 02:52

Imagine this scenario (which is not so uncommon): You have a lot of objects in a user schema and you want to grant another user access to those tables. You can easily do this by granting select on the tables to the user, and you’re fine. Really? Maybe for now, but what will happen when the user who owns the objects creates new objects? Then you will need to grant those to the second user as well. In PostgreSQL there is an easier solution. Let’s go …

Again we start by creating two users each with its own schema:

postgres=# create user a with login password 'a';
CREATE ROLE
postgres=# create schema a authorization a;
CREATE SCHEMA
postgres=# alter user a set search_path=a;
ALTER ROLE
postgres=# create user b with login password 'b';
CREATE ROLE
postgres=# create schema b authorization b;
CREATE SCHEMA
postgres=# alter user b set search_path=b;
ALTER ROLE
postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 a         |                                                            | {}
 b         |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# \dn
  List of schemas
  Name  |  Owner   
--------+----------
 a      | a
 b      | b
 public | postgres
(3 rows)

User “a” shall be the one owning the objects:

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> create table t1 ( a int );
CREATE TABLE
postgres=> create table t2 ( a int );
CREATE TABLE
postgres=> insert into t1 (a) values (1);
INSERT 0 1
postgres=> insert into t2 (a) values (2);
INSERT 0 1
postgres=> \d
       List of relations
 Schema | Name | Type  | Owner 
--------+------+-------+-------
 a      | t1   | table | a
 a      | t2   | table | a
(2 rows)

When you want to give user “b” access to these tables you could do:

postgres=> grant select on table t1 to b;
GRANT
postgres=> grant select on table t2 to b;
GRANT

From now on user “b” should be able to select from the two tables owned by user “a”, right?:

postgres=> \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> select count(*) from a.t1;
ERROR:  permission denied for schema a
LINE 1: select count(*) from a.t1;

This is not how it works in PostgreSQL. What you need to do is this:

postgres=> \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> grant usage on schema a to b;
GRANT

This allows user “b” access to the schema “a” (remember that a user and a schema are different things in PostgreSQL):

postgres=> \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> select count(*) from a.t1;
 count 
-------
     1
(1 row)

postgres=> select count(*) from a.t2;
 count 
-------
     1
(1 row)

What happens now when user “a” creates another object:

postgres=> \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> create table t3 as select * from t1;
SELECT 1
postgres=> \d
       List of relations
 Schema | Name | Type  | Owner 
--------+------+-------+-------
 a      | t1   | table | a
 a      | t2   | table | a
 a      | t3   | table | a
(3 rows)

Will user “b” be able to select data from it?

postgres=> \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> select count(*) from a.t3;
ERROR:  permission denied for relation t3

Of course not. “usage” on a schema grants only access to the schema itself, not to the objects in the schema. When we want user “b” to be able to select from all tables in schema “a”, even when user “a” creates new objects, then we can modify the default privileges:

postgres=# \c postgres postgres
You are now connected to database "postgres" as user "postgres".
postgres=# alter default privileges in schema a grant select on tables to b;
ALTER DEFAULT PRIVILEGES

Should user “b” now be able to select from the “t3” table in schema “a”?

postgres=> select current_user;
 current_user 
--------------
 b
(1 row)

postgres=> select count(*) from a.t3;
ERROR:  permission denied for relation t3
postgres=> 

No. When you modify the default privileges, this will affect only objects created after your modification.
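A side note before continuing: default privileges do nothing for the objects that already exist, but PostgreSQL has a one-time shortcut to catch those up:

grant select on all tables in schema a to b;

Now let’s create a new table with user “a” in schema “a”: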

postgres=> \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> create table t4 as select * from t1;
SELECT 1

As this table was created after the modification to the default privileges, user “b” is allowed to select from it automatically:

postgres=> \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> select count(*) from a.t4;
 count 
-------
     1
(1 row)

When you check the PostgreSQL documentation for ALTER DEFAULT PRIVILEGES you’ll notice that you can grant not only select on tables but much more. Hope this helps …
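For example, staying with the same two roles, you could also pre-grant DML on future tables, or privileges on future sequences and functions (a sketch of a few of the possibilities, not an exhaustive list):

alter default privileges in schema a grant insert, update, delete on tables to b;
alter default privileges in schema a grant usage, select on sequences to b;
alter default privileges in schema a grant execute on functions to b;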

 

The post A wonderful PostgreSQL feature: default privileges appeared first on Blog dbi services.

What Employers Want : Communication Skills

Tim Hall - Fri, 2017-08-04 01:28

It’s important you can present yourself in a confident and professional manner when it comes to interviews, but this also carries over into a work environment.

Once you get a job you need to be able to communicate effectively with your colleagues and with your customers/users. I know you think your silent genius act makes you look special, but it doesn’t. The initial interaction between humans involves building rapport. It’s kind-of difficult to build rapport with someone who refuses to talk to you and can’t look you in the eye. You need to get your communication skills sorted before you try to enter the job market.

If you are working in IT your written skills will be really important. You will need to communicate with colleagues and customers/users in a concise, but accurate way. People won’t read waffle (TL;DR), but they will demand enough detail to make sense of what you are saying. A quick read through your typical IT forum will make you realise that most people have terrible written skills and are incapable of stringing together a logical argument. You don’t have to be a prize winning novelist, but you need to be able to make yourself understood.

Remember in a previous post I wrote about enthusiasm. It doesn’t matter how enthusiastic you are if you can’t express it!

I’ve written some posts on writing tips and public speaking tips. I think this is important for you as someone who is trying to enter the job market, or someone who is trying to move up the ladder. Don’t lose out on an opportunity because you’ve neglected your soft skills!

Check out the rest of this series here.

Cheers

Tim…


Event Mapping: Fluid Landing Page "Dot" Buttons

Jim Marion - Thu, 2017-08-03 23:12

The bottom of a Fluid landing page contains dots that identify how many landing pages the user can view as well as the position of the current landing page within the user's landing page collection. This piece of information is quite useful on a swipe-enabled mobile device, but somewhat meaningless on my mac. A few months ago I was helping a PeopleSoft customer with their Fluid implementation. This rather brilliant customer made those dots clickable to navigate between landing pages. To implement this behavior, the customer modified delivered PeopleTools JavaScript and CSS definitions, which, unfortunately, presents a bit of an upgrade problem. In a recent webinar I mentioned that one of the things that excites me about Fluid is that everything is a component. What that means is we can use Event Mapping anywhere. Landing Pages are no exception. With that in mind, I wondered what this solution would look like with Event Mapping.

Those little dots at the bottom of the page are created through JavaScript. To enhance them, therefore, we will need to write some JavaScript. Because the original implementation of this behavior modified delivered PeopleTools JavaScript, the author was able to leverage existing JavaScript variables and functions. To avoid customizing PeopleTools, we will need to extract the relevant JavaScript and rewrite it in a standalone manner, meaning no dependencies on PeopleTools JavaScript. We'll start with a CSS selector to identify dots: .lpTabIndicators .dot. We will iterate over that dot collection, adding a click event handler to each dot. That click event handler needs to know the target tab on click so we need another CSS selector to identify tabs: .lpTabId span.ps_box-value. For compatibility reasons, I'm not going to use any libraries (like jQuery), just raw JavaScript:

(function () {
    "use strict";
    // include protection just in case PS inserts this JS twice
    if (!window.jm_config_dots) {
        window.jm_config_dots = true;
        var originalSetTabIndicators = window.setTabIndicators;

        // MonkeyPatch setTabIndicators because we want to run some code each
        // time PeopleSoft invokes setTabIndicators
        window.setTabIndicators = function () {

            // JavaScript magic to invoke original setTabIndicators w/o knowing
            // anything about the call signature
            var returnVal = originalSetTabIndicators.apply(this, arguments);

            // The important stuff that adds a click handler to the dots. This
            // is the code we want to run each time PS invokes setTabIndicators
            var dots = document.querySelectorAll('.lpTabIndicators .dot'),
                tabs = document.querySelectorAll('.lpTabId span.ps_box-value'),
                titles = document.querySelectorAll('.lpnameedit input');

            [].forEach.call(dots, function (d, idx) {
                d.setAttribute('title', titles[idx].value);
                d.addEventListener("click", function () {
                    lpSwipeToTabFromDD(tabs[idx].innerHTML);
                });
            });

            return returnVal;
        };
    }
}());

Funny... didn't I say we would start with a CSS selector? Looking at that code listing, that CSS selector and the corresponding iterator don't appear until nearly halfway through the listing. When testing and mocking up a solution, the objects you want to manipulate are a good place to start. We then surround that starting point with other code to polish the solution.

This code is short, but has some interesting twists, so let me break it down:

  • First, I wrapped the entire JavaScript routine in a self-executing anonymous JavaScript function. To make this functionality work, I have to maintain a variable, but don't want to pollute the global namespace. The self-executing anonymous function makes for a nice closure to keep everything secret.
  • Next, I have C-style include protection to ensure this file is only processed once. I don't know if this is necessary. I suspect not, because we are going to add this file using an event that only triggers once per component load, but it doesn't hurt.
  • The very next line is why I have a closure: originalSetTabIndicators. Our code needs to execute every time PeopleSoft invokes the delivered JavaScript setTabIndicators function. What better way to find out when that happens than to MonkeyPatch setTabIndicators? setTabIndicators creates those little dots. If our code runs before those dots are created, then we won't have anything to enhance. Likewise, when those dots change, we want our code to be rerun.
  • And, finally, we redefine setTabIndicators. Our version is heavily dependent on the original, so the first thing we want to do is invoke the original. As I stated, the original creates those dots, so it is important that we let it run first. Then we can go about enhancing those dots, such as adding a title and a click handler.

In Application Designer (or online in Branding Objects), create a new HTML (JavaScript) definition with the above JavaScript. I named mine JM_CLICK_DOTS_JS. You are free to use whatever name you prefer. I point it out because later we will reference this name from an Application Class, and knowing the name will be quite useful.

If we stop our UX improvements here, the dots will become clickable buttons, but users will have no visual indicator. It would be nice if the mouse cursor became a pointer (hand) to identify the dot as a clickable region. We will use CSS for this. Create a new Free formed stylesheet for the following CSS. I named mine JM_CLICK_DOTS.

.lpTabIndicators .dot:hover,
.lpTabIndicators .dot.on:hover {
    cursor: pointer;
}

With our supporting definitions created, we can move on to Event Mapping. I detailed all of the steps for Event Mapping in my post Event Mapping: Extending "Personal Details" in HCM. According to that blog post, our first step is to create an App Class as an event handler. With that in mind, I added the following PeopleCode to a class named ClickDotsConfig in an Application Package appropriately named JM_CLICK_DOTS (noticing a pattern?). Basically, the code just inserts our new definitions into the page assembly process so the browser can find and interpret them.

import PT_RCF:ServiceInterface;

class ClickDotsConfig implements PT_RCF:ServiceInterface
   method execute();
end-class;

method execute
   /+ Extends/implements PT_RCF:ServiceInterface.execute +/
   AddStyleSheet(StyleSheet.JM_CLICK_DOTS);
   AddJavaScript(HTML.JM_CLICK_DOTS_JS);
end-method;

After creating a related content service definition for our Application Class, the only trick is choosing the Landing Page component content reference, which is a hidden content reference located at Fluid Structure Content > Fluid PeopleTools > Fluid PeopleTools Framework > Fluid Homepage.

Probably the trickiest part of this whole process was identifying which event to use. For me the choice was between PageActivate and PostBuild. The most reliable way I know to identify the appropriate event is to investigate some of the delivered component PeopleCode. What I found was that the same code that creates the dots, PTNUI_MENU_JS, is added in PostBuild. Considering that our code MonkeyPatches that code, it is important that our code run after that code exists, which means I chose the PostBuild event with Post processing.

Using Event Mapping we have effectively transformed a customization into a configuration. The point of reducing customizations is to simplify lifecycle management. We now have two fewer items on our compare reports. If Oracle changes PTNUI_MENU_JS and PTNUI_LANDING_CSS, the originally modified definitions, these items will no longer show on a compare report. But!, and this is significant: you may have noticed my JavaScript is invoking and manipulating delivered PeopleSoft JavaScript functions: setTabIndicators and lpSwipeToTabFromDD. If Oracle changes those functions, then this code may break. A great example of Oracle changing JavaScript functions was their move from net.ContentLoader to net2.ContentLoader. I absolutely LOVE Event Mapping, but we can't ignore Lifecycle Management. When implementing a change like this, be sure to fully document Oracle changes that may break your code. This solution is heavily dependent on Oracle's code but no Lifecycle Management tool will identify this dependency.

Question: Jim, why did you prefix setTabIndicators with window as in window.setTabIndicators? Answer: I am a big fan of JSLint/JSHint and want clean reports. It is easier to tell a code quality tool I'm running in a browser than to define all of the globals I expect from PeopleSoft. All global functions and variables are really just properties of the Window object. The window variable is a browser-defined global that a good code quality tester will recognize. The window prefix isn't required, but makes for a nice code quality report.

Windows Datapump Export

Michael Dinh - Thu, 2017-08-03 22:45

The purpose of the script is to perform a full database export, keeping 3 export copies.
If the export is successful, then fullexp*.dmp will be renamed with a _1.dmp suffix added to the filename.
If the export is unsuccessful, then the code will exit, skipping the rename operations.

Note: there should never be a .dmp file without a _# suffix unless an export was unsuccessful.

In hindsight, directoryName should be built from the ORACLE_SID variable (e.g. SET directoryName=D:\%ORACLE_SID%\export) rather than hardcoded.

SET ORACLE_SID=DB01
SET directoryName=D:\DB01\export

expdp '/ as sysdba' full=y directory=DATA_PUMP_DIR dumpfile=fullexp_%ORACLE_SID%_%COMPUTERNAME%.dmp logfile=fullexp_%ORACLE_SID%_%COMPUTERNAME%.log flashback_time=SYSTIMESTAMP REUSE_DUMPFILES=YES
IF %ERRORLEVEL% NEQ 0 GOTO ERROR

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.dmp" (DEL "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.*")

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.dmp" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.dmp" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.dmp")
IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.log" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.log" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.log")

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.dmp" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.dmp" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.dmp")
IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.log" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.log" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.log")

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.dmp" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.dmp" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.dmp")
IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.log" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.log" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.log")

EXIT 0

:ERROR
EXIT 1

Results after 4 runs.

08/03/2017  07:53 PM     2,680,008,704 fullexp_DB01_CMWPHV1_1.dmp
08/03/2017  07:53 PM           161,707 fullexp_DB01_CMWPHV1_1.log
08/03/2017  07:46 PM     2,680,008,704 fullexp_DB01_CMWPHV1_2.dmp
08/03/2017  07:46 PM           161,707 fullexp_DB01_CMWPHV1_2.log
08/03/2017  07:37 PM     2,680,008,704 fullexp_DB01_CMWPHV1_3.dmp
08/03/2017  07:37 PM           161,707 fullexp_DB01_CMWPHV1_3.log

Duplicate records in MLOG

Tom Kyte - Thu, 2017-08-03 17:06
Is it possible to restrict creation of duplicate records in an MLOG? In our scenario, when the same record is updated twice it creates two records in the MLOG, and the fast refresh is taking longer. Is it possible if it create only one record in MLOG n...
Categories: DBA Blogs

Materialized view

Tom Kyte - Thu, 2017-08-03 17:06
Hi Tom, Just wanted to know whether a materialized view will be available for querying during a refresh, if the refresh is performed using atomic_refresh=>false..?
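For reference, that parameter belongs to dbms_mview.refresh; a sketch of the call in question (MY_MV is a hypothetical name):

begin
   dbms_mview.refresh(list => 'MY_MV', method => 'C', atomic_refresh => false);
end;
/

With atomic_refresh => false, a complete refresh truncates the materialized view before reloading it, rather than performing a delete plus insert in a single transaction.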
Categories: DBA Blogs

SQL_Target,tablespace Question

Tom Kyte - Thu, 2017-08-03 17:06
1) What happens when SGA_MAX_SIZE & SGA_TARGET are not defined in the init parameter file? 2) When extending a datafile with the autoextend on command without specifying a size, what will happen?
Categories: DBA Blogs

ENABLE PARALLEL DML Vs FORCE PARALLEL DML

Tom Kyte - Thu, 2017-08-03 17:06
Hi Chris/Connor, I came across the below two pieces of code where the only difference is "ALTER SESSION FORCE PARALLEL DML PARALLEL 16". Can you please help me understand: if we need to perform a DELETE using parallelism, do i need to write both ALTER SESSI...
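For context, these are the two session-level settings the question contrasts (a sketch; big_table is a hypothetical name):

alter session enable parallel dml;
delete /*+ parallel(t 16) */ from big_table t;  -- parallel DML allowed, degree comes from the hint

alter session force parallel dml parallel 16;
delete from big_table;                          -- eligible DML runs at degree 16 without any hint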
Categories: DBA Blogs

update bulk row based on case statement

Tom Kyte - Thu, 2017-08-03 17:06
Hello Experts, We have a requirement as per below example, For Example, Need to create a procedure which fulfill below condition, create or replace procedure as <declaration part> Begin select emp_id,emp_name,mgr_id, (case when dep...
Categories: DBA Blogs

Multiple inserts in a PL/SQL block

Tom Kyte - Thu, 2017-08-03 17:06
Hi Gurus, I have a piece of code similar to the one below: I have a table in which I have to make insert as given below : The select fetches multiple rows and my manager doesn't want me to use cursor or bulk collect to make the code usable. ...
Categories: DBA Blogs

Response obtained from SoapUI but not through PLSQL using UTL_HTTP

Tom Kyte - Thu, 2017-08-03 17:06
Hi, Need your advice and help. <b>Problem Description</b> We have a requirement of invoking a web-service for uploading data in tax authority website. We have been provided a WSDL url which contains the data about the web-service we need...
Categories: DBA Blogs

Securing Your JNDI Resources for Other Groups

Anthony Shorten - Thu, 2017-08-03 16:25

As with other applications, the Oracle Utilities Application Framework respects the settings within the Oracle WebLogic domain, including any default settings. One of the default settings for the domain is access to the JNDI resources within the domain. By default, Oracle WebLogic grants access to Everyone that is defined in the security realm definition of the domain. Whilst this is generally acceptable in the vast majority of domains that are set up (remember, you tend to set up a lot of non-production copies in any implementation of the products), it may not be appropriate for production domains. There is a simple setup to correct that.

  • Create a group to designate the specific users outside the application users you want to give access to the JNDI resources. Allocate the user identities to that group in your security repository. If you use the internal LDAP of Oracle WebLogic then you can add them using the console. If you want to designate different groups of people, create different groups.
    • Remember you have groups already for other users, Administrators and the product group. For this documentation we will use the Administrators and cisusers groups. You can vary the values according to your site setup. These will be reused for the setup.
  • Create a Global Role which refers to the above group. If you created multiple then specify each group in the role.
  • On the product server(s) or cluster, select the View JNDI Tree option on the Configuration --> General tab. For example:

View JNDI Tree

  • On the root node of the server definition in the tree, remove Everyone from the node using the Remove button. Administrators should be the only group that has access at the root level. Do NOT remove Administrators, as this will corrupt your access to the domain. The following is an example of the recommended settings:

Root Node Access

  • All child nodes in the JNDI tree inherit the root node setup. Now, for the product to work, you need to add cisusers to the following JNDI objects:
    • The servicebean must be accessible for cisusers. This will be under the context value set for your domain.
    • The Data Sources (OUAF_DS in my example) must be accessible to cisusers.
    • The JMX nodes should be accessible to cisusers if you are using JMX monitoring (directly or via OEM).
    • If using the internal JMS processing, whether that is the JMS Senders or MDBs, then you must allow cisusers access to the JMS resources in the domain.
  • Add your custom group to the relevant JNDI objects they need to have access to.
  • Set the Enable Remote JDBC Connection Property to false. This can be done using the JAVA_OPTIONS setting in the setDomainEnv[.sh] script shipped with Oracle WebLogic in the bin directory of your domain home (Add -Dweblogic.jdbc.remoteEnabled=false to JAVA_OPTIONS). Check that the variable WLS_JDBC_REMOTE_ENABLED is not set incorrectly.
  • If you are using SSL, you need to set the RMI JDBC Security to Secure to ensure Administrators use SSL as well for connections. For example:

RMI JDBC Security

The domain is now more secure.

 

 

Postgres vs. Oracle access paths III – Partial Index

Yann Neuhaus - Thu, 2017-08-03 10:58

In the previous post I said that an Index Only Access needs to find all rows in the index. Here is a case where, with similar data, Postgres can find all rows but Oracle needs additional considerations.

In the previous post I’ve executed:
select sum(n) from demo1
The execution plan was:

Aggregate (cost=295.29..295.30 rows=1 width=8) (actual time=2.192..2.193 rows=1 loops=1)
Output: sum(n)
Buffers: shared hit=30
-> Index Only Scan using demo1_n on public.demo1 (cost=0.29..270.29 rows=10000 width=4) (actual time=0.150..1.277 rows=10000 loops=1)
Output: n
Heap Fetches: 0
Buffers: shared hit=30

Basically, this reads all values of the column N and then aggregates them to the sum.
If I remove the SUM() I have only the part that reads all values from N:

explain (analyze,verbose,costs,buffers) select n from demo1 ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..270.29 rows=10000 width=4) (actual time=0.150..1.284 rows=10000 loops=1)
Output: n
Heap Fetches: 0
Buffers: shared hit=30
Planning time: 0.440 ms
Execution time: 1.972 ms

Oracle

This sounds logical. Now let’s run the same query, a simple ‘select n from demo1’, in Oracle:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID ad4z7tpt0dkta, child number 0
-------------------------------------
select /*+ */ n from demo1
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 397 (100)| 10000 |00:00:00.01 | 1451 |
| 1 | TABLE ACCESS FULL| DEMO1 | 1 | 10000 | 397 (0)| 10000 |00:00:00.01 | 1451 |
--------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Here the access path is different: a full table scan instead of an index only access (Index Fast Full Scan). It is not a cost decision: if we try to force an index access, with INDEX_FFS() or INDEX() hints, the query will still do a Full Table Scan. The reason is that an index only access is possible only if all columns and all rows are present in the index. But Oracle does not always index all rows: an Oracle index has no entry for the rows where all the indexed columns are null.
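You can verify this with a quick standalone sketch (a hypothetical table, separate from the demo1 of this series):

create table t ( x number );
create index t_x on t(x);
insert into t values (1);
insert into t values (null);
commit;
select /*+ index_ffs(t t_x) */ count(*) from t;  -- still a full table scan: the NULL row is absent from t_x
select count(x) from t;                          -- the index is usable: COUNT(x) ignores NULLs anyway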

Where n is not null

If I run the same query with the purpose of showing only non-null values, with a ‘where n is not null’ predicate, then an index only access is possible:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 2gbjpw5u0v9cw, child number 0
-------------------------------------
select /*+ */ n from demo1 where n is not null
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 7 (100)| 10000 |00:00:00.01 | 28 |
| 1 | INDEX FAST FULL SCAN| DEMO1_N | 1 | 10000 | 7 (0)| 10000 |00:00:00.01 | 28 |
-------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("N" IS NOT NULL)

Constraints

An alternative, if we know that we will never have null values here, is to give the information to the optimizer that there are no null values in the column N:
In Oracle:
alter table demo1 modify n not null;
This is the equivalent of the PostgreSQL
alter table demo1 alter column n set not null;
Then, in addition to ensuring the verification of the constraint, the constraint informs the optimizer that there are no null values and that all rows can be found in the index:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID ad4z7tpt0dkta, child number 0
-------------------------------------
select /*+ */ n from demo1
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 7 (100)| 10000 |00:00:00.01 | 28 |
| 1 | INDEX FAST FULL SCAN| DEMO1_N | 1 | 10000 | 7 (0)| 10000 |00:00:00.01 | 28 |
-------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Additional columns

Even if the column can have some null values, it is easy to get the null values indexed in Oracle, just by adding a non-null column or expression. And if you don’t need an additional column, you can even add a constant, such as in the following index definition:

create unique index demo1_n on demo1(n,0);

This works because all index entries have at least one non-null value. But looking at the buffers, you can see that this additional byte (0 is stored in 1 byte) has a little overhead (31 blocks read here instead of 28):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID ad4z7tpt0dkta, child number 0
-------------------------------------
select /*+ */ n from demo1
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 8 (100)| 10000 |00:00:00.01 | 31 |
| 1 | INDEX FAST FULL SCAN| DEMO1_N | 1 | 10000 | 8 (0)| 10000 |00:00:00.01 | 31 |
-------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Oracle Partial Indexes

In Oracle, indexes on nullable columns are partial indexes: the rows where all the indexed columns are null are not indexed, and an index access is possible only if the WHERE clause, or a constraint, guarantees that we don’t need the non-indexed rows. Combined with expressions, this can be a way to implement partial indexes, when the expression returns null for a specific condition. Oracle even provides computed columns (aka virtual columns) so that the expression does not have to be coded in the WHERE clause of the query.
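As a sketch of that virtual-column variant (the column and index names here are hypothetical):

alter table demo1 add ( n_top10 as (case when n<=10 then n end) );
create index demo1_top10_vc on demo1(n_top10);
select n_top10 from demo1 where n_top10 <= 5;  -- no need to repeat the expression in the query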

As an example with expressions, the following index has entries only for the values lower than 10:
create index demo1_n_top10 on demo1(case when n<=10 then n end)

However, to use it, we must mention the expression explicitly:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 863drbjwayrt7, child number 0
-------------------------------------
select /*+ */ (case when n<=10 then n end) from demo1 where (case when
n<=10 then n end)<=5
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 4 |00:00:00.01 | 2 |
|* 1 | INDEX RANGE SCAN| DEMO1_N_TOP10 | 1 | 5 | 1 (0)| 4 |00:00:00.01 | 2 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("DEMO1"."SYS_NC00004$"<=5)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "DEMO1"."SYS_NC00004$"[NUMBER,22]

We can see that internally a virtual column (“SYS_NC00004$”) has been created for the indexed expression, and it is used for the predicate and for the projection, which use the same expression. There is another possibility with the ‘partial index’ feature introduced in 12c, but it does not have the flexibility of a predicate: it is based on partitioning, where only some partitions can be indexed.

Postgres Partial Indexes

Postgres does not need those workarounds. An index indexes all rows, including null entries, and partial indexes can be defined with a where clause:
create index demo1_n_top10 on demo1(n) where n<=10

No need to change the query. As long as the result can come from the partial index, we can use the column without an expression on it:

explain (analyze,verbose,costs,buffers) select n from demo1 where n<=5 ;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n_top10 on public.demo1 (cost=0.14..4.21 rows=4 width=4) (actual time=0.114..0.114 rows=5 loops=1)
Output: n
Index Cond: (demo1.n <= 5)
Heap Fetches: 0
Buffers: shared hit=2
Planning time: 0.557 ms
Execution time: 0.129 ms

Here the smaller partial index (demo1_n_top10) has been chosen by the query planner.

As you see I’ve not used exactly the same condition. The query planner understood that n<=5 (in the WHERE clause) is a subset of n<=10 (in the index definition). However, if the predicate is too different, it cannot use the index:

fpa=# explain (analyze,verbose,costs,buffers) select n from demo1 where 2*n<=10;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..320.29 rows=3333 width=4) (actual time=0.020..1.086 rows=5 loops=1)
Output: n
Filter: ((2 * demo1.n) <= 10)
Rows Removed by Filter: 9995
Heap Fetches: 0
Buffers: shared hit=30

Here, instead of “Index Cond” we have a simple “Filter”. The Index Only Scan has read all the rows, and they were filtered afterward (“Rows Removed by Filter”).

Index condition

With the VERBOSE option of EXPLAIN we see the condition used by the index access:
Index Cond: (demo1.n <= 5)
‘Index Cond’ is not a simple filter removing rows after an operation; it is the condition used for fast access to the index entries in the sorted index structure. We have the equivalent in Oracle with the ‘+predicate’ format of dbms_xplan:

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("N"<=5)

Before going further on index access for WHERE clause predicates, the next post will show the major characteristic of indexes (besides the fact that they store a redundant subset of columns and rows): they are maintained sorted and may return the resulting rows in order.

 

The post Postgres vs. Oracle access paths III – Partial Index appeared first on Blog dbi services.

Rebuilding Indexes

Jonathan Lewis - Thu, 2017-08-03 07:00

One of the special events that can make it necessary to rebuild an index is the case of the “massive DML”, typically a bulk delete that purges old data from a table. You may even find cases where it’s a good idea to mark a couple of your indexes as unusable before doing a massive delete and then rebuild them after the delete.

Despite the fact that a massive delete is an obvious special case it’s still not necessary in many cases to worry about a rebuild afterwards because the space made free by the delete will be smoothly reused over time with very little variation in performance. There is, however, one particular feature that increases the probability of a rebuild becoming necessary – global (or globally partitioned) indexes on partitioned tables. The problem (and the absence of problem in non-partitioned tables) is in the nature of the rowid.

For non-partitioned tables, and partitioned tables with local indexes, the rowid stored in an index is (assuming we’re thinking only of heap tables) stored as a sequence of 6 bytes consisting, in order, of: (tablespace relative file number, block number within file, row number within block). If the table is non-partitioned, or if this is an index segment from a locally partitioned index, all the index entries will be pointing to the same table segment and Oracle knows which segment that is from the data dictionary information – so Oracle can derive the data_object_id of the table segment and convert the tablespace relative file number into the absolute file number to navigate to the right row in the table.

When the index is global or globally partitioned any index entry may point to any of the table’s segments, so the rowid that is stored in the index is expanded by a further 4 bytes to hold the data_object_id of the table segment it is pointing to – and the data_object_id is the leading component: (data_object_id, tablespace relative file number, block number within file, row number within block). Think about what this means when you start to drop “old” partitions and add new partitions. Compare this with what happens when you simply delete a large volume of old data from a table and starting inserting new data. There’s an important difference to the way in which indexes will evolve.
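To make those rowid components concrete, the documented dbms_rowid package can decompose an extended rowid (a quick sketch against a hypothetical table t1):

select dbms_rowid.rowid_object(rowid)       as data_object_id,
       dbms_rowid.rowid_relative_fno(rowid) as relative_file#,
       dbms_rowid.rowid_block_number(rowid) as block#,
       dbms_rowid.rowid_row_number(rowid)   as row#
from   t1
where  rownum = 1;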

Purging data

When you delete a large volume of data from a (simple, heap) table you will create a lot of empty space in a lot of existing table blocks. If that delete is supposed to get rid of “old” data (and to keep the description simple we’ll assume it’s the first time you’ve done this) it’s very likely that the delete will result in lots of empty blocks near the start of the table – rows that were inserted at the same time will tend to be physically close to each other in the table. This means that future inserts will soon start to reuse those table blocks. Think about what this means for index entries – especially for non-unique keys.

Assume you have 100 rows with value ‘XXX’ for an indexed column. Breaking the rowid into its component parts the index entries will be (‘XXX’,{file_id, block_id, row_number}).  Now assume you delete the oldest 10 rows then, over time, insert 10 more rows with the same key value. You’ve deleted the 10 index entries with the lowest values for (file_id, block_id) but the space that’s become available in the table will be in and around exactly that range of blocks – so the new index entries will probably end up looking very similar to the deleted index entries and inserted in and around the existing index entries for value ‘XXX’, so over time the index is unlikely to allocate much new space.

Now think about what happens when your table is partitioned but the index is global; your index entries are (‘XXX’,{data_object_id, file_id, block_id, row_number}). When you drop the oldest partition you will probably[1] delete all the index entries with the lowest data_object_id. When you start inserting new rows for ‘XXX’ the new table partition will have a data_object_id that is going to be higher than any previous data_object_id – which means you’re going to be inserting rows into the right-hand (high-value) edge of this section of the index. In some cases – typically those where you have a couple of leaf blocks per key value – the index may end up growing significantly because the insertion point for rows in the new partition isn’t in the leaf block with the available space, and it won’t be until you’ve done a few more bulk deletes and the leaf blocks relating to the oldest table partitions become completely empty that the space can be reused.

An example of this type of behaviour probably appeared on the OTN database forum quite recently.  Of course, there are various reasons why indexes can become inefficient, and the degree of inefficiency may only become noticeable over a fairly long period of time; moreover there are various reasons why global indexes are a little problematic, and various reasons why a bulk delete (which is what executing “alter table drop partition” does to a global index) has unpleasant side effects dependent somewhat on the number (and size) of the partitions and on how many you try to drop in one go.

There’s not a lot you can do about this quirk of global indexes, but it’s always worth taking extra care with partitioned tables and focusing even more carefully on a strategic review of indexes:

  • Does this index really need to exist at all
  • Could this index be replaced by a selective function-based index
  • Does this index really need to be global / globally partitioned
  • How big is this index compared to the size it ought to be (see the sketch after this list)
  • Should this index be (basic) compressed
  • Is this index likely to be disrupted by a historic purge – or is there another reason for its undesirable behaviour
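For the size question above, one rough approach is to validate the index structure and compare the space used with the space the B-tree occupies – a sketch (note that validate structure locks the table while it runs, and the index name is hypothetical):

analyze index my_global_index validate structure;

select lf_rows, del_lf_rows, used_space, btree_space
from   index_stats;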

 

[1] “probably delete entries with the lowest data_object_id” – I have to say this because if you’ve executed a “move partition” at any time a new data_object_id will have been generated for the partition, so the oldest partition could, in principle, have the highest data_object_id. The issue of changing data_object_ids brings a whole new level of complexity to global indexes – but only in a few special cases, fortunately.

 

 


Oracle, SafeLogic and OpenSSL Partner on Next Generation FIPS Module

Oracle Press Releases - Thu, 2017-08-03 07:00
Press Release
Oracle, SafeLogic and OpenSSL Partner on Next Generation FIPS Module Oracle dedicates seed funding towards developing FIPS module for OpenSSL 1.1 and calls on corporate sponsors in the FOSS ecosystem to join the effort

Redwood Shores, Calif.—Aug 3, 2017

Oracle, OpenSSL and SafeLogic today announced a seed investment in developing the next generation open source OpenSSL 1.1 FIPS 140-2 module, and called for others to join the effort. OpenSSL is the most widely used and respected cryptographic library protecting data transfers across computer networks. 
 
The Federal Information Processing Standard (FIPS) 140-2 is a joint U.S. and Canadian government security standard for testing cryptographic modules, the objective of which is to ensure the use of strong and validated cryptographic protection in U.S. and Canadian government systems. However, it’s also widely respected and informally accepted by other countries and non-government industries as a strong and trustworthy standard for cryptographic modules used within commercial products. The current FIPS module for OpenSSL has not had a significant upgrade since 2012, during which time encryption standards have significantly evolved. Helping drive the updated OpenSSL FIPS project forward, Oracle has made a $50,000 seed investment to start the project, with another $50,000 to follow based on the progress of the effort.
 
“Ensuring that OpenSSL maintains an up to date FIPS implementation is critical to helping maintain the security posture of sensitive data on government systems and the continuous safety of millions of transactions performed daily. We as a community have a responsibility to maintain the confidence of users in these systems,” said Jim Wright, Chief Architect, Open Source Policy, Strategy, Compliance and Alliances at Oracle. “Given the complexity of the task at hand, we encourage other software vendors to join us in and donate to this project to deliver a free, open-source FIPS module that will benefit everyone.”
 
In addition to working closely with the OpenSSL Foundation’s team, Oracle and SafeLogic have worked closely on both investments in and the project framework of this effort. SafeLogic has been actively working with OpenSSL on this project since July 2016.
 
“This is what we've been waiting for—getting this effort off to a good strong start—and with a few more partners from the community, we'll be on our way toward a complete FIPS 140-2 solution for OpenSSL releases 1.1 and later,” said Steve Marquess, President of OpenSSL Validation Services, Inc. “We're already hard at work on the initial stage of designing a new module to accommodate the many changes in FIPS 140 validations over the past five years, and looking forward to a modernized implementation that can support the community for years to come.” 
 
“Oracle has made a significant pledge, underscoring their crucial role in the future of open source FIPS 140-2 capabilities,” said SafeLogic CEO Ray Potter. “Other sponsors with a vested interest should get in touch with SafeLogic to arrange their own donations, as we are administering contributions to directly fund both the hard and soft costs of the OpenSSL 1.1 FIPS Module project.”
 
For more information about the project, how to contribute or the future roadmap, please contact OpenSSL@SafeLogic.com.
Contact Info
Scott Thornburg
Oracle
+1 (415) 816-8844
scott.thornburg@oracle.com
Kris Reeves
Blanc & Otus
+1 (415) 856-5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

About SafeLogic

SafeLogic provides innovative encryption products for applications in mobile, server, and appliance environments. Our flagship product, CryptoComply™, provides drop-in FIPS 140-2 compliance with a common API across platforms, while our RapidCert process has revolutionized the way that FIPS 140-2 validations are earned. SafeLogic is privately held and is headquartered in Palo Alto, CA. For more information about SafeLogic, please visit www.SafeLogic.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. 

Talk to a Press Contact

Scott Thornburg

  • +1 (415) 816-8844

Kris Reeves

  • +1 (415) 856-5145

What Employers Want : Self-Sufficiency

Tim Hall - Thu, 2017-08-03 01:30

Of the people and companies I spoke to, self-sufficiency was a common thread. They were looking for traits like the following in prospective employees.

  • The ability to learn for themselves. Of course companies will be looking to train you, but they don’t want a wet lettuce that sits waiting for the next handout of knowledge. You have to take some personal responsibility.
  • Willing to make decisions. There are a lot of people who refuse to step up to the plate when someone has got to make decisions. Typically those people will be the first to start moaning when things don’t go as planned. When push comes to shove you have to be willing to get in there and make a decision.
  • Willing to take responsibility for their actions. Sometimes you will make the wrong decision, or make a mistake. It’s important you can recognise your mistake, admit it and move on. Some people will spend incredible amounts of energy trying to hide their mistakes, or prove they were somehow correct. If they just owned up and moved on things would be better for everyone.
  • Willing to ask for help when it is appropriate. As much as you need to be self-sufficient, you need to know when it is right to ask for help. Nobody likes to work with a leech that never takes responsibility for things, but by the same token you don’t want someone to spend weeks figuring out something you could have pointed out to them in a couple of minutes. Finding that balance is important.
  • Never bullshit! I can’t stress enough how important this point is. Saying I don’t know is a perfectly valid response. People can smell bullshit a mile off and it’s going to make you look stupid. Be honest and you won’t have a problem. It takes a surprising amount of confidence to say I don’t know. People will respect you for it! In an interview you can always redirect the question like, “I don’t know, but it sounds similar to X, which I have done…”, so the “I don’t know” becomes an opening into you speaking about something you are good at…

Check out the rest of this series here.

Cheers

Tim…

