
Feed aggregator

12.1.0.2 Released With Cool Indexing Features (Short Memory)

Richard Foote - Fri, 2014-07-25 00:18
Oracle Database 12.1.0.2 has finally been released and it has a number of really exciting goodies from an indexing perspective which include: Database In-Memory Option, which enables specific portions of the database to be in dual format, in both the existing row based format and additionally into an efficient memory only columnar based format. This in […]
Categories: DBA Blogs

How to find out session info about a session that comes from a remote database through a dblink

XTended Oracle SQL - Thu, 2014-07-24 19:28

This is a well-known thing and you can even find it on MOS, but I have a slightly simpler script for it, so I want to show a little example.

First of all, we need to run the script on the local database:

SQL>                                                                                                                                                                   
SQL> @transactions/global.sql
Enter filters(empty for any)...
Sid           :
Globalid mask :
Remote_db mask:

 INST_ID  SID    SERIAL# USERNAME REMOTE_DB REMOTE_DBID TRANS_ID         DIRECTION   GLOBALID                                           EVENT                      
-------- ---- ---------- -------- --------- ----------- ---------------- ----------- -------------------------------------------------- ---------------------------
       1  275       4469 XTENDER  BAIKAL     1742630060 8.20.7119        FROM REMOTE 4241494B414C2E63616336656437362E382E32302E37313139 SQL*Net message from client
                                                                                                                                                                  

Then we need to copy the GLOBALID of the session we are interested in and run the same script on the database shown in the REMOTE_DB column, this time specifying the GLOBALID:

SQL>                                                                                                                                                                                                 
SQL> conn sys/syspass@baikal as sysdba
Connected.

======================================================================
=======  Connected to  SYS@BAIKAL(baikal)(BAIKAL)
=======  SID           203
=======  SERIAL#       38399
=======  SPID          6536
=======  DB_VERSION    11.2.0.4.0
======================================================================

SQL> @transactions/global.sql
Enter filters(empty for any)...
Sid           :
Globalid mask : 4241494B414C2E63616336656437362E382E32302E37313139
Remote_db mask:

INST_ID   SID    SERIAL# USERNAME  REMOTE_DB  REMOTE_DBID TRANS_ID   DIRECTION   GLOBALID                                            STATE                     
------- ----- ---------- --------- ---------- ----------- ---------- ----------- --------------------------------------------------  --------------------------
      1     9      39637 XTENDER   BAIKAL      1742630060 8.20.7119  TO REMOTE   4241494B414C2E63616336656437362E382E32302E37313139  [ORACLE COORDINATED]ACTIVE

It’s quite simple and fast.
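
As a side note, the GLOBALID is just the hex-encoded combination of the originating database name and the transaction id, so it can be decoded on the spot. A quick sketch, using the value from the example above (this is not part of the script itself):

SQL> select utl_raw.cast_to_varchar2('4241494B414C2E63616336656437362E382E32302E37313139') from dual;

This should return BAIKAL.cac6ed76.8.20.7119, i.e. the remote database name followed by the transaction id.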

Categories: Development

Standalone sqlplus script for comparing plans

XTended Oracle SQL - Thu, 2014-07-24 18:00

I have a couple of scripts for comparing plans:

1. https://github.com/xtender/xt_scripts/blob/master/diff_plans.sql
2. http://github.com/xtender/xt_scripts/blob/master/plans/diff_plans_active.sql

But they depend on other scripts, so I decided to create a standalone script that is more convenient to use, with no need to download other scripts or set up the SQL*Plus environment.
I’ve already tested it with Firefox, so you can try it now: http://github.com/xtender/xt_scripts/blob/master/plans/diff_plans_active_standalone.sql

Some screenshots:
diff_plans.sql: (screenshot)

plans_active.sql: (screenshot)

Usage:
1. plans_active:

SQL> @plans_active 0ws7ahf1d78qa 

2. diff_plans:

SQL> @diff_plans 0ws7ahf1d78qa 
 *** Diff plans by sql_id. Version with package XT_PLANS. 
Usage: @plans/diff_plans2 sqlid [+awr] [-v$sql] 

P_AWR           P_VSQL 
--------------- --------------- 
false           true 

Strictly speaking, we can sometimes do it even more simply: it’s quite easy to compare plans without the first column, “ID”, so we can just compare the output of “select .. from v$sql_plan/v$sql_plan_statistics_all/v$sql_plan_monitor” with any comparison tool.
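
For example, a minimal sketch of such a query (using the sql_id from the usage example above; adjust the columns to taste), whose per-child output can be saved to files and diffed with any external tool:

select child_number,
       lpad(' ', depth) || operation || ' ' || options as plan_line,
       object_name
  from v$sql_plan
 where sql_id = '0ws7ahf1d78qa'
 order by child_number, id;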

Categories: Development

Exploring Options of Using RMAN Configure to Simplify Backup

Pythian Group - Thu, 2014-07-24 14:06

I am a simple person who likes simple things, especially RMAN backup implementation.

I have yet to understand why RMAN backup implementations do not use the configure command; if you have a good explanation, please share.

Examples of the RMAN configure command

configure device type disk parallelism 2 backup type to compressed backupset;
configure channel device type disk format '/oradata/backup/%d_%I_%T_%U' maxopenfiles 1;
configure channel 1 device type disk format '/oradata/backup1/%d_%I_%T_%U' maxopenfiles 1;
configure archivelog deletion policy to backed up 2 times to disk;
configure backup optimization on;

Do you know whether the backup uses parallelism?
Where does the backup go?
Does the backup go to tape?

RMAN> show all;

RMAN configuration parameters for database with db_unique_name SAN are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oradata/backup/%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/oradata/backup/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT   '/oradata/backup1/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_san.f'; # default

RMAN>

Simple RMAN script.

set echo on;
connect target;
show all;
backup incremental level 0 check logical database filesperset 1 tag "fulldb"
plus archivelog filesperset 8 tag "archivelog";

Simple RMAN run.

$ rman @simple.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jul 24 11:12:19 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> set echo on;
2> connect target;
3> show all;
4> backup incremental level 0 check logical database filesperset 1 tag "fulldb"
5> plus archivelog filesperset 8 tag "archivelog";
6>
echo set on

connected to target database: SAN (DBID=2792912513)

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name SAN are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oradata/backup/%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/oradata/backup/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT   '/oradata/backup1/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_san.f'; # default


Starting backup at 2014-JUL-24 11:12:21
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=108 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=326 RECID=337 STAMP=853758742
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:12:24
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:12:25
piece handle=/oradata/backup1/SAN_2792912513_20140724_8dpe6koo_1_1 tag=ARCHIVELOG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2014-JUL-24 11:12:25

Starting backup at 2014-JUL-24 11:12:25
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/oradata/SAN/datafile/o1_mf_undotbs1_9oqwsjk6_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:12:26
channel ORA_DISK_2: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00008 name=/oradata/SAN/datafile/o1_mf_user_dat_9wvp8s78_.dbf
channel ORA_DISK_2: starting piece 1 at 2014-JUL-24 11:12:26
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:01
piece handle=/oradata/backup1/SAN_2792912513_20140724_8epe6koq_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/oradata/SAN/datafile/o1_mf_system_9oqwr5tm_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:04
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:29
piece handle=/oradata/backup1/SAN_2792912513_20140724_8gpe6kpu_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/oradata/SAN/datafile/o1_mf_sysaux_9oqwrv2b_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:30
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:45
piece handle=/oradata/backup1/SAN_2792912513_20140724_8hpe6kqp_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/oradata/SAN/datafile/o1_mf_ggs_data_9or2h3tw_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:45
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:48
piece handle=/oradata/backup1/SAN_2792912513_20140724_8ipe6kr9_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/oradata/SAN/datafile/o1_mf_testing_9rgp1q31_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:49
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:52
piece handle=/oradata/backup1/SAN_2792912513_20140724_8jpe6krc_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 2014-JUL-24 11:14:44
piece handle=/oradata/backup/SAN_2792912513_20140724_8fpe6koq_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:02:18
Finished backup at 2014-JUL-24 11:14:44

Starting backup at 2014-JUL-24 11:14:44
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=327 RECID=338 STAMP=853758885
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:14:46
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:14:47
piece handle=/oradata/backup1/SAN_2792912513_20140724_8kpe6kt6_1_1 tag=ARCHIVELOG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2014-JUL-24 11:14:47

Starting Control File Autobackup at 2014-JUL-24 11:14:48
piece handle=/oradata/backup/SAN_c-2792912513-20140724-05.ctl comment=NONE
Finished Control File Autobackup at 2014-JUL-24 11:14:55

Recovery Manager complete.

-----

$ ls -l backup*
backup:
total 501172
-rw-r-----. 1 oracle oinstall 505167872 Jul 24 11:14 SAN_2792912513_20140724_8fpe6koq_1_1
-rw-r-----. 1 oracle oinstall   8028160 Jul 24 11:14 SAN_c-2792912513-20140724-05.ctl

backup1:
total 77108
-rw-r-----. 1 oracle oinstall   237056 Jul 24 11:12 SAN_2792912513_20140724_8dpe6koo_1_1
-rw-r-----. 1 oracle oinstall  1236992 Jul 24 11:12 SAN_2792912513_20140724_8epe6koq_1_1
-rw-r-----. 1 oracle oinstall 39452672 Jul 24 11:13 SAN_2792912513_20140724_8gpe6kpu_1_1
-rw-r-----. 1 oracle oinstall 34349056 Jul 24 11:13 SAN_2792912513_20140724_8hpe6kqp_1_1
-rw-r-----. 1 oracle oinstall  2539520 Jul 24 11:13 SAN_2792912513_20140724_8ipe6kr9_1_1
-rw-r-----. 1 oracle oinstall  1073152 Jul 24 11:13 SAN_2792912513_20140724_8jpe6krc_1_1
-rw-r-----. 1 oracle oinstall    67072 Jul 24 11:14 SAN_2792912513_20140724_8kpe6kt6_1_1

If this does not hit the nail on the head, then I don’t know what will.

Imagine someone, maybe me or maybe you, accidentally deleting archived logs.

RMAN> delete noprompt archivelog all;

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=108 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=20 device type=DISK
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_326.dbf thread=1 sequence=326
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_327.dbf thread=1 sequence=327

RMAN>

-----

RMAN> configure archivelog deletion policy to none;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
new RMAN configuration parameters are successfully stored

RMAN> delete noprompt archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=108 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=20 device type=DISK
List of Archived Log Copies for database with db_unique_name SAN
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
337     1    326     A 2014-JUL-24 11:04:17
        Name: /oradata/SAN/archivelog/arc_845895297_1_326.dbf

338     1    327     A 2014-JUL-24 11:12:21
        Name: /oradata/SAN/archivelog/arc_845895297_1_327.dbf

deleted archived log
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_326.dbf RECID=337 STAMP=853758742
deleted archived log
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_327.dbf RECID=338 STAMP=853758885
Deleted 2 objects


RMAN>

Will you be using configure for your next RMAN implementation?
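
One last convenience: the persistent settings live in the target database control file and are also visible from SQL*Plus, so they can be cross-checked outside RMAN. A minimal sketch (V$RMAN_CONFIGURATION lists only the settings that differ from their defaults):

select conf#, name, value
  from v$rman_configuration
 order by conf#;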

Categories: DBA Blogs

Bug with xmltable, xmlnamespaces and xquery_string specified using bind variable

XTended Oracle SQL - Thu, 2014-07-24 12:54

Today I was asked about a strange problem: xmltable does not return data if the XQuery is specified by a bind variable and the XML data uses XML namespaces:

SQL> var x_path varchar2(100);
SQL> var x_xml  varchar2(4000);
SQL> col x format a100;
SQL> begin
  2      :x_path:='/table/tr/td';
  3      :x_xml :=q'[
  4                  <table xmlns="http://www.w3.org/tr/html4/">
  5                    <tr>
  6                      <td>apples</td>
  7                      <td>bananas</td>
  8                    </tr>
  9                  </table>
 10                  ]';
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> select
  2        i, x
  3   from xmltable( xmlnamespaces(default 'http://www.w3.org/tr/html4/'),
  4                  :x_path -- bind variable
  5                  --'/table/tr/td' -- same value as in the variable "X_PATH"
  6                  passing xmltype(:x_xml)
  7                  columns i    for ordinality,
  8                          x    xmltype path '.'
  9                );

no rows selected

But if we comment out the bind variable and uncomment the literal x_query ‘/table/tr/td’, the query returns data:

SQL> select
  2        i, x
  3   from xmltable( xmlnamespaces(default 'http://www.w3.org/tr/html4/'),
  4                  --:x_path -- bind variable
  5                  '/table/tr/td' -- same value as in the variable "X_PATH"
  6                  passing xmltype(:x_xml)
  7                  columns i    for ordinality,
  8                          x    xmltype path '.'
  9                );

         I X
---------- -------------------------------------------------------------------
         1 <td xmlns="http://www.w3.org/tr/html4/">apples</td>
         2 <td xmlns="http://www.w3.org/tr/html4/">bananas</td>

2 rows selected.

The only workaround I found is specifying a wildcard namespace prefix in the x_query: ‘/*:table/*:tr/*:td’

SQL> exec :x_path:='/*:table/*:tr/*:td'

PL/SQL procedure successfully completed.

SQL> select
  2        i, x
  3   from xmltable( xmlnamespaces(default 'http://www.w3.org/tr/html4/'),
  4                  :x_path -- bind variable
  5                  passing xmltype(:x_xml)
  6                  columns i    for ordinality,
  7                          x    xmltype path '.'
  8                );

         I X
---------- -------------------------------------------------------------------
         1 <td xmlns="http://www.w3.org/tr/html4/">apples</td>
         2 <td xmlns="http://www.w3.org/tr/html4/">bananas</td>

2 rows selected.

It’s quite an ugly solution, but I’m not sure whether there is another one…
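
Another possible workaround, shown here only as an untested sketch, is to splice the XQuery into the statement text with native dynamic SQL, so that it reaches the parser as a literal while the application still treats it as a parameter (set serveroutput on to see the rows):

declare
  c   sys_refcursor;
  l_i number;
  l_x xmltype;
begin
  open c for
     'select i, x
        from xmltable( xmlnamespaces(default ''http://www.w3.org/tr/html4/''),
                       ''' || :x_path || '''
                       passing xmltype(:1)
                       columns i for ordinality,
                               x xmltype path ''.'' )'
     using :x_xml;
  loop
    fetch c into l_i, l_x;
    exit when c%notfound;
    dbms_output.put_line(l_i || ': ' || l_x.getstringval());
  end loop;
  close c;
end;
/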

Categories: Development

SYNC 2014 !

Bas Klaassen - Thu, 2014-07-24 12:15
At Proact we are organising the knowledge platform SYNC 2014 on 17 September at the Rotterdam Cruise Terminal. All of today’s IT infrastructure developments in one day: • An interactive programme chaired by Lars Sørensen, known from BNR among others • A keynote by Marco Gianotten of Giarte, the Dutch “Gartner” in the field of Outsourcing/Managed Services • Huisman Equipment on the […]
Categories: APPS Blogs

A Smart Holster for Law Enforcement

Oracle AppsLab - Thu, 2014-07-24 10:21

So, back in January, Noel (@noelportugal) took a team of developers to the AT&T Developer Summit Hackathon in Las Vegas.

Although they didn’t win, they built some very cool stuff, combining Google Glass, Philips Hue, Internet of Things, and possibly a kitchen sink in there somewhere, into what can only be described as a smart holster. You know, for guns.

You read that right. This project was way out of our usual wheelhouse, which is what made it so much fun, or so I’m told.

Friend of the ‘Lab Martin Taylor was kind enough to produce, direct and edit the following video, in which Noel describes and demonstrates the holster’s capabilities.

Did you catch the bit at 3:06? That’s Raymond behind the mask.

Enjoy.

Unlimited Session Timeout

Jim Marion - Thu, 2014-07-24 10:21

There are a lot of security admins out there that are going to hate me for this post. There are a lot of system administrators, developers, and users, however, that will LOVE me for this post. The code I'm about to share with you will keep the logged in PeopleSoft user's session active as long as the user has a browser window open that points to a PeopleSoft instance. Why would you do this? I can think of two reasons:

  • Your users have several PeopleSoft browser windows open. If one of them times out because of inactivity at the browser window level, then it will kill the session for ALL open windows. That just seems wrong.
  • Your users have long running tasks, such as completing performance reviews, that may require more time to complete than is available at a single sitting. For example, imagine you are preparing a performance review and you have to leave for a meeting. You don't have enough information in the transaction to save, but you can't be late for the meeting either. You know if you leave, your session will time out while you are gone and you will lose your work. This also seems wrong.

Before I show you how to keep the logged in user's session active, let's talk about security... Session timeouts exist for two reasons (at least two):

  • Security: no one is home, so lock the door
  • Server side resource cleanup: PeopleSoft components require web server state. Each logged in user session (and browser window) consumes resources on the web server. If the user is dormant for a specific period of time, reclaim those resources by killing the user's session.

We can "lock the door" without timing out the server side session with strong policies on the workstation: password protected screen savers, etc.

So here is how it works. Add the following JavaScript to the end of the HTML definition PT_COMMON (or PT_COPYURL if using an older version of PeopleTools) (or even better, if you are on PeopleTools 8.54+, use component and/or role based branding to activate this script). Next, turn down your web profile's timeout warning and timeout to something like 3 and 5 minutes or 5 and 10 minutes. On the timeout warning interval, the user's browser will place an Ajax request to keep the session active. When the user closes all browser windows, the reset won't happen so the user's server side session state will terminate.

What values should you use for the warning and timeout? As low as possible, but not so low you create too much network chatter. If the browser makes an ajax request on the warning interval and a user has 10 windows open, then that means the user will trigger up to 10 Ajax requests within the warning interval window. Now multiply that by the number of logged in users at any given moment. See how this could add up?
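
To put rough, purely illustrative numbers on it: with a 5 minute warning interval, 500 logged-in users and 10 open windows each, that is up to 500 x 10 = 5,000 keep-alive requests every 5 minutes, or roughly 17 extra requests per second hitting the web server.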

Here is the JavaScript:

(function (root) {
    // xhr adapted from http://toddmotto.com/writing-a-standalone-ajax-xhr-javascript-micro-library/
    var xhr = function (type, url, data) {
        var methods = {
            success: function () {
            },
            error: function () {
            }
        };

        var parse = function (req) {
            var result;
            try {
                result = JSON.parse(req.responseText);
            } catch (e) {
                result = req.responseText;
            }
            return [result, req];
        };

        var XHR = root.XMLHttpRequest || ActiveXObject;
        var request = new XHR('MSXML2.XMLHTTP.3.0');
        request.open(type, url, true);
        request.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
        request.onreadystatechange = function () {
            if (request.readyState === 4) {
                if (request.status === 200) {
                    methods.success.apply(methods, parse(request));
                } else {
                    methods.error.apply(methods, parse(request));
                }
            }
        };

        request.send(data);
        return {
            success: function (callback) {
                methods.success = callback;
                return methods;
            },
            error: function (callback) {
                methods.error = callback;
                return methods;
            }
        };
    }; // END xhr

    var timeoutIntervalId;
    var resetUrl;

    /* replace warning message timeout with Ajax call
     *
     * clear old timeout after 30 seconds
     * macs don't set timeout until 1000 ms
     */
    root.setTimeout(function () {
        /* some pages don't have timeouts defined */
        if (typeof (timeOutURL) !== "undefined") {
            if (timeOutURL.length > 0) {
                resetUrl = timeOutURL.replace(/expire$/, "resettimeout");
                if (totalTimeoutMilliseconds !== null) {
                    root.clearTimeout(timeoutWarningID);
                    root.clearTimeout(timeoutID);

                    timeoutIntervalId =
                        root.setInterval(resetTimeout /* defined below */,
                            root.warningTimeoutMilliseconds);
                }
            }
        }
    }, 30000);

    var resetTimeout = function () {
        xhr("GET", resetUrl)
            .success(function (msg) {
                /* do nothing */
            })
            .error(function (xhr, errMsg, exception) {
                alert("failed to reset timeout");
                /* error; fallback to delivered method */
                (root.setupTimeout || root.setTimeout2)();
            });
    };
}(window));

A special "shout out" to Todd Motto for his Standalone Ajax/XHR JavaScript micro-library which is embedded (albeit modified) in the JavaScript above.

Oracle Priority Service Infogram for 24-JUL-2014

Oracle Infogram - Thu, 2014-07-24 09:51

RDBMS
From Upgrade your Database - NOW!: Remote Cloning of Pluggable Databases in Oracle Database 12.1.0.1

From Oracle DB/EM Support: Oracle Database 12c Release 1 (12.1.0.1) availability and information.
From Oracle-Base: Full Database Caching Mode in Oracle Database 12cR1 (12.1.0.2)
Another great issue of the Log Buffer: Log Buffer #380, A Carnival of the Vanities for DBAs, from Pythian.
SQL Developer
Copy & Paste Imports from Excel to Oracle using SQL Developer, from That Jeff Smith.
Exadata
From Capgemini’s Capping IT Off blog: Using R and Oracle Exadata.
GoldenGate
How To Correlate Oracle Database Transaction with GoldenGate, from Pythian.
From Maximum Availability Architecture: Oracle GoldenGate Active-Active Part 2.
Java
A nice step by step Java Keystore Tutorial from Java Code Geeks.
WebCenter
WebCenter Portal:  Building Your Own. Part 1, from Mythics.
Framework Folder Support for WebCenter Portal? It's Coming!, from Proactive Support – Portals.
Fusion
An introduction to the new User Interface Text feature, from the Fusion Applications Developer Relations blog.
Linux
From Aman Sharma’s blog: Say Hello To Oracle Linux 7.0….
From SAPTECHNO: Note 1871318 - Linux: Disable Transparent HugePages for Oracle Database.
Data Science
From Data Science Central: Data Science Cheat Sheet.
Deep Learning
From O’Reilly’s Radar: How to build and run your first deep learning network
EBS
From the Oracle E-Business Suite Support Blog:
Discrete LCM Transactional Data Diagnostic Analyzer
Attention EBS Payroll Customers - Legislative Patches Released!
CRM Product Family Latest Patches For 12.1.3
Big Changes in Submitting, Monitoring and Voting on Procurement Enhancement Requests
Webcast: Using Commitments, Deposits & Guarantees In Oracle Receivables
Prepayments Headaches Gone!
Get more out of Product Information Management with PIM Training Resources
Discrete LCM Integration Key Setup Analyzer
From Oracle E-Business Suite Technology:
Database 12.1.0.1 Certified with EBS on AIX, Itanium, Windows
Latest Updates to AD and TXK Tools for EBS 12.2
Microsoft Office 2013 Certified with E-Business Suite 12.0.6
Critical Patch Update for July 2014 Now Available
JRE 1.7.0_65 Certified with Oracle E-Business Suite
Java JRE 1.6.0_81 Certified with Oracle E-Business Suite
Creating a Maintenance Strategy for Oracle E-Business Suite
…and Finally
From Diply, 24 Genius Life Hacks Everyone Needs To Know Right Now. I started using the one on avocados yesterday.

From Techcrunch, a new kind of online language test: Duolingo Launches Its Certification Program To Take On TOEFL.

Oracle BPM & Adaptive Case Management

WebCenter Team - Thu, 2014-07-24 07:00
Oracle's Prasen Palvankar speaks on Adaptive Case Management

Oracle BPM Suite offers built-in adaptive case management capabilities to manage unstructured processes and empower knowledge workers to improve customer experience.


Avio Discusses Oracle's Business Driven Process Management

Dan Atwood of Avio discusses how Oracle BPM Suite empowers business users to design and improve processes and achieve higher visibility and efficiency.

Oracle Database 12.1.0.2.0 – Turning on the In-Memory Database option

Marco Gralike - Thu, 2014-07-24 06:26
It is indeed as simple as switching a knob to turn it on. To enable it you will have to set a reasonable amount of...
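
For reference, the knob alluded to is the inmemory_size parameter plus an INMEMORY attribute on the segments to populate. A minimal sketch (the size and table name are illustrative only):

alter system set inmemory_size = 2G scope=spfile;
-- restart the instance, then mark segments for population:
alter table sales inmemory;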

Read More

Oracle Database 12.1.0.2.0 – Native JSON Support (1)

Marco Gralike - Thu, 2014-07-24 06:17
Oracle Database 12.1.0.2 now has native built-in support for handling JSON (JavaScript Object Notation) data. Oracle Database supports JSON natively with relational database features, including...
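
A minimal sketch of what that native support looks like (the table and data are illustrative only):

create table orders ( doc clob check (doc is json) );
insert into orders values ('{"id": 1, "customer": "Acme"}');
select o.doc from orders o where json_exists(o.doc, '$.customer');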

Read More

Recurring Conversations: AWR Intervals (Part 2)

Doug Burns - Thu, 2014-07-24 03:01

(Reminder, just in case we still need it, that the use of features in this post requires a Diagnostics Pack license.)

Damn me for taking so long to write blog posts these days. By the time I get around to them, certain very knowledgeable people have commented on part 1 and given the game away! ;-)

I finished the last part by suggesting that a narrow AWR interval makes less sense in a post-10g Diagnostics Pack landscape than it used to when we used Statspack.

Why do people argue for a Statspack/AWR interval of 15 or 30 minutes on important systems? Because when they encounter a performance problem that is happening right now or didn’t last for very long in the past, they can drill into a more narrow period of time in an attempt to improve the quality of the data available to them and any analysis based on it. (As an aside, I’m sure most of us have generated additional Statspack/AWR snapshots manually to *really* reduce the time scope to what is happening right now on the system, although this is not very smart if you’re using AWR and Adaptive Thresholds!)

However, there are better tools for the job these days.

If I have a user complaining about system performance then I would ideally want to narrow down the scope of the performance metrics to that user’s activity over the period of time they’re experiencing a slow-down. That can be a little difficult on modern systems that use complex connection pools, though. Which session should I trace? How do I capture what has already happened as well as what’s happening right now? Fortunately, if I’ve already paid for Diagnostics Pack then I have *Active Session History* at my disposal, constantly recording snapshots of information for all active sessions. In which case, why not look at

- The session or sessions of interest (which could also be *all* active sessions if I suspect a system-wide issue)
- For the short period of time I’m interested in
- To see what they’re actually doing

Rather than running a system-wide report for a 15 minute interval that aggregates the data I’m interested in with other irrelevant data? (To say nothing of having to wait for the next AWR snapshot or take a manual one and screwing up the regular AWR intervals ...)

When analysing system performance, it’s important to use the most appropriate tool for the job and, in particular, focus your data collection on what is *relevant to the problem under investigation*. The beauty of ASH is that if I’m not sure what *is* relevant yet, I can start with a wide scope of all sessions to help me find the session or sessions of interest and gradually narrow my focus. It has the history that AWR has, but with finer granularity of scope (whether that be sessions, sql statements, modules, actions or one of the many other ASH dimensions). Better still, if the issue turns out to be one long-running SQL statement, then a SQL Monitoring Active Report probably blows all the other tools out of the water!
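
As a concrete illustration (a sketch only, and the Diagnostics Pack licence applies here too): the top wait events for a single suspect session over the last 15 minutes can be pulled straight from ASH, with no snapshot boundaries involved:

select nvl(event, 'ON CPU') as event, count(*) as samples
  from v$active_session_history
 where session_id = :sid
   and sample_time > systimestamp - interval '15' minute
 group by nvl(event, 'ON CPU')
 order by samples desc;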

With all that capability, why are experienced people still so obsessed with the Top 5 Timed Events section of an AWR report as one of their first points of reference? Is it just because they’ve become attached to it over the years of using Statspack? AWR has its uses (see JB’s comments for some thoughts on that and I’ve blogged about it extensively in the past) but analysing specific performance issues on Production databases is not its strength. In fact, if we’re going to use AWR, why not just use ADDM and let software automatically perform the same type of analysis most DBAs would do anyway (and in many cases, not as well!)

Remember, there’s a reason behind these Recurring Conversations posts. If I didn’t keep finding myself debating these issues with experienced Oracle techies, I wouldn’t harbour doubts about what seem to be common approaches. In this case, I still think there are far too many people using AWR where ASH or SQL Monitoring are far more appropriate tools. I also think that if we stick with a one hour interval rather than a 15 minute interval, we can retain four times as much *history* in the same space! When it comes to AWR – give me long retention over a shorter interval every time!

P.S. As well as thanking JB for his usual insightful comments, I also want to thank Martin Paul Nash. When I was giving an AWR/ASH presentation at this spring’s OUGN conference, he noticed the bullet point I had on the slide suggesting that we *shouldn’t* change the AWR interval and asked why. Rather than going into it at the time, I asked him to remind me at the end of the presentation, and then, because I had no time to answer, I promised I’d be blogging about it that weekend. That was almost 4 months ago! Sigh. But at least I got there in the end! ;-)

Oracle Database 12c Release 12.1.0.2 – My First Observations. Licensed Features Usage Concerns – Part I.

Kevin Closson - Thu, 2014-07-24 02:12

My very first words on Oracle Database 12c Release 12.1.0.2 can be summed up in a single quotable quote:

This release is hugely important.

I’ve received a lot of email from folks asking me to comment on the freshly released In-Memory Database Option. These words are so overused. This post, however, is about much more than word games. Please read on…

When querying the dba_feature_usage_statistics view the option is known as “In-Memory Column Store.”  On the other hand, I’ve read a paper on oracle.com that refers to it as the “In-Memory Option” as per this screen shot:

(screenshot: in-memory-paper)

A Little In-Memory Here, A Little In-Memory There

None of us can forget the era when Oracle referred to the flash storage in Exadata as a “Database In-Memory” platform. I wrote about all that in a post you can view here: click this. But I’m not blogging about any of that. Nonetheless, I remained confused about the option/feature this morning as I was waiting for my download of Oracle Database 12c Release 12.1.0.2 to complete. So, I spent a little time trying to cut through the fog and get some more information about the In-Memory Option. My first play was to search for the term in the search bar at oracle.com. The following screen shot shows the detritus oracle.com returned due to the historical misuse and term overload–but, please, remember that I’m not blogging about any of that:

(screenshot: In-Memory-12c-Oraclecom-Pollution)

As the screenshot shows one must eyeball their way down through 8 higher-ranking search results that have nothing to do with this very important new feature before one gets to a couple of obscure blog links. All this term overload and search failure monkey-business is annoying, yes, but I’m not blogging about any of that.

What Am I Blogging About?

This is part I in a short series about Oracle licensing ramifications of the In-Memory Option/In-Memory Column Store Feature.

The very first thing I did after installing the software was to invoke the SLOB database create scripts to quickly get me on my way. The following screen shot shows how I discovered that the separately-licensed In-Memory Option/In-Memory Column Store Feature is enabled by default:

(screenshot: inmem1)

Now, this is no huge red flag because separately-licensed features like Real Application Clusters and Partitioning are sort of “on” by default. This doesn’t bother me because a) one can simply unlink RAC to be safe from accidental usage and b) everyone that uses Enterprise Edition uses the Partitioning Option (I am correct on that assertion, right?).  However, I think things are a little different with the In-Memory Option/In-Memory Column Store Feature since it is “on” by default and a simple command like the one in the following screen shot means your next license audit will be, um, more entertaining.

(screenshot: in-mem-timebomb)

OK, please let me point out that I’m trying as hard as I can to not make a mountain out of a mole-hill. I started this post by stating my true feelings about this release. This release is, indeed, hugely important. That said, I do not believe for a second that every Enterprise Edition deployment of Oracle Database 12c Release 12.1.0.2 will need to have the In-Memory Option/In-Memory Column Store Feature in the shopping cart–much unlike Partitioning for example. Given the crushing cost of this option/feature I expect that its use will be very selective. It’s for this reason I wanted to draw people’s attention to the fact that–in my assessment–this option/feature is very easy to use “accidentally.” It really should have a default initialization setting that renders the option/feature nascent–but the reality is quite the opposite.

Summary

I have to make this post short and relegate it to part I in a series because I can’t take it to the next level which is to write about monitoring the feature usage. Why? Well, as I tweeted earlier today, the scripts most widely used for monitoring feature usage are out of date because they don’t (yet) report on the In-Memory Column Store feature. The scripts I allude to are widely known by Google search as MOS 1317265.1. Once these are updated to report usage of the In-Memory Option/In-Memory Column Store Feature I’ll post part II in the series.
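
In the meantime, here is a minimal sketch of the kind of check I mean; it is not the MOS scripts themselves, and the feature name is the one reported by dba_feature_usage_statistics as mentioned at the top of this post:

select name, version, detected_usages, currently_used, last_usage_date
  from dba_feature_usage_statistics
 where name = 'In-Memory Column Store';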

Thoughts?

Filed under: oracle

AWR Warehouse

Asif Momen - Wed, 2014-07-23 20:10
AWR Warehouse is a central repository configured for long-term AWR data retention. It stores AWR snapshots from multiple source databases. Increasing AWR retention in production systems would typically increase the overhead and cost of mission-critical databases, so offloading the AWR snapshots to a central repository is a better idea. Unlike the default AWR retention period of 8 days, the AWR Warehouse default retention period is "forever". However, it is configurable in weeks, months, or years.
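
For comparison, the local retention on a source database is usually checked and adjusted as sketched below; the AWR Warehouse retention itself is managed from Enterprise Manager rather than with this package:

select snap_interval, retention from dba_hist_wr_control;

begin
  dbms_workload_repository.modify_snapshot_settings(
    retention => 60 * 24 * 35);  -- 35 days, specified in minutes
end;
/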

For more information on AWR Warehouse click on the following link for a video tutorial. 

http://www.youtube.com/watch?v=StydMitHtuI&feature=youtu.be

My Oracle Support Community Enhancement Brings New Features

Joshua Solomin - Wed, 2014-07-23 17:33
Be sure to visit our My Oracle Support Community Information Center to see what is new. Choose from the tabs to watch the How to Video Series. You can also enroll for a live webcast on Wednesday, August 6 at 9am PST.

One change: you can now read blogs in My Oracle Support Community. The new Support Blogs space provides access to Support related blogs. The My Oracle Support Blog provides posts on the portal and tools that span all product areas.

Support Blogs also allow you to stay in touch with the latest product-specific news, tools, and troubleshooting tips in a growing list of product blogs maintained by Support engineers. Check back frequently to read new posts and discover new blogs.

My Oracle Support Upgrade

Joshua Solomin - Wed, 2014-07-23 16:17

Over the weekend, we upgraded My Oracle Support. This upgrade brings changes to help you work more effectively with Oracle Support.

Among the areas you will notice enhancements are:

  • The My Oracle Support customer experience
  • My Oracle Support Community
  • Customer User Administration
  • Knowledge Management
For details about the latest features visit the My Oracle Support User Resource Center.

Follow-up: Setting up AWR Warehouse

Jean-Philippe Pinte - Wed, 2014-07-23 15:35
The patch that enables AWR Warehouse in Enterprise Manager 12.1.0.4 is available (see note 1907335.1)!

To enable AWR Warehouse, the Enterprise Manager administrator must:
  • Create an Oracle Database 12.1.0.2 (recommended) or 11.2.0.4+ database instance for the AWR repository
  • Discover the instance in EM12c
  • Apply the patch to the OMS (this makes AWR Warehouse appear in the menu)
  • Apply the patch to the agents of the source servers (i.e. the databases whose AWR data is to be uploaded to the repository); the sources can be 10.2, 11.1, 11.2 (single instance or RAC) and 12.1 (single instance, multitenant or not, and RAC)
  • Configure AWR Warehouse through the EM interface
  • Add each of the sources to be uploaded to the repository

More information:

Vamos to the OTN Latin America Tour!

OTN TechBlog - Wed, 2014-07-23 15:33

Rick Ramsey, OTN Systems Community Manager, just wrote a GREAT blog post about the upcoming OTN Latin America Tour! A brief excerpt is below - go to his blog post to see the full schedule and register for this series of events which kicks off August 2nd.

"Oracle User Groups in Latin America, our friends in the Oracle product teams, Oracle ACES, and the Oracle Technology Network have put together a terrific agenda for the 2014 Tour. Hands-on labs, demos, and presentations for developers and deployers of the technologies in the Oracle stack, from applications all the way to systems, including Oracle Database and trending topics such as Big Data. Presenters will be product experts drawn from the Oracle ACE Community and Oracle product teams."