Feed aggregator

Most efficient way to UNPIVOT a table with large # of columns

Tom Kyte - Mon, 2016-12-12 07:46
Database is 11.2.0.4. At this point, cannot use full featured FLASHBACK ARCHIVE with CONTEXT... but it would have the same issues I think. ENVIRONMENT: I have a change logging table (e.g. CHANGESTABLE) that is basically the mirror image col...
Categories: DBA Blogs

ORA-01460: unimplemented or unreasonable conversion requested in /var/www/html/rest/price/resolve/ResolvePrice.php

Tom Kyte - Mon, 2016-12-12 07:46
When I pass the hardcoded values $PRODUCT_NUM_ARR and $MEMBER_NAME through an Oracle bind variable to execute a stored function, it works fine and I get the result. But when I pass the same values from an array I get the ORA error. I have found diffic...
Categories: DBA Blogs

Limit DOP of automatic statistics gathering

Tom Kyte - Mon, 2016-12-12 07:46
Tom - we run an 11.2.0.4.0 RAC database with Automatic DOP enabled. The 2 RAC nodes have 16 CPU cores each (32 with Hyperthreading) and 128GB of RAM. The database ran smoothly for over a year. The OS is Red Hat Enterprise Linux 6.5 on both nod...
Categories: DBA Blogs

A Guide to the Oracle ALTER TABLE SQL Statement

Complete IT Professional - Mon, 2016-12-12 05:00
The Oracle ALTER TABLE statement allows you to make changes to an existing table. Learn how to use it and see some examples in this guide. What Is The Oracle ALTER TABLE Statement? The ALTER TABLE SQL statement lets you change a table that has already been created. Using the CREATE TABLE statement lets you create […]
Categories: Development

Enhanced Usage Tracking for OBIEE - Now Available as Open Source!

Rittman Mead Consulting - Mon, 2016-12-12 04:00
Introduction

OBIEE provides Usage Tracking as part of the core product functionality. It writes every Logical Query that hits the BI Server directly to a database table, including details of who ran it, when, and how it executed: for how long, how many rows it returned, and so on. This in itself is a veritable goldmine of information about your OBIEE system. All OBIEE deployments should have Usage Tracking enabled, to support performance analysis, capacity planning, catalog rationalisation, and more.
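For illustration, the standard Usage Tracking data can be queried straight from the database. Here is a minimal sketch against the usual S_NQ_ACCT table; the schema owner and exact column names (START_TS, TOTAL_TIME_SEC, ROW_COUNT) can vary between OBIEE versions, so treat it as an assumption to verify against your repository:

-- Minimal sketch only: top users by query volume over the last week from standard Usage Tracking.
-- Assumes the common S_NQ_ACCT table and column names; adjust to your environment.
select   user_name
,        count(*)                       as queries
,        round(avg(total_time_sec), 1)  as avg_secs
,        sum(row_count)                 as total_rows
from     s_nq_acct
where    start_ts >= sysdate - 7
group by user_name
order by queries desc;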

What Usage Tracking doesn't track is interactions between the end user and the Presentation Services component. Presentation Services sits between the end user and the BI Server, from where the actual queries get executed. This means that until a user executes an analysis, there's no record of their actions in Usage Tracking. The audit data is there, but you have to enable and collect it manually, which can be tricky. This is where Enhanced Usage Tracking comes in. It enables the collection and parsing of every click a user makes in OBIEE. For an overview of the potential of this data, see the articles here and here.

Today we're pleased to announce the release into open-source of Enhanced Usage Tracking! You can find the github repository here: https://github.com/RittmanMead/obi-enhanced-usage-tracking.

Highlights of the data that Enhanced Usage Tracking provides include:

  • Which web browsers do people use? Who is accessing OBIEE with a mobile device?

  • Who deleted a catalog object? Who moved it?

  • What dashboards get exported to Excel most frequently, and by whom?

The above visualisations are from both Kibana and OBIEE. The data from Enhanced Usage Tracking can be loaded into Elasticsearch, and is also available in Oracle tables, so you can put OBIEE itself on top of it, or DV:

[image: eut108.png]
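As a flavour of the kind of question you could then answer with OBIEE or DV sitting on top of the Oracle tables, here is a purely hypothetical sketch; the table and column names (EUT_CLICKS, BROWSER, IS_MOBILE) are placeholders for whatever your load script actually creates:

-- Hypothetical sketch: browser and mobile-device breakdown of OBIEE clicks.
-- EUT_CLICKS, BROWSER and IS_MOBILE are assumed names, not the project's actual schema.
select   browser
,        is_mobile
,        count(distinct user_name) as users
,        count(*)                  as clicks
from     eut_clicks
group by browser, is_mobile
order by clicks desc;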

How to use Enhanced Usage Tracking

See the github repository for full detail on how to install and run the code.

TODO

What's left TODO? Here are a few ideas if you'd like to help build on this tool. I've linked each title to the relevant github issue.

TODO 01

The sawlog is a rich source of data, but the Logstash script has to know how to parse it. It's all down to the grok statement, which identifies the fields to extract and defines their delimiters. Use grokdebug.herokuapp.com to help master your syntax. From there, the data can be emitted to CSV and loaded into Oracle.

Here's an example of something yet to build - when items are moved and deleted in the Catalog, it is all logged: what, who, and when. The Logstash grok currently scrapes this, but the data isn't included in the CSV output, nor loaded into Oracle.

[image: eut105.png]
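As a rough sketch of the missing Oracle-side load, the CSV emitted by Logstash could be exposed to the database through an external table and queried (or inserted into a permanent table) from there. The directory object, file name, column list, and request type values below are assumptions and would need to match whatever the grok actually extracts:

-- Hypothetical sketch: expose the Logstash CSV output to Oracle as an external table.
-- EUT_DIR, eut_catalog_events.csv and the column list are assumptions for illustration only.
create table eut_catalog_events_ext (
  event_ts     varchar2(30)
, user_name    varchar2(128)
, request_type varchar2(64)
, object_path  varchar2(4000)
)
organization external (
  type oracle_loader
  default directory eut_dir
  access parameters (
    records delimited by newline
    fields terminated by ',' optionally enclosed by '"'
  )
  location ('eut_catalog_events.csv')
)
reject limit unlimited;

-- Example usage: who moved or deleted what, and when (the request type literals are also placeholders)
select event_ts, user_name, request_type, object_path
from   eut_catalog_events_ext
where  request_type in ('moveItem', 'deleteItem');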

Don't forget to submit a pull request for any changes to the code that would benefit others in the community!

You'll also find loading the data directly into Elasticsearch easier than redefining the Oracle table DDL and load script each time, since in Elasticsearch the 'schema' can evolve based simply on the data that Logstash sends to it.

TODO 02

Version 5 of the Elastic stack was released in late 2016, and it would be good to test this code with it and update the README section above to indicate whether it works - or submit the changes needed for it to do so.

TODO 03

There are lots of possibilities for this data. Auditing who did what, when, is useful (e.g. who deleted a report?). Taking it a step further, are there patterns in user behaviour? Are there certain patterns of clicks that could be identified to highlight users who are struggling to find the data they want? For example, opening lots of presentation folders in the Answers editor before adding columns to the analysis? Can we extend that to identify users who are struggling to use the tool and are going to "churn" (stop using it), and contact them before they do so to help resolve any issues they have?

TODO 04

At the moment the scripts are manual to invoke and run. It would be neat to package this up into a service (or set of services) that could run automagically at server boot.

Until then, using GNU screen is a handy hack for setting scripts running and being able to disconnect from the server without terminating them. It's like using nohup ... &, except you can reconnect to the session itself as and when you want to.

TODO 05

Click events have defined 'Request' types, and these I have roughly grouped together into 'Request Groups' to help describe what the user was doing (e.g. Logon / Edit Report / Run Report). Not all requests have been assigned to request groups. It would be useful to identify all request types and refine the groupings further, as sketched below.
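As a sketch of how the remaining request types could be folded into groups, a simple CASE expression (or a small mapping table) is enough; the request type literals below are invented examples, not the actual values found in the sawlog, and EUT_CLICKS is again an assumed table name:

-- Hypothetical sketch: roll raw request types up into coarser request groups.
-- The literals are made-up examples; substitute the real request types seen in the sawlog.
select request_type
,      case request_type
         when 'logon'           then 'Logon'
         when 'editAnalysis'    then 'Edit Report'
         when 'executeAnalysis' then 'Run Report'
         else                        'Unassigned'
       end as request_group
from   eut_clicks;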

TODO 06

At the moment only clicks in Presentation Services are captured and analysed. I bet the same can be done for Data Visualization/Visual Analyzer too ...

Problems?

Please raise any issues on the github issue tracker. This is open source, so bear in mind that it's no-one's "job" to maintain the code - it's open to the community to use, benefit from, and maintain.

If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do get in touch to discuss our rates.

Categories: BI & Warehousing

Microsoft Ending Support for Office 2007 in October 2017

Steven Chan - Mon, 2016-12-12 02:05

Microsoft is ending support for Office 2007 on October 10, 2017.  The official announcement is published here:

Microsoft will stop producing security updates and non-security updates for Office 2007 after that date.

Office 2007 included Word, Outlook, Excel, PowerPoint and others.  EBS integrations with these products are certified for desktop clients accessing the E-Business Suite today.  Our general policy is that we support certified third-party products as long as the third-party vendor supports them.  When the third-party vendor retires a product, we consider that to be an historical certification for EBS.

What can EBS customers expect after October 2017?

After Microsoft desupports Office 2007 in October 2017:

  • Oracle Support will continue to assist, where possible, in investigating issues that involve Office 2007.
  • Oracle's ability to assist may be limited due to limited access to PCs running Office 2007.
  • Oracle will continue to provide access to existing EBS patches for Office 2007 issues.
  • Oracle will provide new EBS patches only for issues that can be reproduced on later Office versions that Microsoft is actively supporting (e.g. Office 2010, Office 2013, Office 2016).

What should EBS customers do?

Oracle strongly recommends that E-Business Suite customers upgrade their desktops from Office 2007 to the latest certified equivalents.  As of today, those are Office 2010, Office 2013, and Office 2016.

Related Articles

Categories: APPS Blogs

12.2 Index Advanced Compression “High” Part II (One Of My Turns)

Richard Foote - Mon, 2016-12-12 00:17
In Part I, I introduced the new Index Advanced Compression default value of “HIGH”, which has the potential to significantly compress indexes much more than previously possible. This is due to new index compression algorithms that do more than simply de-duplicate indexed values within a leaf block. Previously, any attempt to completely compress a Unique […]
Categories: DBA Blogs

Google Cloud Platform Fundamentals in Sydney

Pakistan's First Oracle Blog - Sun, 2016-12-11 22:58
Just finished a one-day Google Cloud Platform Fundamentals training at Google's Sydney office. GCP is pretty cool and I think I like it.

Lots of our customers at Pythian are already hosting on the cloud, migrating to it, or thinking of doing so. Pythian already has a huge presence in the cloud using various technologies.

So it was good to learn something about Google's cloud offering. It was a pleasant surprise as it all made sense. From App Engine to Compute Engine and from Bigtable to BigQuery, the features are sound, mature and ready to use.

The dashboard is simple too. I will be blogging more about it as I play with it in the coming days.
Categories: DBA Blogs

Authentication and Authorization Identifiers

Anthony Shorten - Sun, 2016-12-11 19:23

In Oracle Utilities Application Framework, the user identification is actually divided into two parts:

  • Authentication Identifier (aka Login Id) - This is the identifier used for authentication (challenge/response) for the product. It is up to 256 characters in length and must exist in the configured security repository so that it can be checked against. By default, if you are using Oracle WebLogic, there is an internal LDAP-based security system that can be used for this purpose. It is possible to link to external security repositories using the wide range of Oracle WebLogic security providers included in the installation. This applies to Single Sign-On solutions as well.
  • Authorization Identifier (aka UserId) - This is the short user identifier (up to 8 characters in length) used for all service and action authorization as well as low level access. 

The two identifiers are separated for a couple of key reasons:

  • Authentication Identifiers can be changed. Use cases such as a change of name or changes to the business mean that the authentication identifier needs to be changeable. As long as the security repository is changed accordingly, the identifier remains in synchronization and login continues to work.
  • Authentication Identifiers are typically email addresses, which are subject to change. For example, if the company is acquired, the user's domain will most probably change.
  • Changes to Authentication Identifiers do not affect any existing audit or authorization records. The authorization user is used for internal processing, so once you have been successfully authenticated the authentication identifier, while tracked, is not used for security internally.
  • Authorization Identifiers are not changeable. They can be related to the Authentication Identifier, for example using the first initial and the first 7 characters of the surname (as sketched after this list), or be randomly generated by an external Identity Management solution.
  • One of the main reasons the Authorization Identifier is limited in size is to allow a wide range of security solutions to be hooked into the architecture and provide an efficient means of tracking. For example, the identifier is propagated in the connection across the architecture to allow for end to end tracking of transactions.
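For a purely illustrative example of the "first initial plus first seven characters of the surname" convention mentioned above (the name values and column aliases here are made up, not part of the product):

-- Illustrative only: derive an 8-character authorization identifier from a name.
select upper(substr(first_name, 1, 1) || substr(last_name, 1, 7)) as user_id
from   (select 'Jane' as first_name, 'Richardson' as last_name from dual);
-- returns JRICHARD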

Security has been augmented in the last few releases of the Oracle Utilities Application Framework to allow various flexible levels of control and tracking. Each implementation can decide which aspects of security to track, using the tools available in the product or third-party tools if preferred.

Queue-based Concurrent Stats Prototype Implementation

Randolf Geist - Sun, 2016-12-11 17:14
This is just a prototype of a queue-based concurrent statistics implementation - using the same basic approach I used a couple of years ago to create indexes concurrently.

There are reasons why such an implementation might be useful - in 11.2.0.x the built-in Concurrent Stats feature might turn out not to be that efficient: it creates lots of jobs that potentially attempt to gather statistics for different sub-objects of the same table at the same time, which can lead to massive contention at the Library Cache level due to the exclusive Library Cache locks required by DDL / DBMS_STATS calls.

In 12.1 the Concurrent Stats feature obviously got a major re-write, with more intelligent processing of what should be processed concurrently and how - some of the details are exposed via the new view DBA_OPTSTAT_OPERATION_TASKS - but again I've seen it running lots of very small jobs serially, one after the other, in the main session, which can be a performance problem if the sheer number of objects to analyze is huge.

This prototype tries to work around these problems by using a queue-based approach for true concurrent processing, combined with an attempt to use some "intelligent" ordering of the objects to analyze in the hope of minimizing contention at the Library Cache level.

This prototype determines the objects to gather statistics on by calling DBMS_STATS.GATHER_DATABASE_STATS using one of the available LIST* options - so it's supposed to replace a call to GATHER_DATABASE_STATS or the built-in default nightly statistics job.

The jobs for concurrent stats gathering are created using DBMS_SCHEDULER and a custom job class, which offers the feature of binding the jobs to a specific service. This can come in handy if, for example, you want these jobs to execute only on certain node(s) of a RAC cluster database.

It comes with the following (known) limitations:

- The current implementation offers only rudimentary logging to a very simple log table, which also gets truncated at the start of each run, so no history from previous runs gets retained. In 12c this is not such an issue since DBA_OPTSTAT_OPERATION_TASKS contains a lot of details for each individual stats gathering call.

- Currently only objects of type TABLE returned by GATHER_DATABASE_STATS are considered, assuming the CASCADE option will take care of any indexes to gather statistics for

- The default behaviour attempts to make use of all available CPUs of a single node by starting as many threads as defined via CPU_COUNT. If you explicitly specify the number of concurrent threads the default behaviour is to use a DEGREE (or DOP) per gather stats operation that again makes use of all available CPU resources by using a DEGREE = CPU_COUNT divided by number_of_threads. If you don't want that you will have to specify the DEGREE / DOP explicitly, too

- I haven't spent time investigating how this behaves with regard to incremental statistics - since it's possible that GLOBAL and (sub-)partition statistics of the same object get gathered in different jobs, potentially at the same time and in random order (so GLOBAL prior to partition, for example), the outcome and behaviour with incremental statistics turned on could be a problem

More details can be found in the comments section of the code, in particular what privileges might be required and how you could replace the default statistics job if desired.

The script provided includes a de-installation and installation part of the code. All that needs to be called then to start a database-wide gather statistics processing is the main entry point "pk_stats_concurrent.stats_concurrent" using any optional parameters as desired - consider in particular the parameters to control the number of threads and the intra-operation DOP as just outlined. See the code comments for a detailed description of the available parameters.
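For example, assuming the package is installed under a suitably privileged account, a run using eight concurrent threads, an intra-operation DOP of 2, and only stale objects could be started like this (parameter names as defined in the package specification below; the values are just examples):

-- Example invocation: 8 stats threads, DOP 2 per GATHER_TABLE_STATS call, stale objects only
begin
  pk_stats_concurrent.stats_concurrent(
    p_parallel_degree => 8
  , p_intra_degree    => 2
  , p_gather_option   => 'LIST STALE'
  );
end;
/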

--------------------------------------------------------------------------------
--
-- Script: pk_stats_concurrent.sql
--
-- Author: Randolf Geist
--
-- Copyright: http://oracle-randolf.blogspot.com
--
-- Purpose: A queue based concurrent stats implementation - installation script
--
-- Usage: @pk_stats_concurrent
--
-- The script will first drop all objects (can be skipped)
--
-- And then attempt to create the objects required (can be skipped)
--
--------------------------------------------------------------------------------

spool pk_stats_concurrent.log

prompt Concurrent Stats - Deinstallation
prompt *----------------------------------------------------*
prompt This script will now attempt to drop the objects
prompt belonging to this installation
accept skip_deinstall prompt 'Hit Enter to continue, CTRL+C to cancel or enter S to skip deinstall: '

set serveroutput on

declare
procedure exec_ignore_fail(p_sql in varchar2)
as
begin
execute immediate p_sql;
exception
when others then
dbms_output.put_line('Error executing: ' || p_sql);
dbms_output.put_line('Error message: ' || SQLERRM);
end;
begin
if upper('&skip_deinstall') = 'S' then
null;
else
exec_ignore_fail('begin pk_stats_concurrent.teardown_aq; end;');
exec_ignore_fail('drop table stats_concurrent_log');
exec_ignore_fail('drop type stats_concurrent_info force');
exec_ignore_fail('drop package pk_stats_concurrent');
exec_ignore_fail('begin dbms_scheduler.drop_job_class(''CONC_STATS''); end;');
end if;
end;
/

prompt Concurrent Stats - Installation
prompt *----------------------------------------------------*
prompt This script will now attempt to create the objects
prompt belonging to this installation
PAUSE Hit CTRL+C to cancel, ENTER to continue...

/**
* The log table for minimum logging of the concurrent execution threads
* Since we cannot access the DBMS_OUTPUT of these separate processes
* This needs to be cleaned up manually if / when required
**/
create table stats_concurrent_log (log_timestamp timestamp, sql_statement clob, message clob);

/**
* The single object type used as payload in the AQ queue for concurrent execution
* Each message will have a description of the index plus the actual DDL text as payload
**/
create or replace type stats_concurrent_info as object
(
ownname varchar2(30)
, tabname varchar2(30)
, partname varchar2(30)
, degree number
, granularity varchar2(30)
);
/

show errors

create or replace package pk_stats_concurrent authid current_user
as

------------------------------------------------------------------------------
-- $Id$
------------------------------------------------------------------------------

/**
* PK_STATS_CONCURRENT.SQL
*
* Created By : Randolf Geist (http://oracle-randolf.blogspot.com)
* Creation Date : 31-OCT-2016
* Last Update : 06-DEC-2016
* Authors : Randolf Geist (RG)
*
* History :
*
* When | Who | What
* ----------------------------------
* 31-OCT-2016 | RG | Created
* 06-DEC-2016 | RG | This header comment updated
*
* Description :
*
* This is a simple prototype implementation for the given task of gathering database stats
* concurrently, in case you are not satisfied with the built-in concurrent stats option available since 11.2.0.x
*
* In 11.2, the CONCURRENT stats option creates as many jobs as there are objects to gather
* And the JOB_QUEUE_PROCESSES parameter then controls the number of concurrent jobs running
* Since the ordering of execution isn't really optimized, many of these concurrent jobs might attempt to gather stats on the same object in case it is (sub)partitioned
* This can lead to significant contention on Library Cache level (due to exclusive Library Cache Locks required by DDL / DBMS_STATS)
*
* In 12.1 the CONCURRENT stats option was obviously completely rewritten and uses some more intelligent processing
* by calculating if and yes how many jobs should run concurrently for what kind of objects (see for example the new DBA_OPTSTAT_OPERATION_TASKS view that exposes some of these details)
* Still I've observed many occasions with this new implementation where lots of objects were deliberately gathered in the main session
* one after the other which doesn't really make good use of available resources in case many objects need to be analyzed
*
* This implementation tries to work around these points by using a simple queue-based approach for true concurrent stats processing
* combined with an attempt to distribute the tables to analyze across the different threads in a way that minimizes the contention on Library Cache level
*
* It needs to be installed / executed under a suitable account that has the privileges to create queues, types, packages, tables, jobs and job classes and gather stats on the whole database
*
* A sample user profile could look like this:

create user conc_stats identified by conc_stats;

grant create session, create table, create procedure, create type, create job, manage scheduler, analyze any, analyze any dictionary to conc_stats;

grant execute on sys.dbms_aq to conc_stats;

grant execute on sys.dbms_aqadm to conc_stats;

grant select on sys.v_$parameter to conc_stats;

alter user conc_stats default tablespace users;

alter user conc_stats quota unlimited on users;

* Parameters to be checked, depending on concurrency desired:
*
* job_queue_processes: Needs to be set high enough to accommodate for the concurrent stats threads spawned. By default this package spawns CPU_COUNT concurrent threads
* parallel_max_servers: If a stats thread is supposed to use Parallel Execution (degree > 1) for gathering stats you'll need at least threads * degree Parallel Slaves configured
* services: It's possible to specify a service to have the stats threads only executed on RAC nodes that run that service
*
* The jobs are created under the same job class, currently hard coded value CONC_STATS - this makes the handling easier in case you want to stop / drop the jobs submitted manually
*
* The job class name can be passed to calls to DBMS_SCHEDULER.DROP_JOB or STOP_JOB - remember that job classes are owned by SYS, so you have to specify SYS.CONC_STATS for the job class name used here
*
* The main entry point STATS_CONCURRENT is all you need to call to start concurrent stats gathering on the database
* similar to GATHER_DATABASE_STATS using one of the options GATHER, GATHER STALE or GATHER AUTO (default) - here you have to use LIST EMPTY, LIST STALE or LIST AUTO (default)
*
* The default behaviour when not specifying any parameters is to start as many threads as there are CPUs by using the CPU_COUNT parameter
* If you want this to be multiplied by the number of instances in a RAC cluster uncomment the CLUSTER_DATABASE_INSTANCES reference below in the code (assuming same CPU_COUNT on all nodes)
*
* This also means that the "intra" parallelism per gather_table_stats call will be one in such a case since the intra parallelism is calculated by default as CPU_COUNT / number of threads
*
* If you don't want to have that many threads / PX slaves started, specify the number of concurrent threads and an intra-operation DOP explicitly when calling STATS_CONCURRENT
*
* If you want to replace the default nightly stats job with this here, the following steps should achieve this:

BEGIN DBMS_AUTO_TASK_ADMIN.disable(
client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
END;

BEGIN DBMS_SCHEDULER.CREATE_JOB(
job_name => '',
schedule_name => 'MAINTENANCE_WINDOW_GROUP',
job_type => 'PLSQL_BLOCK',
job_action => 'begin pk_stats_concurrent.stats_concurrent(); end;',
comments => 'auto optimizer stats collection replacement using concurrent stats operations based on AQ',
enabled => true);
END;

* Please ensure the job is submitted under the account it's supposed to be run - using a different account like SYS to submit the job for a different schema
* seems to cause problems with privileges (insufficient privileges error messages), at least this was reported to me
*
* The code at present only processes objects of type TABLE returned by GATHER_DATABASE_STATS but not indexes
* assuming that these should be covered by the CASCADE functionality
*
* Note: This script is a prototype and comes with NO warranty. Use at your own risk and test/adjust in your environment as necessary
* before using it in any production-like case
*
* @headcom
**/

subtype oracle_object is varchar2(30);

/**
* Let the procedure stats_concurrent decide itself which degree to use.
* At present this means simply to spawn as many child threads as defined by the CPU_COUNT parameter
**/
G_AUTO_PARALLEL_DEGREE constant integer := null;

/**
* The main entry point to gather statistics via parallel threads / AQ
* @param p_parallel_degree The number of threads to start; G_AUTO_PARALLEL_DEGREE means use the CPU_COUNT parameter to determine the number of threads automatically
* @param p_intra_degree The DOP to use per stats operation, by default calculate DOP based on CPU_COUNT and number of threads to run concurrently (Default DOP = CPU_COUNT / number of threads)
* @param p_service Specify a service if you want the jobs to be assigned to that particular service, default NULL
* @param p_gather_option What to pass to GATHER_DATABASE_STATS as option, default LIST AUTO
* @param p_optional_init Optionally a SQL can be passed usually used to initialize the session
for example forcing a particular parallel degree
**/
procedure stats_concurrent(
p_parallel_degree in integer default G_AUTO_PARALLEL_DEGREE
, p_intra_degree in integer default null
, p_service in varchar2 default null
, p_gather_option in varchar2 default 'LIST AUTO'
, p_optional_init in varchar2 default null
);

/**
* Setup the AQ infrastructure (Queue tables, Queues)
**/
procedure setup_aq;

/**
* Teardown the AQ infrastructure (Queue tables, Queues)
**/
procedure teardown_aq;

/**
* Helper function to populate the AQ queue with data to process
* @param p_gather_option What to pass to GATHER_DATABASE_STATS as option, default LIST AUTO
**/
function list_stale_database_stats (
p_gather_option in varchar2 default 'LIST AUTO'
)
return dbms_stats.objecttab pipelined;

/**
* Populate the AQ queue with data to process
* @param p_parallel_degree The number of threads to use - will be used for proper data preparation / queueing order
* @param p_intra_degree The DOP to use per stats operation, by default calculate DOP based on CPU_COUNT and number of threads to run concurrently
* @param p_gather_option What to pass to GATHER_DATABASE_STATS as option, default LIST AUTO
**/
procedure populate_queue(
p_parallel_degree in integer
, p_intra_degree in integer default null
, p_gather_option in varchar2 default 'LIST AUTO'
);

/**
* This gets called for every stats thread
* It pulls the object to gather from the AQ queue
* @param p_optional_init Optionally a SQL can be passed usually used to initialize the session
for example forcing a particular parallel degree
**/
procedure stats_thread(
p_optional_init in varchar2 default null
);

end pk_stats_concurrent;
/

show errors

create or replace package body pk_stats_concurrent
as
------------------------------------------------------------------------------
-- $Id$
------------------------------------------------------------------------------

/**
* PK_STATS_CONCURRENT.SQL
*
* Created By : Randolf Geist (http://oracle-randolf.blogspot.com)
* Creation Date : 31-OCT-2016
* Last Update : 06-DEC-2016
* Authors : Randolf Geist (RG)
*
* History :
*
* When | Who | What
* ----------------------------------
* 31-OCT-2016 | RG | Created
* 06-DEC-2016 | RG | This header comment updated
*
* Description :
*
* This is a simple prototype implementation for the given task of gathering database stats
* concurrently, in case you are not satisfied with the built-in concurrent stats option available since 11.2.0.x
*
* In 11.2, the CONCURRENT stats option creates as many jobs as there are objects to gather
* And the JOB_QUEUE_PROCESSES parameter then controls the number of concurrent jobs running
* Since the ordering of execution isn't really optimized, many of these concurrent jobs might attempt to gather stats on the same object in case it is (sub)partitioned
* This can lead to significant contention on Library Cache level (due to exclusive Library Cache Locks required by DDL / DBMS_STATS)
*
* In 12.1 the CONCURRENT stats option was obviously completely rewritten and uses some more intelligent processing
* by calculating if and yes how many jobs should run concurrently for what kind of objects (see for example the new DBA_OPTSTAT_OPERATION_TASKS view that exposes some of these details)
* Still I've observed many occasions with this new implementation where lots of objects were deliberately gathered in the main session
* one after the other which doesn't really make good use of available resources in case many objects need to be analyzed
*
* This implementation tries to work around these points by using a simple queue-based approach for true concurrent stats processing
* combined with an attempt to distribute the tables to analyze across the different threads in a way that minimizes the contention on Library Cache level
*
* It needs to be installed / executed under a suitable account that has the privileges to create queues, types, packages, tables, jobs and job classes and gather stats on the whole database
*
* A sample user profile could look like this:

create user conc_stats identified by conc_stats;

grant create session, create table, create procedure, create type, create job, manage scheduler, analyze any, analyze any dictionary to conc_stats;

grant execute on sys.dbms_aq to conc_stats;

grant execute on sys.dbms_aqadm to conc_stats;

grant select on sys.v_$parameter to conc_stats;

alter user conc_stats default tablespace users;

alter user conc_stats quota unlimited on users;

* Parameters to be checked, depending on concurrency desired:
*
* job_queue_processes: Needs to be set high enough to accommodate for the concurrent stats threads spawned. By default this package spawns CPU_COUNT concurrent threads
* parallel_max_servers: If a stats thread is supposed to use Parallel Execution (degree > 1) for gathering stats you'll need at least threads * degree Parallel Slaves configured
* services: It's possible to specify a service to have the stats threads only executed on RAC nodes that run that service
*
* The jobs are created under the same job class, currently hard coded value CONC_STATS - this makes the handling easier in case you want to stop / drop the jobs submitted manually
*
* The job class name can be passed to calls to DBMS_SCHEDULER.DROP_JOB or STOP_JOB - remember that job classes are owned by SYS, so you have to specify SYS.CONC_STATS for the job class name used here
*
* The main entry point STATS_CONCURRENT is all you need to call to start concurrent stats gathering on the database
* similar to GATHER_DATABASE_STATS using one of the options GATHER, GATHER STALE or GATHER AUTO (default) - here you have to use LIST EMPTY, LIST STALE or LIST AUTO (default)
*
* The default behaviour when not specifying any parameters is to start as many threads as there are CPUs by using the CPU_COUNT parameter
* If you want this to be multiplied by the number of instances in a RAC cluster uncomment the CLUSTER_DATABASE_INSTANCES reference below in the code (assuming same CPU_COUNT on all nodes)
*
* This also means that the "intra" parallelism per gather_table_stats call will be one in such a case since the intra parallelism is calculated by default as CPU_COUNT / number of threads
*
* If you don't want to have that many threads / PX slaves started, specify the number of concurrent threads and an intra-operation DOP explicitly when calling STATS_CONCURRENT
*
* If you want to replace the default nightly stats job with this here, the following steps should achieve this:

BEGIN DBMS_AUTO_TASK_ADMIN.disable(
client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
END;

BEGIN DBMS_SCHEDULER.CREATE_JOB(
job_name => '',
schedule_name => 'MAINTENANCE_WINDOW_GROUP',
job_type => 'PLSQL_BLOCK',
job_action => 'begin pk_stats_concurrent.stats_concurrent(); end;',
comments => 'auto optimizer stats collection replacement using concurrent stats operations based on AQ',
enabled => true);
END;

* Please ensure the job is submitted under the account it's supposed to be run - using a different account like SYS to submit the job for a different schema
* seems to cause problems with privileges (insufficient privileges error messages), at least this was reported to me
*
* The code at present only processes objects of type TABLE returned by GATHER_DATABASE_STATS but not indexes
* assuming that these should be covered by the CASCADE functionality
*
* Note: This script is a prototype and comes with NO warranty. Use at your own risk and test/adjust in your environment as necessary
* before using it in any production-like case
*
* @headcom
**/

-- The queue name to use for AQ operations
G_QUEUE_NAME constant varchar2(24) := 'STATS_QUEUE';

/**
* Rudimentary logging required by the parallel threads since the
* serveroutput generated can not be accessed
* @param p_sql The SQL to log that raised the error
* @param p_error_msg The error message to log
**/
procedure log(
p_sql in clob
, p_msg in clob
)
as
-- We do this in an autonomous transaction since we want the logging
-- to be visible while any other main transactions might be still going on
pragma autonomous_transaction;
begin
insert into stats_concurrent_log(
log_timestamp
, sql_statement
, message
) values (
systimestamp
, p_sql
, p_msg
);
commit;
end log;

/**
* Execute a SQL statement potentially in a different schema (dummy implementation here).
* The string will be put to serveroutput before being executed
* @param p_owner The schema to execute
* @param p_sql The SQL to execute
* @param p_log_error Should an error be logged or not. Default is true
**/
procedure execute(
p_owner in oracle_object
, p_sql in clob
, p_log_error in boolean default true
)
as
begin
-- dbms_output.put_line('Owner: ' || p_owner || ' SQL: ' || substr(p_sql, 1, 4000));
$if dbms_db_version.ver_le_10 $then
declare
a_sql dbms_sql.varchar2a;
n_start_line number;
n_end_line number;
c integer;
n integer;
LF constant varchar2(10) := '
';
len_LF constant integer := length(LF);
begin
n_start_line := 1 - len_LF;
loop
n_end_line := instr(p_sql, LF, n_start_line + len_LF);
a_sql(a_sql.count + 1) := substr(p_sql, n_start_line + len_LF, case when n_end_line = 0 then length(p_sql) else n_end_line end - (n_start_line + len_LF) + len_LF);
-- dbms_output.put_line(a_sql.count || ':' || a_sql(a_sql.count));
exit when n_end_line = 0;
n_start_line := n_end_line;
end loop;
c := dbms_sql.open_cursor;
dbms_sql.parse(c, a_sql, 1, a_sql.count, false, dbms_sql.NATIVE);
n := dbms_sql.execute(c);
dbms_sql.close_cursor(c);
end;
$elsif dbms_db_version.ver_le_11 $then
execute immediate p_sql;
$else
execute immediate p_sql;
$end
exception
when others then
dbms_output.put_line('Error: ' || SQLERRM);
if p_log_error then
log(p_sql, SQLERRM);
end if;
raise;
end execute;

/**
* Execute a SQL statement potentially in a different schema (dummy implementation here).
* This one uses an autonomous transaction.
* The string will be put to serveroutput before being executed
* @param p_owner The schema to execute
* @param p_sql The SQL to execute
* @param p_log_error Should an error be logged or not. Default is true
**/
procedure execute_autonomous(
p_owner in oracle_object
, p_sql in clob
, p_log_error in boolean default true
)
as
pragma autonomous_transaction;
begin
execute(p_owner, p_sql, p_log_error);
end execute_autonomous;

/**
* Setup the AQ infrastructure (Queue tables, Queues)
**/
procedure setup_aq
as
begin
begin
execute(
null
, 'begin dbms_aqadm.create_queue_table(
queue_table => ''' || G_QUEUE_NAME || '''
, queue_payload_type => ''stats_concurrent_info''
); end;'
);
exception
when others then
dbms_output.put_line('Error creating Queue table: ' || SQLERRM);
raise;
end;

begin
execute(
null
, 'begin dbms_aqadm.create_queue(
queue_name => ''' || G_QUEUE_NAME || '''
, queue_table => ''' || G_QUEUE_NAME || '''
); end;'
);
exception
when others then
dbms_output.put_line('Error creating Queue: ' || SQLERRM);
raise;
end;

begin
execute(
null
, 'begin dbms_aqadm.start_queue(
queue_name => ''' || G_QUEUE_NAME || '''
); end;'
);
exception
when others then
dbms_output.put_line('Error starting Queue: ' || SQLERRM);
raise;
end;
end setup_aq;

/**
* Teardown the AQ infrastructure (Queue tables, Queues)
**/
procedure teardown_aq
as
begin
begin
execute(
null
, 'begin dbms_aqadm.stop_queue(
queue_name => ''' || G_QUEUE_NAME || '''
, wait => true
); end;'
, false
);
exception
when others then
dbms_output.put_line('Error stopping Queue: ' || SQLERRM);
-- raise;
end;

begin
execute(
null
, 'begin dbms_aqadm.drop_queue(
queue_name => ''' || G_QUEUE_NAME || '''
); end;'
, false
);
exception
when others then
dbms_output.put_line('Error dropping Queue: ' || SQLERRM);
-- raise;
end;

begin
execute(
null
, 'begin dbms_aqadm.drop_queue_table(
queue_table => ''' || G_QUEUE_NAME || '''
, force => true
); end;'
, false
);
exception
when others then
dbms_output.put_line('Error dropping Queue table: ' || SQLERRM);
-- raise;
end;

end teardown_aq;

/**
* Helper function to populate the AQ queue with data to process
* @param p_gather_option What to pass to GATHER_DATABASE_STATS as option, default LIST AUTO
**/
function list_stale_database_stats (
p_gather_option in varchar2 default 'LIST AUTO'
)
return dbms_stats.objecttab pipelined
as
pragma autonomous_transaction;
m_object_list dbms_stats.objecttab;
begin
if p_gather_option not in (
'LIST AUTO', 'LIST STALE','LIST EMPTY'
) then
null;
else
dbms_stats.gather_database_stats(
options => p_gather_option,
objlist => m_object_list
);
for i in 1..m_object_list.count loop
pipe row (m_object_list(i));
end loop;
end if;
return;
end list_stale_database_stats;

/**
* Populate the AQ queue with data to process
* @param p_parallel_degree The number of threads to use - will be used for proper data preparation / queueing order
* @param p_intra_degree The DOP to use per stats operation, by default calculate DOP based on CPU_COUNT and number of threads to run concurrently
* @param p_gather_option What to pass to GATHER_DATABASE_STATS as option, default LIST AUTO
**/
procedure populate_queue(
p_parallel_degree in integer
, p_intra_degree in integer default null
, p_gather_option in varchar2 default 'LIST AUTO'
)
as
enq_msgid raw(16);
payload stats_concurrent_info := stats_concurrent_info(null, null, null, null, null);
n_dop integer;
begin
-- By default determine what intra-operation DOP to use depending on how many concurrent stats threads are supposed to run
select nvl(p_intra_degree, ceil((select to_number(value) from v$parameter where name = 'cpu_count') / p_parallel_degree)) as dop
into n_dop
from dual;
-- Populate the queue and use some "intelligent" ordering attempting to minimize (library cache) contention on the objects
for rec in (
with
-- The baseline, all TABLE objects returned by GATHER_DATABASE_STATS LIST* call
a as (
select /*+ materialize */ rownum as rn, a.* from table(pk_stats_concurrent.list_stale_database_stats(p_gather_option)) a where objtype = 'TABLE'
),
-- Assign all table, partitions and subpartitions to p_parallel_degree buckets
concurrent_stats as (
select ntile(p_parallel_degree) over (order by rn) as new_order, a.* from a where partname is null
union all
select ntile(p_parallel_degree) over (order by rn) as new_order, a.* from a where partname is not null and subpartname is null
union all
select ntile(p_parallel_degree) over (order by rn) as new_order, a.* from a where partname is not null and subpartname is not null
),
-- Now assign a row number within each bucket
b as (
select c.*, row_number() over (partition by new_order order by rn) as new_rn from concurrent_stats c
)
-- And pick one from each bucket in turn for queuing order
select
ownname
, objname as tabname
, coalesce(subpartname, partname) as partname
, n_dop as degree
, case when partname is null then 'GLOBAL' when partname is not null and subpartname is null then 'PARTITION' else 'SUBPARTITION' end as granularity
from
b
order by
new_rn, new_order
) loop
payload.ownname := rec.ownname;
payload.tabname := rec.tabname;
payload.partname := rec.partname;
payload.degree := rec.degree;
payload.granularity := rec.granularity;
-- TODO: Enqueue via array using ENQUEUE_ARRAY
execute immediate '
declare
eopt dbms_aq.enqueue_options_t;
mprop dbms_aq.message_properties_t;
begin
dbms_aq.enqueue(
queue_name => ''' || G_QUEUE_NAME || ''',
enqueue_options => eopt,
message_properties => mprop,
payload => :payload,
msgid => :enq_msgid);
end;'
using payload, out enq_msgid;
end loop;
commit;
end populate_queue;

/**
* This gets called for every stats thread
* It pulls the object to gather from the AQ queue
* @param p_optional_init Optionally a SQL can be passed usually used to initialize the session
for example forcing a particular parallel degree
**/
procedure stats_thread(
p_optional_init in varchar2 default null
)
as
deq_msgid RAW(16);
payload stats_concurrent_info;
no_messages exception;
pragma exception_init(no_messages, -25228);
s_sql clob;
begin
if p_optional_init is not null then
execute(null, p_optional_init);
end if;

-- If the VISIBILITY is set to IMMEDIATE
-- it will cause the "queue transaction" to be committed
-- Which means that the STOP_QUEUE call with the WAIT option will
-- be able to stop the queue while the processing takes place
-- and the queue table can be monitored for progress
loop
begin
execute immediate '
declare
dopt dbms_aq.dequeue_options_t;
mprop dbms_aq.message_properties_t;
begin
dopt.visibility := dbms_aq.IMMEDIATE;
dopt.wait := dbms_aq.NO_WAIT;
dbms_aq.dequeue(
queue_name => ''' || G_QUEUE_NAME || ''',
dequeue_options => dopt,
message_properties => mprop,
payload => :payload,
msgid => :deq_msgid);
end;'
using out payload, out deq_msgid;
s_sql := '
begin
dbms_stats.gather_table_stats(
ownname => ''' || payload.ownname || '''
, tabname => ''' || payload.tabname || '''
, partname => ''' || payload.partname || '''
, degree => ' || payload.degree || '
, granularity => ''' || payload.granularity || '''
);
end;
';
-- Execute the command
log(s_sql, 'Ownname: ' || payload.ownname || ' Tabname: ' || payload.tabname || ' Partname: ' || payload.partname || ' Degree: ' || payload.degree || ' Granularity: ' || payload.granularity);
begin
execute_autonomous(payload.ownname, s_sql);
exception
/*
when object_already_exists then
null;
when object_does_not_exist then
null;
*/
when others then
null;
end;
exception
when no_messages then
exit;
end;
end loop;
commit;
end stats_thread;

/**
* The main entry point to gather statistics via parallel threads / AQ
* @param p_parallel_degree The number of threads to start G_AUTO_PARALLEL_DEGREE means use the CPU_COUNT (but not
CLUSTER_DATABASE_INSTANCES parameter, commented out below) to determine number of threads automatically
* @param p_intra_degree The DOP to use per stats operation, by default calculate DOP based on CPU_COUNT and number of threads to run concurrently
* @param p_service Specify a service if you want the jobs to be assigned to that particular service, default NULL
* @param p_gather_option What to pass to GATHER_DATABASE_STATS as option, default LIST AUTO
* @param p_optional_init Optionally a SQL can be passed usually used to initialize the session
for example forcing a particular parallel degree
**/
procedure stats_concurrent(
p_parallel_degree in integer default G_AUTO_PARALLEL_DEGREE
, p_intra_degree in integer default null
, p_service in varchar2 default null
, p_gather_option in varchar2 default 'LIST AUTO'
, p_optional_init in varchar2 default null
)
as
n_cpu_count binary_integer;
n_instance_count binary_integer;
n_thread_count binary_integer;
strval varchar2(256);
partyp binary_integer;
e_job_class_exists exception;
pragma exception_init(e_job_class_exists, -27477);
s_job_class constant varchar2(30) := 'CONC_STATS';
begin
-- Truncate the log table
execute immediate 'truncate table stats_concurrent_log';
-- Just in case something has been left over from a previous run
teardown_aq;
setup_aq;
-- Populate the queue
populate_queue(p_parallel_degree, p_intra_degree, p_gather_option);
-- Determine auto degree of parallelism
partyp := dbms_utility.get_parameter_value('cpu_count', n_cpu_count, strval);
partyp := dbms_utility.get_parameter_value('cluster_database_instances', n_instance_count, strval);
n_thread_count := nvl(p_parallel_degree, n_cpu_count/* * n_instance_count*/);
-- Create/use a common job class, makes job handling easier and allows binding to a specific service
begin
dbms_scheduler.create_job_class(s_job_class);
exception
when e_job_class_exists then
null;
end;
-- Assign jobs to a particular service if requested
if p_service is null then
dbms_scheduler.set_attribute_null('SYS.' || s_job_class, 'SERVICE');
else
dbms_scheduler.set_attribute('SYS.' || s_job_class, 'SERVICE', p_service);
end if;
-- Submit the jobs
for i in 1..n_thread_count loop
dbms_scheduler.create_job(
job_name => s_job_class || '_' || i
, job_type => 'PLSQL_BLOCK'
, job_class => s_job_class
, enabled => true
, job_action => 'begin dbms_session.set_role(''ALL''); pk_stats_concurrent.stats_thread(''' || p_optional_init || '''); end;'
);
end loop;
-- Just in case anyone wants to use DBMS_JOB instead we need to commit the DBMS_JOB.SUBMIT
commit;
--execute immediate 'begin dbms_lock.sleep(:p_sleep_seconds); end;' using p_sleep_seconds;
--teardown_aq;
end stats_concurrent;
end pk_stats_concurrent;
/

show errors

spool off

OAUX SaaS Sizzle Meets Sinterklaas at Oracle Cloud Day 2016 Amsterdam

Usable Apps - Sun, 2016-12-11 17:13

Oracle Cloud Day 2016, held in the RAI Amsterdam, was a huge, rip-roaring, soaraway success!

Oracle Cloud Day 2016, Amsterdam

OAUX Was There 

Oracle Applications User Experience (OAUX) attended and was in action, showing off our SaaS user experience (UX) and innovation goodness and also contributing a UX enablement for developers session to the day's development track, sponsored by Oracle Partner AMIS NL (@AmisNL).

We were also delighted to make even more Oracle Cloud connections happen through the ever-brilliant Oracle PaaS and Fusion Middleware community leader Jürgen Kress (@soacommunity), Senior Manager for SOA/FMW Partner Programs in EMEA, who provided  the Oracle Cloud services used by the partner teams during the on-site hackathon. He’s like our very own Sinterklaas as a Service for Oracle Partners developing cloud solutions!

Making an Entrance in Style 

What a kickoff! 12-year-old Micha Barenholz got the day off to a super start with a dramatic entry on a hoverboard, reminding us why we need to talk to technology users like him and why the cloud rocks the world of perennials and millennials alike.

Micha kicks off the Cloud Day 2016 NL Event

Micha makes the rest of us look slightly old, but we should all be thinking like him. Age is just a number!

One Big Audience 

At the last count I heard, there were over 1,100 people in attendance and a strong partner presence of close to 300. It was the largest Oracle Cloud Day event I have been to, and the best! With tracks dedicated to CX, HCM, ERP, PaaS, IaaS, development, and the rest, all in one fast-paced, business-like, and fun environment, what more could you ask for?

There was a strong turnout for my session, co-delivered with Oracle Nederland Director of Business Development Jan Leemans (@janleemans), about cloud services and UX for the developer community. I covered our Rapid Development Kit UX enablement offerings as part of a rich developer track that included Java, UX, Mobile, and more on Oracle Cloud services.

Ultan and Jan set up the cloud developer talk, overseen by Martjin

Setting up our Cloud services and RDK talk for development teams with Jan (middle) and moderator Martjin Vlek (@martjinvlek), Senior Director, Fusion Middleware, Oracle Nederland on the right.

Throughout the day, Ana Tomescu (@annatomsescu), with the assistance of another 12-year-old onsite, Fionn O’Broin, ran the main showcase section of the event, engaging visitors with our SaaS, Smart UX, and emerging technology demos and providing some goodies to those dropping by our demo station.

Fionn O'Broin demos the SaaS UX and chatbots

Fionn O'Broin explores some emerging technology UX and keeps an eye on his Pokémon GO pipeline in the Oracle Cloud.

I also checked out what the local sales folks had for attendees about the Oracle HCM Cloud user experience. I totally loved the show put on by Oracle Nederland Senior Sales Consultants Randy Lens (@RandyLens) and Nele Odegard (@neleodegard). They explained to the audience in simple terms what UX is and why it is important, and explored best practices in a fun, engaging and very real way with iPads and a competition!

Randy and Nele made it all about participation . . .

Randy and Nele rock the Oracle HCM Cloud experience! 

Hackathon: the Best of Oracle Partners and Oracle Cloud Services

The hackathon added a tremendous technical yet practical dimension to the event, covering a complete process from discovery to design to deployment of a digital solution using the integration power of Oracle Cloud services.

As I explored the hackathon I met representatives from AMIS, eProseed (@eproseed), Capgemini (@capgemini), Ordina (@ordina), and DigitasLBi (@digitaslbi). I was knocked out not only by the experience journey mapping driven by DigitasLBi (who now have an Amsterdam office) but also by the productivity of the development effort and the general atmosphere.

This great time lapse video of the hackathon will give you a flavor of the proceedings!

Customer Experience Journey Mapping

Journey experience mapping example from the hackathon

Example mobile screen built during the hackathon 

360 View of Hackathon

360 view of the hackathon. Check out the video about this and the rest of Cloud Day on YouTube. 

At the end of the day the hackathon rockstars took to the awesome stage to show off their latest release. What a platform, in the real sense of the word!

Hackathon participants on stage

The hackathon's partners take a bow at the end of the event.

Digital Marketing Meets Dutch Masters

It was great to check in with the Oracle NL people who brought this event to life and who worked the magic on outreach, people such as Business Development Manager Arianne Hageman (@ariannehageman) and Digital Marketing Manager Conny Groen in t Woud (@conny_groen). Throughout the day there was much vlogging and tweeting, as well as hand painted beermat portraits, a massive Tweet visualization, and more exciting engagement ideas we all can learn from.

Ana Tomescu shows off a hand painted portrait OBUG beermat from Oracle Cloud Day. It's how Dutch masters would have done selfies . . .

Many thanks to Arianne and the event team for inviting OAUX. We are looking forward to delivering more UX enablement in the Netherlands soon!

More Information? 

For more information on the Oracle Cloud Day 2016 in Amsterdam check out the YouTube videos and see the Usable Apps Twitter and Instagram accounts.

Mobile and PaaS4SaaS using Oracle ADF

Oracle Mobile Application Framework and Oracle Application Development Framework RDKs for mobile applications and PaaS4SaaS were explored at the event.

And, if you want to find out more about the RDK enablement for developers that I spoke about?

Then, see this blog post about the RDK experience at Oracle OpenWorld 2016.

RMAN BACKUP PERFORMANCE

Tom Kyte - Sun, 2016-12-11 13:26
Hi,guy! How can I improve my rman backup speed? Please more details! Thanks a lot!
Categories: DBA Blogs

Package Specification variable

Tom Kyte - Sun, 2016-12-11 13:26
Hi, I had defined a few global variables in the Package Specification. One of the main procedures initializes some values in those variables. In the main procedure we are calling different sub-procedures. But unfortunately some times global...
Categories: DBA Blogs

Partitioning Strategy

Tom Kyte - Sun, 2016-12-11 13:26
Hi Tom, This is the first time that I'm posting a question to you. I have been a huge fan of your answers and humor sometimes :) Keep it up !! Q) I have a schema that contains around 50 tables, 5 tables contain around 1.5 billion rows and rest 45 ...
Categories: DBA Blogs

how to fetch sqlid from old transaction_id

Tom Kyte - Sun, 2016-12-11 13:26
Hi, We have Goldengate 11gR2 for Unidirectional replication from Oracle(11.2.0.4) to Oracle(11.2.0.4) database. Both source and target are RAC environment. We received below warning message on Goldengate, letting us know why one of the extract ...
Categories: DBA Blogs

Operating System Block Size

Tom Kyte - Sun, 2016-12-11 13:26
Good Evening, I would like to bring up a topic that comes up every time I work on a brand new Unix database server, but I never do anything about it. I mention it to the system administrators, but basically get ignored. The reason I am not very ...
Categories: DBA Blogs

Group by function

Tom Kyte - Sun, 2016-12-11 13:26
Hi Tom , Please see the below query : select financial_transaction_nk , max(financial_transaction_dim_key) from FTD where FTD.financial_transaction_nk in (select financial_transaction_nk from FTD where financial_transaction_dim_key in (s...
Categories: DBA Blogs

Mirroring Oracle Database

Tom Kyte - Sun, 2016-12-11 13:26
My company is in Indonesia, with branches spread across all provinces. We use Oracle Database. All the branches must have an internet connection to connect to the application. But there is one branch in a very remote part of its province. Sometimes the interne...
Categories: DBA Blogs

Multiple DB instances

Tom Kyte - Sun, 2016-12-11 13:26
Dear Sir, how do I log in to / access a particular DB instance through connect / as sysdba when the server has multiple database instances (SIDs) like APP_QA, APP_DEV, APP_UAT? Thanks & Regards
Categories: DBA Blogs

Encode function

Tom Kyte - Sun, 2016-12-11 13:26
Hi, I'm trying to send SMS using the database, and my problem is with double encoding. I want to send Russian letters; English works fine for me. The only way to do that is to translate the text using http://www.freeformatter.com/url-encoder.html#ad...
Categories: DBA Blogs
